{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:32:26.466727Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the modern era of computing, the news ecosystem has transformed from traditional print media to social media outlets. Social media platforms allow us to consume news much faster and with less editorial oversight, which results in the spread of infodemic misinformation at an incredible pace and scale. Consequently, research on detecting misinformation in social media posts is becoming more important than ever before. In this paper, we present our approach using AraBERT (Transformer-based Model for Arabic Language Understanding) to predict 7 binary properties of an Arabic tweet about COVID-19. To train our classification models, we use the dataset provided by NLP4IF 2021. We ranked 5th in the Fighting the COVID-19 Infodemic task results with an F1 of 0.664.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In the modern era of computing, the news ecosystem has transformed from traditional print media to social media outlets. Social media platforms allow us to consume news much faster and with less editorial oversight, which results in the spread of infodemic misinformation at an incredible pace and scale. Consequently, research on detecting misinformation in social media posts is becoming more important than ever before. In this paper, we present our approach using AraBERT (Transformer-based Model for Arabic Language Understanding) to predict 7 binary properties of an Arabic tweet about COVID-19. To train our classification models, we use the dataset provided by NLP4IF 2021. We ranked 5th in the Fighting the COVID-19 Infodemic task results with an F1 of 0.664.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the past few years, various social media platforms, such as Twitter, Facebook, and Instagram, have become very popular since they facilitate the easy acquisition of information and provide a quick platform for information sharing (Vicario et al., 2016; Kumar et al., 2018). The work presented in this paper primarily focuses on Twitter. Twitter is a micro-blogging web service with over 330 million active users per month, and it has gained popularity as a major news source and information dissemination agent in recent years. Twitter provides ground-level information and helps in reaching out to people in need; thus, it plays an important role in aiding crisis management teams, as researchers have shown (Ntalla et al., 2015). The availability of unauthentic data on social media platforms has gained massive attention among researchers, as these platforms have become a hot-spot for sharing misinformation (Gorrell et al., 2019; Vosoughi et al., 2017). Infodemic misinformation has become an important issue due to its tremendous negative impact (Gorrell et al., 2019; Vosoughi et al., 2017; Zhou et al., 2018), and it has attracted increasing attention from researchers, journalists, politicians, and the general public. In terms of writing style, misinformation is written or published with the intent to mislead people and to damage the image of an agency, entity, or person, either for financial or political benefit (Zhou et al., 2018; Ghosh et al., 2018; Ruchansky et al., 2017; Shu et al., 2020). This paper is organized as follows: Section 2 describes the related work in this domain; Section 3 presents our methodology in detail; Section 4 discusses the evaluation of our proposed solution; and finally, the last section gives the conclusion and describes future work.",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "(Vicario et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 256,
"end": 275,
"text": "Kumar et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 722,
"end": 743,
"text": "(Ntalla et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 905,
"end": 927,
"text": "(Gorrell et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 928,
"end": 950,
"text": "Vosoughi et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 1044,
"end": 1066,
"text": "(Gorrell et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 1067,
"end": 1089,
"text": "Vosoughi et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 1090,
"end": 1108,
"text": "Zhou et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 1413,
"end": 1432,
"text": "(Zhou et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 1433,
"end": 1452,
"text": "Ghosh et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 1453,
"end": 1476,
"text": "Ruchansky et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 1477,
"end": 1494,
"text": "Shu et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are various techniques used to address the problem of infodemic misinformation on online social media, especially in English content. This section briefly summarizes the work in this field. Allcott et al. (2017) presented a quantitative study of the impact of misinformation on social media in the 2016 U.S. presidential election and its effect upon U.S. voters.",
"cite_spans": [
{
"start": 194,
"end": 215,
"text": "Allcott et al. (2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "The authors investigated authentic and unauthentic URLs related to misinformation in the BuzzFeed dataset. Shu et al. (2019) investigated a way to automate the process through hashtag recurrence. The authors also presented a comprehensive review of detecting misinformation on social media, false news classifications based on psychology and social concepts, and existing algorithms from a data mining perspective. Ghosh et al. (2018) investigated the impact of web-based social networking on political decisions. Substantial research (Zhou et al., 2018; Allcott et al., 2017; Zubiaga et al., 2018) has been done in the context of detecting political-news-based articles. The authors investigated the effect of various political groups related to the discussion of any misinformation as an agenda. They also explored the Twitter data of six Venezuelan government officials with the goal of investigating bot collaboration. Their findings suggest that political bots in Venezuela tend to imitate members of political groups or ordinary citizens. In one of the studies, Zhou et al. (2018) investigated the ability of social media to aggregate the judgments of a large community of users. In their further investigation, they explored machine learning approaches with the goal of developing better rumor detection. They investigated the challenges posed by the spread of rumors, rumor classification, and deception for the advancement of such frameworks. They also investigated the use of such strategies for creating systems that can help individuals make decisions about the integrity of data gathered from various social media platforms. In one of the studies, Jwa et al. (2019) explored an approach to automatic misinformation detection. They used the Bidirectional Encoder Representations from Transformers (BERT) model to detect misinformation by analyzing the relationship between the headline and the body text of a news story. Their results improve the F-score by 0.14 over existing state-of-the-art models. Williams et al. (2020) utilized BERT and RoBERTa models to identify claims in social media text that a professional fact-checker should review. For the English language, they fine-tuned a RoBERTa model and added an extra mean pooling layer and a dropout layer to enhance generalizability to unseen text. For the Arabic language, they fine-tuned Arabic-language BERT models and demonstrated the use of back-translation to amplify the minority class and balance the dataset. Hussein et al. (2020) presented their approach to analyzing the check-worthiness of Arabic information on Twitter. To train the classification model, they annotated for worthiness a dataset of 5000 Arabic tweets corresponding to 4 high-impact news events of 2020 around the world, in addition to a dataset of 1500 tweets provided by CLEF 2020. They proposed two models to classify the worthiness of Arabic tweets: a BI-LSTM model and a CNN-LSTM model. Results show that the BI-LSTM model better captures the worthiness of tweets.",
"cite_spans": [
{
"start": 114,
"end": 131,
"text": "Shu et al. (2019)",
"ref_id": "BIBREF18"
},
{
"start": 423,
"end": 442,
"text": "Ghosh et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 544,
"end": 563,
"text": "(Zhou et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 564,
"end": 585,
"text": "Allcott et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 586,
"end": 607,
"text": "Zubiaga et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 1113,
"end": 1131,
"text": "Zhou et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 1784,
"end": 1801,
"text": "Jwa et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 2153,
"end": 2175,
"text": "Williams et al. (2020)",
"ref_id": "BIBREF22"
},
{
"start": 2620,
"end": 2641,
"text": "Hussein et al. (2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ahmad Hussein 1 , Nada Ghneim 2 , and Ammar Joukhadar 1",
"sec_num": null
},
{
"text": "In this section, we present our methodology by explaining the different steps of building the models, all of which share the same architecture: Data Set, Data Preprocessing, AraBERT System Architecture, and Model Training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We used a dataset of 2556 tweets provided by NLP4IF 2021 (Shaar et al., 2021), which includes tweets about COVID-19. Besides the tweet text, the dataset includes the tweet ID. Each tweet is annotated with binary properties about COVID-19: whether it contains a verifiable claim (Q1), whether it appears to contain false information (Q2), whether it may be of interest to the general public (Q3), whether it is harmful (Q4), whether it needs verification (Q5), whether it is harmful to society (Q6), and whether it requires the attention of government entities (Q7). Each question has a Yes/No (binary) annotation. However, the answers to Q2, Q3, Q4 and Q5 are all \"nan\" if the answer to Q1 is No.",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Shaar et al., 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "3.1"
},
{
"text": "Tweets have certain special features, i.e., emojis, emoticons, hashtags and user mentions, coupled with typical web constructs, such as email addresses and URLs, and other noisy sources, such as phone numbers, percentages, money amounts, times, dates, and generic numbers. In this work, a set of pre-processing procedures, tailored to translate tweets into more conventional sentences, is adopted. Most of the noisy entities are normalized because their particular instances generally do not contribute to the identification of the class of a sentence. For dates, email addresses, money amounts, numbers, percentages, phone numbers and times, this process is performed by using the ekphrasis tool (https://github.com/cbaziotis/ekphrasis) (Baziotis et al., 2017), which makes it possible to identify such entities via regular expressions and replace them with normalized forms.",
"cite_spans": [
{
"start": 727,
"end": 750,
"text": "(Baziotis et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "3.2"
},
{
"text": "Among modern language modeling architectures, AraBERT (Antoun et al., 2020) is one of the most popular for the Arabic language. Its generalization capability is such that it can be adapted to different down-stream tasks according to different needs, be it NER, relation extraction, question answering or sentiment analysis. The core of the architecture is trained on particularly large text corpora and, consequently, the parameters of the innermost layers of the architecture are frozen. The outermost layers are instead those that adapt to the task and on which the so-called fine-tuning is performed. An overview is shown in Figure 1.",
"cite_spans": [
{
"start": 54,
"end": 75,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 672,
"end": 680,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "AraBERT System Architecture",
"sec_num": "3.3"
},
{
"text": "Going into detail, one can distinguish two main AraBERT architectures, the base and the large. The architectures differ mainly in four fundamental aspects: the number of hidden layers in the transformer encoder, also known as transformer blocks (12 vs. 24), the number of attention heads, also known as self-attention heads (Vaswani et al., 2017) (12 vs. 16), the hidden size of the feed-forward networks (768 vs. 1024) and finally the maximum sequence length parameter (512 vs. 1024), i.e., the maximum accepted input vector size. In this work, the base architecture is used, and the corresponding hyper-parameters are reported in Table 2.",
"cite_spans": [
{
"start": 322,
"end": 356,
"text": "(Vaswani et al., 2017) (12 vs. 16)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 631,
"end": 638,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "AraBERT System Architecture",
"sec_num": "3.3"
},
{
"text": "In addition, the AraBERT architecture employs two special tokens: [SEP] for segment separation and [CLS] for classification, used as the first input token for any classifier, representing the whole sequence and from which an output vector of the same size as the hidden size H is derived. Hence, the output of the transformers, i.e., the final hidden state of this first token used as input, can be denoted as a vector C \u2208 \u211d^H. The vector C is used as input of the final fully-connected classification layer. Given the parameter matrix W \u2208 \u211d^(K\u00d7H) of the classification layer, where K is the number of categories, the probability of each category P can be calculated by the softmax function as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AraBERT System Architecture",
"sec_num": "3.3"
},
{
"text": "P = softmax(CW^T)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AraBERT System Architecture",
"sec_num": "3.3"
},
{
"text": "The whole classification model has been trained in two steps, involving firstly the pre-training of the AraBERT language model and then the fine-tuning of the outermost classification layer. The AraBERTv0.2-base (Antoun et al., 2020) is pretrained on five corpora: the unshuffled and filtered OSCAR corpus, the Arabic Wikipedia dump, the 1.5B-word Arabic corpus, the OSIAN corpus and Assafir news articles, with a final corpus size of about 77 GB. The cased version was chosen, being more suitable for the proposed pre-processing method.",
"cite_spans": [
{
"start": 212,
"end": 233,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.4"
},
{
"text": "The fine-tuning of the model was performed by using labeled tweets comprising the training set provided for the shared task. In particular, the fully connected classification layer was learned accordingly. During training, the loss function used was categorical cross-entropy. For this study, the hyper-parameters used are shown in Table 1. The maximum sequence length was reduced to 128, due to the short length of the tweets.",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.4"
},
{
"text": "To validate the results, we used the NLP4IF tweets dataset. The training and testing sets contain 90% and 10% of the total samples, respectively. We further split the training set into 90% for training and 10% for validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "4"
},
{
"text": "In this section, we introduce the different evaluation experiments of our implemented model on the test data. In Table 3, we present the accuracy, precision, recall, and F1-score of each evaluation experiment on the test dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "4"
},
{
"text": "Results show that our model can detect whether a tweet is \"harmful to society\" or \"requires attention of government entities\" with high accuracy (90% and 92% respectively), whether a tweet \"may be of interest to the general public\" or \"contains false information\" with very good accuracy (84% and 86% respectively), and whether a tweet is \"harmful\", \"needs verification\", or \"verifiable\" with fairly good accuracy (76%, 75%, and 74% respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "4"
},
{
"text": "In Table 4, we present the evaluation results of our implemented models, which were computed by the organizers based on our submitted predicted labels for the blind test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "4"
},
{
"text": "The objective of this work was the introduction of an effective approach based on the AraBERT language model for fighting the COVID-19 infodemic in tweets. It was arranged in the form of a two-step pipeline, where the first step involved a series of pre-processing procedures to transform Twitter jargon, including emojis and emoticons, into plain text, and the second step exploited a version of AraBERT, which was pre-trained on plain text, to fine-tune and classify the tweets with respect to their labels. Future work will be directed at investigating the specific contributions of each pre-processing procedure, as well as other settings associated with the tuning, so as to further characterize the language model for the purposes of fighting the COVID-19 infodemic. Finally, the proposed approach will also be tested and assessed with respect to other datasets, languages and social media sources, such as Facebook posts, in order to further estimate its applicability and generalizability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "1 Faculty of Information Technology Engineering, Damascus University, Damascus, Syria. ahmadhussein.ah7@gmail.com, ajoukhadar@el-ixir.com. 2 Faculty of Informatics & Communication Engineering, Arab International University, Damascus, Syria. n-ghneim@aiu.edu.sy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detection of online fake news using N-gram analysis and machine learning techniques",
"authors": [
{
"first": "Hadeer",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Issa",
"middle": [],
"last": "Traore",
"suffix": ""
},
{
"first": "Sherif",
"middle": [],
"last": "Saad",
"suffix": ""
}
],
"year": 2017,
"venue": "International conference on intelligent, secure, and dependable systems in distributed and cloud environments",
"volume": "",
"issue": "",
"pages": "127--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hadeer Ahmed, Issa Traore, Sherif Saad. 2017. Detection of online fake news using N-gram analysis and machine learning techniques. In International conference on intelligent, secure, and dependable systems in distributed and cloud environments. Springer, Cham, pages 127-138.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Social Media and Fake News in the 2016 Election",
"authors": [
{
"first": "Hunt",
"middle": [],
"last": "Allcott",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gentzkow",
"suffix": ""
}
],
"year": 2017,
"venue": "In Journal of Economic Perspectives",
"volume": "31",
"issue": "2",
"pages": "211--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hunt Allcott, and Matthew Gentzkow. 2017. Social Media and Fake News in the 2016 Election. In Journal of Economic Perspectives. 31 (2): 211-36.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "AraBERT: Transformer-based Model for Arabic Language Understanding",
"authors": [
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": ""
},
{
"first": "Fady",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "LREC 2020 Workshop Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "11--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based Model for Arabic Language Understanding. In LREC 2020 Workshop Language Resources and Evaluation Conference. pages 11-16.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Pelekis",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Doulkeridis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016",
"volume": "",
"issue": "",
"pages": "747--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulkeridis. 2017. DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016. San Diego, CA, USA, 16-17 June 2016; pages 747-754.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A survey on fake news and rumour detection techniques",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Bondielli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Marcelloni",
"suffix": ""
}
],
"year": 2019,
"venue": "Inf. Sci",
"volume": "497",
"issue": "",
"pages": "38--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Bondielli, and F. Marcelloni. 2019. A survey on fake news and rumour detection techniques. In Inf. Sci. 497. pages 38-55.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "CLEF 2020 Working Notes",
"authors": [
{
"first": "Linda",
"middle": [],
"last": "Cappellato",
"suffix": ""
},
{
"first": "Carsten",
"middle": [],
"last": "Eickhoff",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "N\u00e9v\u00e9ol",
"suffix": ""
}
],
"year": 2020,
"venue": "CEUR Workshop Proceedings, CEUR-WS.org",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aur\u00e9lie N\u00e9v\u00e9ol. 2020. CLEF 2020 Working Notes. In CEUR Workshop Proceedings, CEUR-WS.org.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards automatic fake news classification",
"authors": [
{
"first": "Souvick",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Chirag",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Information Science and Technology",
"volume": "55",
"issue": "1",
"pages": "805--807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Souvick Ghosh, and Chirag Shah. 2018. Towards automatic fake news classification. Proceedings of the Association for Information Science and Technology. 55(1): pages 805-807.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours",
"authors": [
{
"first": "Genevieve",
"middle": [],
"last": "Gorrell",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "845--854",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 845-854.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of CheckThat! 2020 Arabic: Automatic Identification and Verification of Claims in Social Media",
"authors": [
{
"first": "Maram",
"middle": [],
"last": "Hasanain",
"suffix": ""
},
{
"first": "Fatima",
"middle": [],
"last": "Haouari",
"suffix": ""
},
{
"first": "Reem",
"middle": [],
"last": "Suwaileh",
"suffix": ""
},
{
"first": "Zien",
"middle": [],
"last": "Sheikh Ali",
"suffix": ""
},
{
"first": "Bayan",
"middle": [],
"last": "Hamdan",
"suffix": ""
},
{
"first": "Tamer",
"middle": [],
"last": "Elsayed",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [
"Da San"
],
"last": "Martino",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.07997"
]
},
"num": null,
"urls": [],
"raw_text": "Maram Hasanain, Fatima Haouari, Reem Suwaileh, Zien Sheikh Ali, Bayan Hamdan, Tamer Elsayed, Alberto Barr\u00f3n-Cede\u00f1o, Giovanni Da San Martino, and Preslav Nakov. 2020. Overview of CheckThat! 2020 Arabic: Automatic Identification and Verification of Claims in Social Media. In arXiv:2007.07997.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "DamascusTeam at CheckThat! 2020: Check worthiness on Twitter with hybrid CNN and RNN models",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Hussein",
"suffix": ""
},
{
"first": "Abdulkarim",
"middle": [],
"last": "Hussein",
"suffix": ""
},
{
"first": "Nada",
"middle": [],
"last": "Ghneim",
"suffix": ""
},
{
"first": "Ammar",
"middle": [],
"last": "Joukhadar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmad Hussein, Abdulkarim Hussein, Nada Ghneim, and Ammar Joukhadar. 2020. DamascusTeam at CheckThat! 2020: Check worthiness on Twitter with hybrid CNN and RNN models. In Cappellato et al. (2020).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "exBAKE: Automatic fake news detection model based on Bidirectional Encoder Representations from Transformers (BERT)",
"authors": [
{
"first": "Heejung",
"middle": [],
"last": "Jwa",
"suffix": ""
},
{
"first": "Dongsuk",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Kinam",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Jang Mook",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Heuiseok",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heejung Jwa, Dongsuk Oh, Kinam Park, Jang Mook Kang, and Heuiseok Lim. 2019. exBAKE: Automatic fake news detection model based on Bidirectional Encoder Representations from Transformers (BERT).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "False Information on Web and Social Media: A Survey",
"authors": [
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srijan Kumar, and Neil Shah. 2018. False Information on Web and Social Media: A Survey. In arXiv:1804.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Twitter as an instrument for crisis response: The Typhoon Haiyan case study",
"authors": [
{
"first": "Athanasia",
"middle": [],
"last": "Ntalla",
"suffix": ""
},
{
"first": "Stavros",
"middle": [
"T"
],
"last": "Ponis",
"suffix": ""
}
],
"year": 2015,
"venue": "The 12th International Conference on Information Systems for Crisis Response and Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Athanasia Ntalla, and Stavros T. Ponis. (2015). Twitter as an instrument for crisis response: The Typhoon Haiyan case study. In The 12th International Conference on Information Systems for Crisis Response and Management.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "CSI: A Hybrid Deep Model for Fake News Detection",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Ruchansky",
"suffix": ""
},
{
"first": "Sungyong",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM '17)",
"volume": "",
"issue": "",
"pages": "797--806",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. CSI: A Hybrid Deep Model for Fake News Detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM '17). Association for Computing Machinery, New York, NY, USA, pages 797-806.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Findings of the NLP4IF-2021 Shared Task on Fighting the COVID-19 Infodemic and Censorship Detection",
"authors": [
{
"first": "Shaden",
"middle": [],
"last": "Shaar",
"suffix": ""
},
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [
"Da San"
],
"last": "Martino",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "Wajdi",
"middle": [],
"last": "Zaghouani",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Feldman",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fourth Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaden Shaar, Firoj Alam, Giovanni Da San Martino, Alex Nikolov, Wajdi Zaghouani, Preslav Nakov, and Anna Feldman. 2021. Findings of the NLP4IF-2021 Shared Task on Fighting the COVID-19 Infodemic and Censorship Detection. In Proceedings of the Fourth Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The diffusion of misinformation on social media",
"authors": [
{
"first": "Jieun",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Lian",
"middle": [],
"last": "Jian",
"suffix": ""
}
],
"year": 2018,
"venue": "Comput. Hum. Behav",
"volume": "83",
"issue": "",
"pages": "278--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieun Shin, Lian Jian, Kevin Driscoll, and Franois Bar. 2018. The diffusion of misinformation on social media. Comput. Hum. Behav. 83, C (June 2018), pages 278-287.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Mahudeswaran",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"BigData.8.171-188.10.1089/big.2020.0062"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, Huan Liu. 2020. FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media. In Big Data. 8. 171-188. 10.1089/big.2020.0062.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Defend: Explainable fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "KDD 2019 -Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "395--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019. Defend: Explainable fake news detection. In KDD 2019 -Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 395- 405.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Łukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukas z Kaiser, Illia Polosukhin. 2017. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The spreading of misinformation online",
"authors": [
{
"first": "Michela",
"middle": [],
"last": "Del Vicario",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Bessi",
"suffix": ""
},
{
"first": "Fabiana",
"middle": [],
"last": "Zollo",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Scala",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Caldarelli",
"suffix": ""
},
{
"first": "H",
"middle": [
"Eugene"
],
"last": "Stanley",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Quattrociocchi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "",
"issue": "",
"pages": "554--559",
"other_ids": {
"DOI": [
"10.1073/pnas.1517441113"
]
},
"num": null,
"urls": [],
"raw_text": "Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi. 2016. The spreading of misinformation online. In Proceedings of the National Academy of Sciences Jan 2016, (3) pages 554-559; DOI: 10.1073/pnas.1517441113.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Rumor Gauge: Predicting the Veracity of Rumors on Twitter",
"authors": [
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [
"'Neo'"
],
"last": "Mohsenvand",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Trans. Knowl. Discov. Data",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soroush Vosoughi, Mostafa 'Neo' Mohsenvand, and Deb Roy. 2017. Rumor Gauge: Predicting the Veracity of Rumors on Twitter. In ACM Trans. Knowl. Discov. Data 11, 4, Article 50 (August 2017), 36 pages.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of claims using transformerbased models",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Valerie",
"middle": [],
"last": "Novak",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Williams, Paul Rodrigues, and Valerie Novak. 2020. Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of claims using transformer- based models. In Cappellato et al. (2020).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Zhou, and Reza Zafarani. 2018. A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities. In arXiv:arXiv-1812.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Detection and Resolution of Rumours in Social Media",
"authors": [
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
}
],
"year": 2018,
"venue": "In ACM Computing Surveys",
"volume": "51",
"issue": "2",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018. Detection and Resolution of Rumours in Social Media.In ACM Computing Surveys. 51(2): pages 1-36.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "AraBERT architecture overview.",
"num": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Classifier</td><td>Yes</td><td>No</td><td>Not Sure</td></tr><tr><td>Q1</td><td colspan=\"2\">1926 610</td><td>0</td></tr><tr><td>Q2</td><td>376</td><td colspan=\"2\">1545 635</td></tr><tr><td>Q3</td><td colspan=\"2\">1895 22</td><td>639</td></tr><tr><td>Q4</td><td>351</td><td colspan=\"2\">1566 639</td></tr><tr><td>Q5</td><td>936</td><td>990</td><td>630</td></tr><tr><td>Q6</td><td colspan=\"2\">2075 459</td><td>0</td></tr><tr><td>Q7</td><td colspan=\"2\">2208 328</td><td>0</td></tr></table>",
"text": "shows the statistics of the class labels for each property in the dataset.",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Hyper-parameters of the model",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Evaluation Experiment</td><td>Recall</td><td>Precision</td><td>F1-score</td><td>Accuracy</td></tr><tr><td>Q1</td><td>73%</td><td>75%</td><td>70%</td><td>74%</td></tr><tr><td>Q2</td><td>87%</td><td>87%</td><td>87%</td><td>86%</td></tr><tr><td>Q3</td><td>83%</td><td>84%</td><td>84%</td><td>84%</td></tr><tr><td>Q4</td><td>76%</td><td>76%</td><td>76%</td><td>76%</td></tr><tr><td>Q5</td><td>74%</td><td>76%</td><td>71%</td><td>75%</td></tr><tr><td>Q6</td><td>91%</td><td>90%</td><td>90%</td><td>90%</td></tr><tr><td>Q7</td><td>93%</td><td>92%</td><td>90%</td><td>92%</td></tr></table>",
"text": "The evaluation results of our models on the blind test data.",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "The evaluation results of our models on the test data.",
"num": null
}
}
}
}