|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:32:13.046004Z" |
|
}, |
|
"title": "iCompass at NLP4IF-2021-Fighting the COVID-19 Infodemic", |
|
"authors": [ |
|
{ |
|
"first": "Wassim", |
|
"middle": [], |
|
"last": "Henia", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Oumayma Rjab iCompass", |
|
"location": { |
|
"country": "Tunisia" |
|
} |
|
}, |
|
"email": "wassim.henia@etudiant-isi.utm.tn" |
|
}, |
|
{ |
|
"first": "Hatem", |
|
"middle": [], |
|
"last": "Haddad", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper provides a detailed overview of the system and its outcomes, which were produced as part of the NLP4IF Shared Task on Fighting the COVID-19 Infodemic at NAACL 2021. This task is accomplished using a variety of techniques. We used state-of-theart contextualized text representation models that were fine-tuned for the downstream task in hand. ARBERT, MARBERT,AraBERT, Arabic ALBERT and BERT-base-arabic were used. According to the results, BERT-basearabic had the highest 0.748 F1 score on the test set.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper provides a detailed overview of the system and its outcomes, which were produced as part of the NLP4IF Shared Task on Fighting the COVID-19 Infodemic at NAACL 2021. This task is accomplished using a variety of techniques. We used state-of-theart contextualized text representation models that were fine-tuned for the downstream task in hand. ARBERT, MARBERT,AraBERT, Arabic ALBERT and BERT-base-arabic were used. According to the results, BERT-basearabic had the highest 0.748 F1 score on the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, there has been a massive increase in the number of people using social media (such as Facebook and Twitter) to share, post information, and voice their thoughts. The increasing number of users has resulted in the development of an enormous number of posts on Twitter. Although social media networks have enhanced information exchange, they have also created a space for antisocial and illegal activities such as spreading false information, rumors, and abuse. These anti-social behaviors intensify in a massive way during crisis cases, creating a toxic impact on society, either purposely or accidentally. The COVID-19 pandemic is one such situation that has impacted people's lives by locking them down to their houses and causing them to turn to social media. Since the beginning of the pandemic, false information concerning Covid-19 has circulated in a variety of languages, but the spread in Arabic is especially harmful due to a lack of quality reporting. For example, the tweet \" 40", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "# # \" \" # 40 %100", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "#\" is translated as follows: \"Good evening, good news, 40 seconds, the owner of the initiative to gather scientists to find a treatment against Corona announces on the air that an entire team, including a French doctor named \"Raoult\", discovered that the malaria treatment is the one that treats the new Corona, and it has been tried on 40 patients\". This tweet contains false information that is harmful to the society and people believing it could be faced with real danger. Basically, we are not only fighting the coronavirus, but there is a war against infodemic which makes it crucial to identify this type of false information. For instance, the NLP4IF Task 2 is fighting the COVID-19 Infodemic by predicting several binary properties of a tweet about COVID-19 as follows: whether it is harmful, whether it contains a verifiable claim, whether it may be of interest to the general public, whether it appears to contain false information, whether it needs verification or/and requires attention. This is why we performed a multilabel classification using Arabic pretrained models including ALBERT Arabic (Lan et al., 2019) , BERT-base-arabic (Devlin et al., 2018) , AraBERT (Antoun et al., 2020) , ARBERT(Abdul-Mageed et al., 2020), and MARBERT (Abdul-Mageed et al., 2020) with different hyper-parameters. The paper is structured as follows: Section 2 provides a concise description of the used dataset. Section 3 describes the used systems and the experimental setup to build models for Fighting the COVID-19 Infodemic. Section 4 presents the obtained results. Section 5 presents the official submission results. Finally, section 6 concludes and points to possible directions for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1109, |
|
"end": 1127, |
|
"text": "(Lan et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1147, |
|
"end": 1168, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1179, |
|
"end": 1200, |
|
"text": "(Antoun et al., 2020)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The provided training dataset of the competition, fighting the COVID-19 Infodemic Arabic, consists of 2536 tweets and the development dataset con-sists of 520 tweets (Shaar et al., 2021) . The data was labelled as yes/no questions answering seven questions: Questions 2,3,4 and 5 will be labelled as nan if the answer to the first question is no. The tweets are in Modern Standard Arabic (MSA) and no other Arabic dialect was observed. Data was preprocessed by removing emojis, URLs, punctuation, duplicated characters in a word, diacritics, and any non Arabic words. We present an example sentence before and after preprocessing:", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 186, |
|
"text": "(Shaar et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 Before preprocessing: : # -Mageed et al., 2020) and Arabic BERT (Safaya et al., 2020) . Added-on, we used the xlarge version Arabic Albert 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 49, |
|
"text": "-Mageed et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 66, |
|
"end": 87, |
|
"text": "(Safaya et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset description", |
|
"sec_num": "2" |
|
}, |
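A minimal sketch of the preprocessing described in Section 2 (removing emojis, URLs, punctuation, duplicated characters in a word, diacritics, and non-Arabic tokens). The regular expressions, Unicode ranges, and the function name preprocess_tweet are illustrative assumptions, not the authors' exact pipeline.

```python
import re

# Illustrative patterns; the exact rules used by the authors are not published.
URL_RE = re.compile(r"https?://\S+|www\.\S+")
DIACRITICS_RE = re.compile(r"[\u0617-\u061A\u064B-\u0652]")   # Arabic diacritics (tashkeel)
NON_ARABIC_RE = re.compile(r"[^\u0621-\u064A\s]")             # keep Arabic letters and spaces only
REPEAT_RE = re.compile(r"(.)\1{2,}")                          # runs of 3+ identical characters

def preprocess_tweet(text: str) -> str:
    """Clean a tweet following the steps outlined in the dataset description."""
    text = URL_RE.sub(" ", text)         # remove URLs
    text = DIACRITICS_RE.sub("", text)   # remove diacritics
    text = NON_ARABIC_RE.sub(" ", text)  # remove emojis, punctuation, and non-Arabic words
    text = REPEAT_RE.sub(r"\1", text)    # collapse duplicated characters in a word
    return re.sub(r"\s+", " ", text).strip()
```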
|
{ |
|
"text": "AraBERT (Antoun et al., 2020) , was trained on 70 million sentences, equivalent to 24 GB of text, covering news in Arabic from different media sources.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 29, |
|
"text": "(Antoun et al., 2020)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AraBERT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "It achieved state-of-the-art performances on three Arabic tasks including Sentiment Analysis. Yet, the pre-training dataset was mostly in MSA and therefore can't handle dialectal Arabic as much as official Arabic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AraBERT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "ARBERT (Abdul-Mageed et al., 2020) is a largescale pretrained language model using BERT base's architecture and focusing on MSA. It was trained on 61 GB of text gathered from books, news articles, crawled data and the Arabic Wikipedia. The vocabulary size was equal to 100k WordPieces which is the largest compared to AraBERT (60k for Arabic out of 64k) and mBERT (5k for Arabic out of 110k).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ARBERT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "MARBERT, also by (Abdul-Mageed et al., 2020), is a large-scale pretrained language model using BERT base's architecture and focusing on the various Arabic dialects. It was trained on 128 GB of Arabic Tweets. The authors chose to keep the Tweets that have at least three Arabic words. Therefore, Tweets that have three or more Arabic words and some other non-Arabic words are kept. This is because dialects are often times mixed with other foreign languages. Hence, the vocabulary size is equal to 100k WordPieces. MARBERT enhances the language variety as it focuses on representing the previously underrepresented dialects and Arabic variants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MARBERT", |
|
"sec_num": "3.3" |
|
}, |
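A minimal sketch of the tweet-filtering rule described above for MARBERT's pretraining corpus (a tweet is kept only if it contains at least three Arabic words, regardless of additional non-Arabic tokens). The Arabic-letter heuristic and the function name keep_tweet are assumptions for illustration.

```python
import re

# Heuristic: an Arabic word is a run of characters in the basic Arabic letter range.
ARABIC_WORD_RE = re.compile(r"[\u0621-\u064A]+")

def keep_tweet(tweet: str, min_arabic_words: int = 3) -> bool:
    """Return True if the tweet contains at least `min_arabic_words` Arabic words."""
    return len(ARABIC_WORD_RE.findall(tweet)) >= min_arabic_words
```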
|
{ |
|
"text": "Arabic ALBERT 2 by (KUIS-AI-Lab) models were pretrained on 4.4 Billion words: Arabic version of OSCAR (unshuffled version of the corpus) filtered from Common Crawl and Recent dump of Arabic Wikipedia. Also, the corpus and vocabulary set are not restricted to MSA, but contain some dialectical Arabic too.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic ALBERT", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Arabic BERT (Safaya et al., 2020 ) is a set of BERT language models that consists of four models of different sizes trained using masked language modeling with whole word masking (Devlin et al., 2018) . Using a corpus that consists of the unshuffled version of OSCAR data (Ortiz Su\u00e1rez et al., 2020) and a recent data dump from Wikipedia, which sums up to 8.2B words, a vocabulary set of 32,000 Wordpieces was constructed. The final version of corpus contains some non-Arabic words inlines. The corpus and the vocabulary set are not restricted to MSA, they contain some dialectical (spoken) Arabic too, which boosted models performance in terms of data from social media platforms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 32, |
|
"text": "(Safaya et al., 2020", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 200, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 299, |
|
"text": "(Ortiz Su\u00e1rez et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic BERT", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We use these pretrained language models and build upon them to obtain our final models. Other than outperforming previous techniques, huge amounts of unlabelled text have been used to train general purpose models. Fine-tuning them on much smaller annotated datasets achieves good results thanks to the knowledge gained during the pretraining phase, which is expensive especially in terms of computational power. Hence, given our relatively small dataset, we chose to fine-tune these pretrained models. The fine-tuning actually consists of adding an untrained layer of neurons on top of the pretrained model and only tweaking the weights of the last layers to adjust them to the new labelled dataset. We chose to train our models on a Google Cloud GPU using Google Colaboratory. The average training time of one model is around 10 minutes. We experimented with Arabic ALBERT, Arabic BERT, AraBERT, ARBERT and MARBERT with different hyperparameters. The final model that we used to make the submission is a model based on BERT-base-arabic, trained for 10 epochs with a learning rate of 5e-5, a batch size of 32 and max sequence length of 128.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-tuning", |
|
"sec_num": "3.6" |
|
}, |
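A minimal sketch of the fine-tuning setup described above, using the Hugging Face Transformers library with the reported hyperparameters (10 epochs, learning rate 5e-5, batch size 32, maximum sequence length 128) and the seven yes/no questions as a multi-label target. The checkpoint name asafaya/bert-base-arabic, the placeholder data, and the use of the Trainer API are assumptions rather than the authors' exact code; this sketch also updates all layers, whereas the paper describes tweaking only the last layers.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "asafaya/bert-base-arabic"  # assumed checkpoint for BERT-base-arabic
NUM_LABELS = 7                           # the seven yes/no questions of the shared task

class TweetDataset(Dataset):
    """Tokenized tweets paired with their 7 binary labels (floats, for BCE loss)."""
    def __init__(self, texts, labels, tokenizer, max_length=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_length)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx], dtype=torch.float)
        return item

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS,
    problem_type="multi_label_classification")  # sigmoid outputs + BCE loss per label

# Placeholder data; in practice these come from the preprocessed shared-task files.
train_texts, train_labels = ["..."], [[0.0] * NUM_LABELS]
train_dataset = TweetDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(
    output_dir="bert-base-arabic-infodemic",
    num_train_epochs=10,
    learning_rate=5e-5,
    per_device_train_batch_size=32,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

# At inference time, apply a sigmoid to the logits and threshold each question at 0.5.
```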
|
{ |
|
"text": "We have validated our models through the development dataset as mentioned in the data section. The results of all models were close but the BERT-basearabic achieved the best results performing 78.27% F1 score. For reference, and to compare with other models, we also showcase the results obtained with ARBERT, AraBERT, and Arabic ALBERT in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 348, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Development dataset results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 The best ARBERT model was achieved using 2e-5 learning rate, 32 batch size, 10 epochs, 128 max length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development dataset results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 The best MARBERT model was achieved using 6e-5 learning rate, 32 batch size, 10 epochs, 128 max length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development dataset results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 The best AraBERT model was achieved using 4e-5 learning rate, 32 batch size, 10 epochs, 128 max length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development dataset results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 The best ALBERT Arabic model was achieved using 2e-5 learning rate, 16 batch size, 8 epochs, 128 max length. The result of all the models used are very close. However, bert-base-arabic outperformed all other models. This may be due to the pretrained data for bert-base-arabic. The final version has some non-Arabic words inlines. Also, the corpus of bertbase-arabic and vocabulary set are not restricted to MSA, they contain some dialectical Arabic too which can boost the model performance in terms of data from social media. Table 2 reviews the official results of iCompass system against the top three ranked systems. Table 3 : Official Results for each classifier as reported by the task organisers (Shaar et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 705, |
|
"end": 725, |
|
"text": "(Shaar et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 529, |
|
"end": 536, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 630, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Development dataset results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "that BERT-base-arabic outperforms all of the previously listed models in terms of overall performance, and was chosen for the final submission. Future work will include developing larger contextualized pretrained models and improving the current COVID-19 Infodemic Detection .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Official submission results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://t.co/6MEMHFMQj2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/KUIS-AI-Lab/Arabic-ALBERT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Arbert & marbert: Deep bidirectional transformers for arabic", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2101.01785" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Abdul-Mageed, AbdelRahim Elmadany, and El Moatez Billah Nagoudi. 2020. Arbert & marbert: Deep bidirectional transformers for arabic. arXiv preprint arXiv:2101.01785.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Arabert: Transformer-based model for arabic language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fady", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.00104" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. Arabert: Transformer-based model for arabic language understanding. arXiv preprint arXiv:2003.00104.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Albert: A lite bert for self-supervised learning of language representations", |
|
"authors": [ |
|
{ |
|
"first": "Zhenzhong", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingda", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.11942" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A monolingual approach to contextualized word embeddings for mid-resource languages", |
|
"authors": [ |
|
{ |
|
"first": "Pedro Javier Ortiz", |
|
"middle": [], |
|
"last": "Su\u00e1rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Romary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1703--1714", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Javier Ortiz Su\u00e1rez, Laurent Romary, and Beno\u00eet Sagot. 2020. A monolingual approach to contextual- ized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703-1714, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Safaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moutasem", |
|
"middle": [], |
|
"last": "Abdullatif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Safaya, Moutasem Abdullatif, and Deniz Yuret. 2020. Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Findings of the NLP4IF-2021 shared task on fighting the COVID-19 infodemic and censorship detection", |
|
"authors": [ |
|
{ |
|
"first": "Shaden", |
|
"middle": [], |
|
"last": "Shaar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Firoj", |
|
"middle": [], |
|
"last": "Alam", |
|
"suffix": "" |
|
}, |
|
{

"first": "Giovanni",

"middle": [],

"last": "Da San Martino",

"suffix": ""

},

{

"first": "Alex",

"middle": [],

"last": "Nikolov",

"suffix": ""

},

{

"first": "Wajdi",

"middle": [],

"last": "Zaghouani",

"suffix": ""

},

{

"first": "Preslav",

"middle": [],

"last": "Nakov",

"suffix": ""

},

{

"first": "Anna",

"middle": [],

"last": "Feldman",

"suffix": ""

}
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Fourth Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, NLP4IF@NAACL' 21, Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaden Shaar, Firoj Alam, Giovanni Da San Martino, Alex Nikolov, Wajdi Zaghouani, Preslav Nakov, and Anna Feldman. 2021. Findings of the NLP4IF- 2021 shared task on fighting the COVID-19 info- demic and censorship detection. In Proceedings of the Fourth Workshop on Natural Language Process- ing for Internet Freedom: Censorship, Disinforma- tion, and Propaganda, NLP4IF@NAACL' 21, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Devlin et al., 2018) is, nowadays, the state-of-the-art model for language understanding, outperforming previous models and opening new perspectives in the Natural Language Processing (NLP) field. Recent similar work was conducted for Arabic which is increasingly gaining attention. In our work, we used three BERT Arabic variants: AraBERT(Antoun et al., 2020), ARBERT(Abdul-Mageed et al., 2020), MARBERT (Abdul", |
|
"content": "<table><tr><td>19</td><td>1442 1</td></tr><tr><td>\u2022 After preprocessing:</td><td/></tr><tr><td>1442</td><td/></tr><tr><td>19</td><td/></tr><tr><td>3 System description</td><td/></tr><tr><td colspan=\"2\">Pretrained contextualized text representation mod-</td></tr><tr><td colspan=\"2\">els have shown to perform effectively in order to</td></tr><tr><td colspan=\"2\">make a natural language understandable by ma-</td></tr><tr><td colspan=\"2\">chines. Bidirectional Encoder Representations</td></tr><tr><td>from Transformers (BERT) (</td><td/></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>presents the results obtained over devel-</td></tr><tr><td>opment data for Fighting COVID-19 Infodemic.</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Models performances on the Dev dataset.", |
|
"content": "<table><tr><td>presents the official results per class of</td></tr><tr><td>iCompass system.</td></tr><tr><td>6 Conclusion</td></tr><tr><td>This paper describes the system built in the NLP4IF</td></tr><tr><td>2021 shared Task , along with comprehensive</td></tr><tr><td>results. Various learning techniques have been</td></tr><tr><td>investigated using five language models (Arabic</td></tr><tr><td>ALBERT, AraBERT, ARBERT, MARBERT, and</td></tr><tr><td>BERT-base-arabic) to accomplish the task of Fight-</td></tr><tr><td>ing the COVID-19 Infodemic. The results show</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Official Results on Test set and ranking as reported by the task organisers(Shaar et al., 2021).", |
|
"content": "<table><tr><td colspan=\"2\">Questions F1 Score</td></tr><tr><td>Q1</td><td>0.797</td></tr><tr><td>Q2</td><td>0.746</td></tr><tr><td>Q3</td><td>0.881</td></tr><tr><td>Q4</td><td>0.796</td></tr><tr><td>Q5</td><td>0.544</td></tr><tr><td>Q6</td><td>0.885</td></tr><tr><td>Q7</td><td>0.585</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |