{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:24.354818Z" }, "title": "PHEMEPlus: Enriching Social Media Rumour Verification with External Evidence", "authors": [ { "first": "John", "middle": [], "last": "Dougrez-Lewis", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Warwick", "location": { "country": "UK" } }, "email": "j.dougrez-lewis@warwick.ac.uk" }, { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Warwick", "location": { "country": "UK" } }, "email": "e.kochkina@qmul.ac.uk" }, { "first": "Miguel", "middle": [], "last": "Arana-Catania", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Warwick", "location": { "country": "UK" } }, "email": "miguel.arana-catania@warwick.ac.uk" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Warwick", "location": { "country": "UK" } }, "email": "m.liakata@qmul.ac.uk" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "", "affiliation": {}, "email": "yulan.he@warwick.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Work on social media rumour verification utilises signals from posts, their propagation and users involved. Other lines of work target identifying and fact-checking claims based on information from Wikipedia, or trustworthy news articles without considering social media context. However works combining the information from social media with external evidence from the wider web are lacking. To facilitate research in this direction, we release a novel dataset, PHEMEPlus 1 , an extension of the PHEME benchmark, which contains social media conversations as well as relevant external evidence for each rumour. We demonstrate the effectiveness of incorporating such evidence in improving rumour verification models. Additionally, as part of the evidence collection, we evaluate various ways of query formulation to identify the most effective method.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Work on social media rumour verification utilises signals from posts, their propagation and users involved. Other lines of work target identifying and fact-checking claims based on information from Wikipedia, or trustworthy news articles without considering social media context. However works combining the information from social media with external evidence from the wider web are lacking. To facilitate research in this direction, we release a novel dataset, PHEMEPlus 1 , an extension of the PHEME benchmark, which contains social media conversations as well as relevant external evidence for each rumour. We demonstrate the effectiveness of incorporating such evidence in improving rumour verification models. Additionally, as part of the evidence collection, we evaluate various ways of query formulation to identify the most effective method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The harm and prevalence of online misinformation made research into automated methods of information verification an important and active research area. This includes various tasks like fact-checking, social media rumour detection, stance classification and verification. 
In this work we are concerned with social media rumour verification, the task of identifying whether a rumour (i.e. a check-worthy claim circulating on social media whose veracity status is yet to be verified (Zubiaga et al., 2018)) is True, False or Unverified.", "cite_spans": [ { "start": 478, "end": 500, "text": "(Zubiaga et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although a significant amount of work has been done towards evaluating the veracity of social media rumours (Zubiaga et al., 2016; Ma et al., 2017; Song et al., 2018; Dougrez-Lewis et al., 2021), there is still a dearth of works and datasets combining the information from social media with external evidence from the wider web. While recent works focusing on rumours around the COVID-19 pandemic have been collecting data from a wide range of sources, from news and social media to scientific publications (Cui and Lee, 2020; Zhou et al., 2020), these are not sufficient for the creation of generalisable verification models as they only focus on a single topic. At the same time, works on fact-checking which do not focus on social media content but instead use claims from debunking websites (Lim et al., 2019; Ahmadi et al., 2019), as well as recent work by Li et al. (2021), have shown the benefits of utilising the stance of evidence for verification.", "cite_spans": [ { "start": 108, "end": 130, "text": "(Zubiaga et al., 2016;", "ref_id": "BIBREF39" }, { "start": 131, "end": 147, "text": "Ma et al., 2017;", "ref_id": "BIBREF20" }, { "start": 148, "end": 166, "text": "Song et al., 2018;", "ref_id": "BIBREF31" }, { "start": 167, "end": 194, "text": "Dougrez-Lewis et al., 2021)", "ref_id": "BIBREF9" }, { "start": 507, "end": 526, "text": "(Cui and Lee, 2020;", "ref_id": "BIBREF4" }, { "start": 527, "end": 545, "text": "Zhou et al., 2020;", "ref_id": "BIBREF37" }, { "start": 789, "end": 807, "text": "(Lim et al., 2019;", "ref_id": "BIBREF16" }, { "start": 808, "end": 828, "text": "Ahmadi et al., 2019)", "ref_id": "BIBREF0" }, { "start": 857, "end": 873, "text": "Li et al. (2021)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here we aim to further enable research in this direction and release an enriched version of the popular benchmark dataset PHEME (Zubiaga et al., 2016), with timely evidence for each of the rumours obtained from a wide range of web sources.", "cite_spans": [ { "start": 126, "end": 148, "text": "(Zubiaga et al., 2016)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although a few works use web search for evidence retrieval (Popat et al., 2018; Lim et al., 2019), to our knowledge, only the work of Lim et al. (2017) touches upon the topic of search query formulation.
Here we analyse several query formulation strategies to find the most effective one.", "cite_spans": [ { "start": 59, "end": 79, "text": "(Popat et al., 2018;", "ref_id": "BIBREF26" }, { "start": 80, "end": 97, "text": "Lim et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we make the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We collect and release the PHEMEPlus dataset of Twitter rumour conversations with the relevant heterogeneous evidence retrieved from the web to facilitate research on combining multiple sources of information for social media rumour verification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We investigate approaches towards search query formulation for evidence retrieval, together with evaluation metrics for the quality of evidence retrieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate the effectiveness of incorporating external evidence into rumour veracity classification models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among existing datasets for veracity classification we can broadly discern two categories: (1) focusing on claims arising from social media in the form of posts (Zubiaga et al., 2016; Ma et al., 2017) and", "cite_spans": [ { "start": 161, "end": 183, "text": "(Zubiaga et al., 2016;", "ref_id": "BIBREF39" }, { "start": 184, "end": 200, "text": "Ma et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Veracity Classification Datasets", "sec_num": "2.1" }, { "text": "(2) focusing on manually formulated claims, either created specifically for a task (Thorne et al., 2018) , or consisting of titles from news or debunking websites (Wang, 2017; Alhindi et al., 2018; Lim et al., 2019; Ahmadi et al., 2019) . These different types of claims present different challenges for verification models and evidence retrieval systems. In particular social media posts often use non-standard grammar, hashtags and have typos (intentional or otherwise). It can be crucial to process claims directly from social media to enable early-stage misinformation detection as rumours often start spreading on social media, later making it into the mainstream media. Only a few datasets incorporate both social media and evidence from the web, however these often focus on a very limited number of sources of evidence or a single topic (Dai et al., 2020; Cui and Lee, 2020) . One of such datasets is FakeNewsNet (Shu et al., 2018) incorporating fake and true news articles from fact-checking websites PolitiFact 2 and GossipCop 3 . Articles are further augmented with users' posts on Twitter pertaining to them but not including full conversation structure. FakeHealth (Dai et al., 2020 ) is a similarly constructed dataset based on healthrelated news articles labelled by the Health News Review 4 , including Twitter users' replies and profiles. Barr\u00f3n-Cedeno et al. (2020) organised shared tasks for automatic identification and verification of claims in social media. 
Apart from tasks on check-worthiness estimation for tweets and verified claim retrieval, they also released tasks for supporting evidence retrieval and claim verification. However, the tasks mainly focused on misinformation about COVID-19, and the latter tasks were only offered in Arabic.", "cite_spans": [ { "start": 83, "end": 104, "text": "(Thorne et al., 2018)", "ref_id": "BIBREF33" }, { "start": 163, "end": 175, "text": "(Wang, 2017;", "ref_id": "BIBREF35" }, { "start": 176, "end": 197, "text": "Alhindi et al., 2018;", "ref_id": "BIBREF1" }, { "start": 198, "end": 215, "text": "Lim et al., 2019;", "ref_id": "BIBREF16" }, { "start": 216, "end": 236, "text": "Ahmadi et al., 2019)", "ref_id": "BIBREF0" }, { "start": 845, "end": 863, "text": "(Dai et al., 2020;", "ref_id": "BIBREF5" }, { "start": 864, "end": 882, "text": "Cui and Lee, 2020)", "ref_id": "BIBREF4" }, { "start": 921, "end": 939, "text": "(Shu et al., 2018)", "ref_id": "BIBREF30" }, { "start": 1178, "end": 1195, "text": "(Dai et al., 2020", "ref_id": "BIBREF5" }, { "start": 1356, "end": 1383, "text": "Barr\u00f3n-Cedeno et al. (2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Veracity Classification Datasets", "sec_num": "2.1" }, { "text": "In light of the wave of misinformation associated with the COVID-19 pandemic, researchers have been collecting relevant datasets of scientific publications, news articles and their headlines, social media posts and claims about COVID-19 (Shaar et al., 2020; Dharawat et al., 2020; Zhou et al., 2020; Li et al., 2020; Memon and Carley, 2020; Hossain et al., 2020; Barr\u00f3n-Cedeno et al., 2020). One of the works most relevant to ours is CoAID (Cui and Lee, 2020), a large-scale dataset containing COVID-19 related news articles as well as social media posts. While these are rich resources which enable further research against misinformation, they are insufficient for training generalisable models as they solely focus on one topic.", "cite_spans": [ { "start": 232, "end": 252, "text": "(Shaar et al., 2020;", "ref_id": "BIBREF29" }, { "start": 253, "end": 275, "text": "Dharawat et al., 2020;", "ref_id": "BIBREF8" }, { "start": 276, "end": 294, "text": "Zhou et al., 2020;", "ref_id": "BIBREF37" }, { "start": 295, "end": 311, "text": "Li et al., 2020;", "ref_id": "BIBREF14" }, { "start": 312, "end": 335, "text": "Memon and Carley, 2020;", "ref_id": "BIBREF23" }, { "start": 336, "end": 357, "text": "Hossain et al., 2020;", "ref_id": "BIBREF10" }, { "start": 358, "end": 385, "text": "Barr\u00f3n-Cedeno et al., 2020)", "ref_id": "BIBREF29" }, { "start": 435, "end": 454, "text": "(Cui and Lee, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Veracity Classification Datasets", "sec_num": "2.1" }, { "text": "In this work we have augmented the PHEME dataset, a popular benchmark dataset for social media rumour verification, which contains rumours expressed via Twitter posts with full conversation threads from several news-breaking events on different topics. This dataset is set up to imitate realistic scenarios as (1) it was collected as the events were unfolding, and rumour stories were then identified and annotated by a professional journalist, as opposed to collecting tweets based on existing fact-checks as in Ma et al. (2017); and (2) the evaluation is performed on events unseen during training. We augment it with evidence articles from across the web to give it access to an unlimited set of resources.
To preserve the realistic scenario of verifying emerging rumours, all of our evidence is restricted to articles indexed by Google no later than the day on which the rumour was posted to Twitter.", "cite_spans": [ { "start": 507, "end": 523, "text": "Ma et al. (2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Veracity Classification Datasets", "sec_num": "2.1" }, { "text": "Social media rumour verification models use various types of information available on the social media platform: the text of rumourous posts and responses (Dougrez-Lewis et al., 2021), user information and connections (Khoo et al., 2020), and propagation patterns (Ma et al., 2018). However, still only a few works incorporate external evidence.", "cite_spans": [ { "start": 252, "end": 269, "text": "(Ma et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Social Media Rumour Verification Models Using External Information", "sec_num": "2.2" }, { "text": "Lim et al. (2017) proposed the iFACT framework, which extracts claims from tweets pertaining to major events. For each claim, it collects evidence from web search and estimates the likelihood of the claim being credible. To formulate the search query, iFACT uses ClausIE (Del Corro and Gemulla, 2013) to extract (subject, predicate, object) triples from tweets. To determine the credibility of the claim, iFACT uses features extracted from search results and dependencies between claims. Here we also experiment with using ClausIE to formulate the search query. Li et al. (2021) propose to improve rumour detection on the PHEME dataset by using evidence from Wikipedia. They first train the evidence extraction module on the FEVER dataset and then use it as part of a rumour detection system to get relevant sentences from a Wikipedia dump along with the Twitter conversation around a rumour. While limited to a single source of information, they demonstrate performance improvements over previous models not using external information.", "cite_spans": [ { "start": 553, "end": 569, "text": "Li et al. (2021)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Social Media Rumour Verification Models Using External Information", "sec_num": "2.2" }, { "text": "In this work we use BERT-based models as strong baselines to demonstrate the effectiveness of incorporating the evidence for social media rumour verification. In future work we will be experimenting with various ways of incorporating it to maximise the benefits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Social Media Rumour Verification Models Using External Information", "sec_num": "2.2" }, { "text": "3 Augmenting PHEME dataset with External Evidence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Social Media Rumour Verification Models Using External Information", "sec_num": "2.2" }, { "text": "We chose to extend the PHEME-5 dataset (Zubiaga et al., 2016), which consists of Twitter conversations discussing rumours around five real-world events, including the Lindt Cafe siege in Sydney and the 2015 Charlie Hebdo terrorist attack. This dataset is a popular benchmark for rumour verification; it is particularly challenging due to class imbalance and evaluation using leave-one-event-out cross-validation, reflecting a real-world evaluation scenario. Table 1 shows the statistics of the PHEMEPlus dataset, obtained by extending the original PHEME-5 dataset with retrieved relevant articles.
The first four columns show the number of conversation threads for each of the events and each of the classes in the original PHEME-5 dataset. Figure 1 shows an example entry in the PHEMEPlus dataset, comprising a rumourous tweet, its veracity label, its conversation thread, and relevant evidence retrieved from the web. It is notable that tweets in the conversation thread (and the rumour itself) often contain URLs provided by users, which may be useful as a further source of evidence, and that the corresponding evidence is not a part of the original PHEME dataset. Kochkina (2019) has shown that True rumours in PHEME have a higher percentage of URLs attached (55%) than False (48%) and Unverified (48%) rumours. For the portion of PHEME with comments annotated for stance, these supplementary URLs were overwhelmingly found in comments supporting the source tweet's claim (33%) as opposed to those denying (8%), querying (6%), or commenting (9%) on it.", "cite_spans": [ { "start": 39, "end": 61, "text": "(Zubiaga et al., 2016)", "ref_id": "BIBREF39" }, { "start": 1151, "end": 1166, "text": "Kochkina (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 457, "end": 464, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 727, "end": 733, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Base dataset", "sec_num": "3.1" }, { "text": "In order to obtain evidence from an unlimited number of sources we chose to use web search for evidence retrieval. We choose Google Search as it is one of the most established search engines and, importantly, it allows us to filter results by date. This is crucial as rumours are often resolved and widely debunked some time after their originating event and the rumourous post, but this information would not be available to the model in a real-time evaluation scenario. Furthermore, the evidence we retrieve from Google appears robustly reputable, with popular news sources consistently ranking highly in the search results. This is to be expected, since Google's PageRank system heavily weights websites which are highly cited/referenced by others. Web search results are also more likely to be up-to-date than any corresponding Wikipedia pages regarding a current real-world event, which may not be updated or appropriately checked for correctness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Retrieval through Web search", "sec_num": "3.2" }, { "text": "For every search we include the term (before: date) at the start of the query to restrict results to articles from before the date the rumourous tweet was posted. For each query we collect the top 5 non-empty results from the web search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Retrieval through Web search", "sec_num": "3.2" }, { "text": "While Google search is able to process various types of queries, from keywords to natural language utterances, we performed a set of experiments to identify the most suitable method of query formulation for our particular task of evidence retrieval for rumours conveyed in Twitter posts. We experiment with queries formulated as (1) a natural language sentence, (2) keywords, and (3) (subject, predicate, object) triples. For each experiment, we include around 99% of the PHEME dataset, since a few queries did not yield enough non-empty results. Although we are aware of more advanced studies into query expansion and formulation (Tamannaee et al., 2020; Scells et al., 2020), contributing to these fields is beyond the scope of this paper.", "cite_spans": [ { "start": 633, "end": 657, "text": "(Tamannaee et al., 2020;", "ref_id": "BIBREF32" }, { "start": 658, "end": 678, "text": "Scells et al., 2020)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Evidence Retrieval through Web search", "sec_num": "3.2" },
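To make the retrieval step above concrete, here is a minimal sketch of the date-restricted collection loop. The paper does not name a search client, so `search_fn` is a placeholder and the dict-shaped results are an assumption; only the `before:` operator and the top-5 non-empty filtering come from the text.

```python
from datetime import date

def build_query(tweet_query: str, posted: date) -> str:
    """Prefix the query with Google's before: operator so that only
    pages from before the rumour's posting date are returned."""
    return f"before:{posted.isoformat()} {tweet_query}"

def top_nonempty_results(query: str, search_fn, k: int = 5):
    """Collect the first k results whose scraped body text is non-empty.
    `search_fn` stands in for whichever search client is used."""
    hits = []
    for result in search_fn(query):          # iterate ranked results
        if result.get("body", "").strip():   # skip empty pages
            hits.append(result)
        if len(hits) == k:
            break
    return hits
```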
{ "text": "Here we aim to demonstrate gains from the relatively simple approaches to evidence retrieval described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Retrieval through Web search", "sec_num": "3.2" }, { "text": "We experiment with the following search strategies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Strategies", "sec_num": "3.2.1" }, { "text": "Preprocessed The search query is the source rumour, obtained from the preprocessed tweet. Our preprocessing entails removing URLs, replacing user mentions with \"user\" (so as to retain lexical structure), removing hashtags from the end but not the middle (also for lexical structure), and segmenting any compound hashtags. URLs are saved aside since they may have future use as evidence. The hashtags removed from the end of the tweet are not discarded: they are placed in brackets for an \"OR\" search with the rest of the query. These hashtags in particular are expected to be highly telling of the topic/theme of the tweet, especially when it is otherwise lacking in contextual words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Strategies", "sec_num": "3.2.1" }, { "text": "We use Stanza (Qi et al., 2020) to parse preprocessed tweets. Having obtained a parse tree, words in the following constructs are retained in-place: {obl:npmod, compound, advcl, nummod, acl:relcl, nsubj:pass, acl, amod, aux:pass}. This combination of constructs was iteratively fine-tuned until the resultant queries felt similar to the authors' own search style, the idea being to replicate the search strategy of an experienced user. Hashtags at the end of tweets are handled as before.", "cite_spans": [ { "start": 14, "end": 31, "text": "(Qi et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Shortening with StanfordNLP", "sec_num": null }, { "text": "We use ClausIE (Del Corro and Gemulla, 2013), a popular subject-relation-object extraction system, in the same manner to find (subject, predicate, object) triples. These are kept in-place whilst the other words are removed. Hashtags at the end of tweets are retained as before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shortening with ClausIE", "sec_num": null }, { "text": "Examples of the search queries formed can be found in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Shortening with ClausIE", "sec_num": null }, { "text": "Table 2 caption: Examples of search queries generated by the various search strategies, given the original rumour. In this case, the ClausIE strategy only removes the words \"MORE\" and \"and\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shortening with ClausIE", "sec_num": null },
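As an illustration of the Preprocessed strategy, here is a minimal sketch of the cleaning steps described above. The regexes and the `segment_hashtag` helper are assumptions for illustration, not the authors' released code.

```python
import re

def segment_hashtag(tag: str) -> str:
    """Hypothetical helper: split CamelCase hashtags into words,
    e.g. 'SydneySiege' -> 'Sydney Siege'."""
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", tag)

def preprocess_tweet(text: str):
    """Turn a raw rumourous tweet into a search query: strip URLs,
    neutralise mentions, segment mid-tweet hashtags in place, and
    peel trailing hashtags off for an OR search."""
    urls = re.findall(r"https?://\S+", text)   # saved aside as possible evidence
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"@\w+", "user", text)       # keep lexical structure

    # Trailing hashtags become bracketed OR terms.
    tokens = text.split()
    trailing = []
    while tokens and tokens[-1].startswith("#"):
        trailing.append(tokens.pop().lstrip("#"))
    query = " ".join(tokens)

    # Mid-tweet compound hashtags are segmented in place.
    query = re.sub(r"#(\w+)", lambda m: segment_hashtag(m.group(1)), query)

    if trailing:
        query += " (" + " OR ".join(trailing) + ")"
    return query, urls
```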
{ "text": "We devise evaluation metrics to compare the quality of evidence retrieved using different query types, without the need for a rumour verification model in advance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": "3.2.2" }, { "text": "URL Words Metric URLs frequently contain English words which are representative of the content of their webpage, and which we can treat as gold-standard keywords as in (Ma et al., 2016). To get a goodness score in the range [0,1] we compute the cosine similarity between the words in URLs of retrieved articles and those posted in response to the rumour. Specifically, for each retrieved article, its URL-words are compared with those of each URL in the Twitter comments. The final score is the average of all such cosine similarities across all retrieved articles in the dataset, with words encoded by Word2Vec (Mikolov et al., 2013).", "cite_spans": [ { "start": 122, "end": 139, "text": "(Ma et al., 2016)", "ref_id": "BIBREF19" }, { "start": 268, "end": 290, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": null },
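A minimal sketch of the URL Words metric as we read it: tokenise URL paths into words, embed with Word2Vec, and take cosine similarity. Averaging the word vectors before comparison, and the gensim-style `wv` lookup, are assumptions; the paper only specifies Word2Vec encoding and cosine similarity.

```python
import re
import numpy as np

def url_words(url: str):
    """Split a URL path into candidate keywords, e.g.
    'bbc.com/news/sydney-siege-live' -> ['news', 'sydney', 'siege', 'live']."""
    path = re.sub(r"https?://[^/]+", "", url)          # drop scheme and domain
    return [w.lower() for w in re.split(r"[^A-Za-z]+", path) if w]

def avg_vector(words, wv):
    """Mean embedding of the recognised words; unknown words are skipped."""
    vecs = [wv[w] for w in words if w in wv]
    return np.mean(vecs, axis=0) if vecs else None

def url_similarity(article_url: str, comment_url: str, wv) -> float:
    """Cosine similarity between the URL-words of a retrieved article and a
    URL from the Twitter comments; `wv` is a word-vector lookup such as
    gensim's KeyedVectors (an assumed interface)."""
    a = avg_vector(url_words(article_url), wv)
    b = avg_vector(url_words(comment_url), wv)
    if a is None or b is None:
        return 0.0
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```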
{ "text": "GloVe Metric If an article is relevant to a rumour, the two will be similar in content. We use GloVe (Pennington et al., 2014) to calculate the similarity between the first 3 paragraphs of an article and the source rumour, with the title also counting as a paragraph. We use only the first few paragraphs because they seem likely to contain the highest density of relevant information. Cosine similarity scores are calculated between each of these paragraphs and the source rumour, and are averaged to give the article a similarity score. Unknown words with zero vectors are ignored for this purpose, although there is a weakness that some of the most important event-specific words could be unknown.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": null }, { "text": "BERTScore Metric This is calculated similarly to the GloVe metric, except that BERTScore is used in its place. Table 3 displays the performance of our search strategies when evaluated via the URL Words, GloVe, and BERTScore evaluation metrics. These results suggest that searching for the preprocessed tweet may be the best way to get relevant background information from the web, as opposed to extracting keywords from the tweet. This narrowly surpasses the performance of our ClausIE-based search strategy, which in turn outperforms the StanfordNLP approach. The ClausIE strategy may retain a higher proportion of key grammatical constructs than the latter, and such constructs appear to play an unexpectedly important role in Google's search algorithm. This is contrary to the authors' searching intuition, perhaps due to Google's recent integration of models such as BERT (Devlin et al., 2018). Although some of the values in Table 3 appear close together, it is notable that the results of the different query formulations land in the same order irrespective of the scoring metric used. Furthermore, the score differences between the different query formulations become more substantial when taking into account their weak upper and lower bounds, derived from using artificially generated 'target article' and 'random' queries (data not shown).", "cite_spans": [ { "start": 837, "end": 858, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 111, "end": 118, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 892, "end": 899, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation metrics", "sec_num": null }, { "text": "An example entry of the PHEMEPlus dataset can be found in Figure 1. The number of articles we retrieved using the Preprocessed method can be found in Table 1. All but two of the rumours have at least one associated evidence article, up to a maximum of 10.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 151, "end": 158, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "PHEMEPlus dataset", "sec_num": "3.3" }, { "text": "We explore the overlap between the evidence in our resultant PHEMEPlus dataset and the URLs in the Twitter comments responding to the rumours. Table 4 shows the overlap between the articles retrieved from web search (using the Preprocessed Only strategy) and those from the Twitter comments. We observe little overlap between articles retrieved from web search and articles retrieved from comments responding to rumours. The latter may thus be a substantially different, potentially less useful, source of evidence due to a high density of social media pages and the likelihood that some of the comments may not be directly responding to the source rumour.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "PHEMEPlus dataset", "sec_num": "3.3" }, { "text": "A relatively large proportion of the articles retrieved from responses are deemed \"empty\", meaning they have no body-text and/or no title. From this, and manual inspection, we infer that response-URLs are more likely to be social media posts or videos, which are prone to missing titles or first paragraphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PHEMEPlus dataset", "sec_num": "3.3" }, { "text": "The overall:unique ratio being similar for both thread and web suggests that the Google results are indeed sensitive to the content of each thread, as opposed to repeatedly giving the same results for a given rumourous event. There is not much overlap between the search results and the Twitter thread, and a large proportion of the existing overlap might be explainable by news websites tweeting their news URLs. This is not attributable to overly stringent overlap criteria, as the discrepancy between the overall number of articles and the number of articles without duplicates acts as a positive control to this end.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PHEMEPlus dataset", "sec_num": "3.3" }, { "text": "Similar links nearly always result from the same thread, possibly due to the aforementioned news companies. Investigating further, the vast majority (if not all) of the overlap was news articles. It is plausible that most of this overlap came from news websites tweeting their stories, as there are some examples of this in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PHEMEPlus dataset", "sec_num": "3.3" },
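For reproducibility of the overlap analysis, one plausible way to compare the two URL sets is sketched below. The normalisation rules are assumptions; the paper does not state its exact matching criteria.

```python
from urllib.parse import urlparse

def normalise(url: str) -> str:
    """Reduce a URL to host + path so that scheme differences and query
    strings do not mask genuine matches."""
    p = urlparse(url)
    host = p.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return host + p.path.rstrip("/")

def overlap(web_urls, thread_urls):
    """Unique normalised URLs shared by web-search results and the thread."""
    return {normalise(u) for u in web_urls} & {normalise(u) for u in thread_urls}
```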
{ "text": "We conduct experiments to evaluate the effectiveness of our retrieved evidence for Twitter rumour veracity classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Effectiveness of Evidence for Rumour Verification", "sec_num": "4" }, { "text": "In our PHEMEPlus dataset, each source tweet is paired with up to the 10 most relevant retrieved articles. We follow the typical pipeline fact-checking approach to further select the 5 most relevant sentences from the articles associated with each source tweet. In order to do this, we use a simple novel approach based on ClausIE (Del Corro and Gemulla, 2013). The idea is to be able to reliably find relevant sentences whilst not being derailed by the inevitably rare rumour-specific vocabulary, which may not be recognised by many approaches. First, we use ClausIE to extract all relevant subject-predicate-object triples from the retrieved information. We assume these to be the words with the most potential for true relevance to the tweet. Any stop-words contained within are filtered out. For each sentence, a score is assigned based on how many of these important words are also contained in the tweet, penalising both overly long (>20 token) and short (<5 token) sentences, as these are likely to be either verbose or uninformative and to work poorly with the BERT models. In particular, short sentences are ignored, whereas long sentences lose 2% of their score for each additional word. Only rumours with enough evidence to extract 5 sentences as above are used (99% of them) in our experiments. The top 5 such sentences are paired with each source tweet and are fed into a rumour classification model for veracity assessment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Sentence Retrieval", "sec_num": "4.1" },
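A minimal sketch of the sentence scorer just described, under the assumption that triple words and tweet words have already been tokenised, lowercased, and stop-word filtered:

```python
def score_sentence(sentence_tokens, triple_words, tweet_words):
    """Score a candidate evidence sentence by counting ClausIE triple words
    that occur both in the sentence and in the tweet, applying the length
    rules above: <5 tokens ignored, >20 tokens decayed by 2% per extra word."""
    n = len(sentence_tokens)
    if n < 5:                        # short sentences are ignored
        return 0.0
    overlap = sum(1 for w in triple_words
                  if w in tweet_words and w in sentence_tokens)
    score = float(overlap)
    if n > 20:                       # lose 2% of the score per word beyond 20
        score *= max(0.0, 1.0 - 0.02 * (n - 20))
    return score
```

The five highest-scoring sentences per rumour would then be kept as evidence.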
{ "text": "We compare the performance of several veracity classification models in three input scenarios: (1) rumour (i.e., source tweet) alone, (2) evidence (i.e., extracted sentences) alone, and (3) rumour concatenated with the evidence (extracted sentences). The classification models chosen include pre-trained language models such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), and a model making use of natural language inference results between a source rumour and its related evidence sentence.", "cite_spans": [ { "start": 332, "end": 353, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 366, "end": 384, "text": "(Liu et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Veracity Classification Models", "sec_num": "4.2" }, { "text": "BERT-based approaches We train BERT-based models, including BERT and RoBERTa, followed by a single softmax layer for rumour verification. Final predictions were determined by majority voting. These particular models are chosen because flavours of BERT have previously achieved state-of-the-art results in many natural language processing tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Veracity Classification Models", "sec_num": "4.2" }, { "text": "Self-Attention Network based on Natural Language Inference (NLI-SAN) This method uses not only the representation of the rumour and evidence, like the previous methods, but also the Natural Language Inference (NLI) relationship between them. First, each rumour is paired with each of the evidence sentences and is fed into the RoBERTa-large-MNLI model (available from https://huggingface.co/) to generate the NLI relation triplet representing the contradiction, neutrality, and entailment probabilities. The rumour-sentence pair is also fed into the RoBERTa-large model to generate the contextual representation. Both outputs are then combined using a self-attention network in which the NLI relation triplet is used as the query, while the contextual representation is used as the key and value. Afterwards, all the outputs are concatenated into a single output that is passed through a Multi-Layer Perceptron (MLP) and a Softmax layer that generates the final veracity classification value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Veracity Classification Models", "sec_num": "4.2" }, { "text": "Since this approach relies on the inference relationship between rumour and evidence, we only compare it with the other models when both elements are available, and thus only one result is shown in Table 5.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Veracity Classification Models", "sec_num": "4.2" }, { "text": "Experiments were performed using 5-fold leave-one-event-out cross-validation, with each of PHEME's rumourous events being a fold, as is customary for this dataset (see Section 3.1). We will release the code used to collect the evidence and to perform experiments on GitHub.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" }, { "text": "For the training of the aforementioned models, the inputs are padded and truncated to the longest sequence. Cross-entropy is used as the loss function. The optimizer used is AdamW (Loshchilov and Hutter, 2019) with \u03b21 = 0.9, \u03b22 = 0.999, and a weight decay of 0.01. For the BERT-based models, the batch size is 20, the learning rate is 3\u00d710\u22125, and the training is performed for 25 epochs. For NLI-SAN, the size of the hidden layer is 50, the batch size is 30, the learning rate is 10\u22124, and the training is performed for 200 epochs. Table 5 presents the results of our experiments in terms of macro-averaged F1-score. Macro F1 score is a suitable metric to evaluate performance on this dataset due to class and fold size imbalance.", "cite_spans": [ { "start": 180, "end": 209, "text": "(Loshchilov and Hutter, 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 538, "end": 545, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" },
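The following is a schematic PyTorch sketch of the NLI-SAN combination step as we read the description above. The stated hidden size of 50 is from the text; the encoder dimension, the per-pair gating in place of full attention, and all other details are simplifying assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class NLISAN(nn.Module):
    """Schematic NLI-SAN: per rumour-sentence pair, the 3-dim NLI triplet
    (query) attends over the RoBERTa pair encoding (key/value); the
    per-pair outputs are concatenated and classified by an MLP."""
    def __init__(self, ctx_dim=1024, hidden=50, n_sent=5, n_classes=3):
        super().__init__()
        self.q = nn.Linear(3, hidden)        # from (contradiction, neutral, entailment)
        self.k = nn.Linear(ctx_dim, hidden)  # from RoBERTa-large pair encoding
        self.v = nn.Linear(ctx_dim, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(hidden * n_sent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, nli_probs, ctx):
        # nli_probs: (batch, n_sent, 3); ctx: (batch, n_sent, ctx_dim)
        q, k, v = self.q(nli_probs), self.k(ctx), self.v(ctx)
        att = torch.sigmoid((q * k).sum(-1, keepdim=True) / k.size(-1) ** 0.5)
        fused = att * v                      # attention-weighted pair outputs
        return self.mlp(fused.flatten(1))    # concatenate, then veracity logits
```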
{ "text": "In these experiments it is not our goal to outperform state-of-the-art results on the PHEME dataset, but to demonstrate the effectiveness of incorporating the evidence for social media rumour verification. State-of-the-art results are obtained by more complex architectures, in which incorporating the evidence and evaluating its effects is a more challenging task. For instance, the VRoC model (Cheng et al., 2020) currently yields a state-of-the-art F1 score of 0.484 on this task; it uses a Variational Autoencoder for the representation of the rumour, as well as a multitask learning setup incorporating four tasks.", "cite_spans": [ { "start": 395, "end": 415, "text": "(Cheng et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.4" }, { "text": "The results in Table 5 suggest that there is indeed a benefit to using the evidence which we have retrieved for rumour veracity classification. This joint approach outperforms the other two, and the use of the rumour alone generally outperforms the use of evidence alone, fitting with the idea that veracity can be classified to some extent by the writing style of the rumour alone.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.4" }, { "text": "In addition to the improvement in the results obtained by having evidence relevant to each rumour, our work opens the door to the use of more complex veracity classification models that consider additional attributes between both elements. The results obtained in the case of the NLI-SAN model show how this approach can be useful, obtaining better results than the BERT model, although in this case inferior to the simpler use of RoBERTa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.4" }, { "text": "A more detailed per-class and per-fold results breakdown for all of the models can be found in Table 5. For both BERT and RoBERTa, the combination of rumour together with evidence seems particularly useful for correct classification of the False class, with a mild gain also noted for Unverified. This could be the result of models inferring that there is disagreement between False rumours and their evidence, which would not be possible without the presence of both sources. It is noteworthy that existing rumour veracity classification models using the PHEME dataset have often found the False and Unverified classes to be problematic (Dougrez-Lewis et al., 2021). The True class also benefits from incorporating evidence in the RoBERTa model compared to using the rumour only. The results breakdown for the NLI-SAN model can also be found in Table 5, for which a similar pattern of per-class results can be observed. Most of the per-fold results for both BERT and RoBERTa also show the best performance when using a combination of rumour and evidence, with the only exceptions being the Germanwings Crash event (dominated by the False class) for BERT and the Ottawa Shooting event (dominated by the True class) for RoBERTa.", "cite_spans": [ { "start": 641, "end": 669, "text": "(Dougrez-Lewis et al., 2021)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 97, "end": 104, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 839, "end": 846, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.4" }, { "text": "After experimentation with various searching strategies for retrieving evidence from the web, we have constructed the PHEMEPlus dataset, which will facilitate further work on using evidence from a wide range of sources for rumour veracity classification.
The best such strategies, according to our evaluation metrics, are those which leave the grammatical structure of the claim relatively intact. There is much potential to improve existing rumour veracity classification systems by augmenting them with evidence, with a broader range of evidence, or with evidence of better quality. We plan to build upon these findings in the future, working on identifying ways of incorporating the evidence from heterogeneous sources into more complex rumour verification models to maximise the gains from this information and achieve state-of-the-art results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "1 https://github.com/JohnNLP/PhemePlus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "2 https://www.politifact.com/ 3 https://www.suggest.com/ 4 https://www.healthnewsreview.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by an EPSRC grant (EP/V048597/1). JDL was funded by the EPSRC Doctoral Training Grant. ML and YH are supported by Turing AI Fellowships (EP/V030302/1, EP/V020579/1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Explainable fact checking with probabilistic answer set programming", "authors": [ { "first": "Naser", "middle": [], "last": "Ahmadi", "suffix": "" }, { "first": "Joohyung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Papotti", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Saeed", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naser Ahmadi, Joohyung Lee, Paolo Papotti, and Mohammed Saeed. 2019. Explainable fact checking with probabilistic answer set programming.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Where is your evidence: improving fact-checking by justification modeling", "authors": [ { "first": "Tariq", "middle": [], "last": "Alhindi", "suffix": "" }, { "first": "Savvas", "middle": [], "last": "Petridis", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the first workshop on fact extraction and verification (FEVER)", "volume": "", "issue": "", "pages": "85--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: improving fact-checking by justification modeling. In Proceedings of the first workshop on fact extraction and verification (FEVER), pages 85-90.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Checkthat!
at clef 2020: Enabling the automatic identification and verification of claims in social media", "authors": [ { "first": "Alberto", "middle": [], "last": "Barr\u00f3n-Cedeno", "suffix": "" }, { "first": "Tamer", "middle": [], "last": "Elsayed", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Da San", "suffix": "" }, { "first": "Maram", "middle": [], "last": "Martino", "suffix": "" }, { "first": "", "middle": [], "last": "Hasanain", "suffix": "" } ], "year": null, "venue": "European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "499--507", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alberto Barr\u00f3n-Cedeno, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, and Fatima Haouari. 2020. Checkthat! at clef 2020: Enabling the automatic identification and verification of claims in social media. In European Conference on Information Retrieval, pages 499-507. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "VRoC: Variational Autoencoder-Aided Multi-Task Rumor Classifier Based on Text", "authors": [ { "first": "Mingxi", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Shahin", "middle": [], "last": "Nazarian", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Bogdan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "2892--2898", "other_ids": { "DOI": [ "10.1145/3366423.3380054" ] }, "num": null, "urls": [], "raw_text": "Mingxi Cheng, Shahin Nazarian, and Paul Bog- dan. 2020. VRoC: Variational Autoencoder-Aided Multi-Task Rumor Classifier Based on Text, page 2892-2898. Association for Computing Machinery, New York, NY, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Coaid: Covid-19 healthcare misinformation dataset", "authors": [ { "first": "Limeng", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Dongwon", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.00885" ] }, "num": null, "urls": [], "raw_text": "Limeng Cui and Dongwon Lee. 2020. Coaid: Covid-19 healthcare misinformation dataset. arXiv preprint arXiv:2006.00885.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ginger cannot cure cancer: Battling fake health news with a comprehensive data repository", "authors": [ { "first": "Enyan", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiwei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Suhang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International AAAI Conference on Web and Social Media", "volume": "14", "issue": "", "pages": "853--862", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enyan Dai, Yiwei Sun, and Suhang Wang. 2020. Ginger cannot cure cancer: Battling fake health news with a comprehensive data repository. 
In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 853-862.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Clausie: Clause-based open information extraction", "authors": [ { "first": "Luciano", "middle": [], "last": "Del Corro", "suffix": "" }, { "first": "Rainer", "middle": [], "last": "Gemulla", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 22nd International Conference on World Wide Web, WWW '13", "volume": "", "issue": "", "pages": "355--366", "other_ids": { "DOI": [ "10.1145/2488388.2488420" ] }, "num": null, "urls": [], "raw_text": "Luciano Del Corro and Rainer Gemulla. 2013. Clausie: Clause-based open information extraction. In Pro- ceedings of the 22nd International Conference on World Wide Web, WWW '13, page 355-366, New York, NY, USA. Association for Computing Machin- ery.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Drink bleach or do what now? covid-hera: A dataset for risk-informed health decision making in the presence of covid19 misinformation", "authors": [ { "first": "Arkin", "middle": [], "last": "Dharawat", "suffix": "" }, { "first": "Ismini", "middle": [], "last": "Lourentzou", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Morales", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.08743" ] }, "num": null, "urls": [], "raw_text": "Arkin Dharawat, Ismini Lourentzou, Alex Morales, and ChengXiang Zhai. 2020. Drink bleach or do what now? covid-hera: A dataset for risk-informed health decision making in the presence of covid19 misinfor- mation. arXiv preprint arXiv:2010.08743.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning disentangled latent topics for twitter rumour veracity classification", "authors": [ { "first": "John", "middle": [], "last": "Dougrez-Lewis", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "3902--3908", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Dougrez-Lewis, Maria Liakata, Elena Kochkina, and Yulan He. 2021. Learning disentangled latent topics for twitter rumour veracity classification. 
In Findings of the Association for Computational Lin- guistics: ACL-IJCNLP 2021, pages 3902-3908.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Covidlies: Detecting covid-19 misinformation on social media", "authors": [ { "first": "Tamanna", "middle": [], "last": "Hossain", "suffix": "" }, { "first": "I", "middle": [ "V" ], "last": "Robert L Logan", "suffix": "" }, { "first": "Arjuna", "middle": [], "last": "Ugarte", "suffix": "" }, { "first": "Yoshitomo", "middle": [], "last": "Matsubara", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Young", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Workshop on NLP for COVID-19", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tamanna Hossain, Robert L Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. Covidlies: Detecting covid-19 misin- formation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Interpretable rumor detection in microblogs by attending to user interactions", "authors": [ { "first": "Serena", "middle": [], "last": "Ling Min", "suffix": "" }, { "first": "Hai", "middle": [ "Leong" ], "last": "Khoo", "suffix": "" }, { "first": "Zhong", "middle": [], "last": "Chieu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Qian", "suffix": "" }, { "first": "", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "8783--8790", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, and Jing Jiang. 2020. Interpretable rumor detection in microblogs by attending to user interactions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8783-8790.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Rumour stance and veracity classification in social media conversations", "authors": [ { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Kochkina. 2019. Rumour stance and veracity classification in social media conversations,\".", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Meet the truth: Leverage objective facts and subjective views for interpretable rumor detection", "authors": [ { "first": "Jiawen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shiwen", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "705--715", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiawen Li, Shiwen Ni, and Hung-Yu Kao. 2021. Meet the truth: Leverage objective facts and subjective views for interpretable rumor detection. 
In Find- ings of the Association for Computational Linguis- tics: ACL-IJCNLP 2021, pages 705-715.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Mm-covid: A multilingual and multimodal data repository for combating covid-19 disinformation", "authors": [ { "first": "Yichuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bohan", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2011.04088" ] }, "num": null, "urls": [], "raw_text": "Yichuan Li, Bohan Jiang, Kai Shu, and Huan Liu. 2020. Mm-covid: A multilingual and multimodal data repository for combating covid-19 disinforma- tion. arXiv preprint arXiv:2011.04088.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Ifact: An interactive framework to assess claims from tweets", "authors": [ { "first": "Yong", "middle": [], "last": "Wee", "suffix": "" }, { "first": "Mong", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Wynne", "middle": [], "last": "Li Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17", "volume": "", "issue": "", "pages": "787--796", "other_ids": { "DOI": [ "10.1145/3132847.3132995" ] }, "num": null, "urls": [], "raw_text": "Wee Yong Lim, Mong Li Lee, and Wynne Hsu. 2017. Ifact: An interactive framework to assess claims from tweets. In Proceedings of the 2017 ACM on Confer- ence on Information and Knowledge Management, CIKM '17, page 787-796, New York, NY, USA. As- sociation for Computing Machinery.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "End-to-end time-sensitive fact check", "authors": [ { "first": "Yong", "middle": [], "last": "Wee", "suffix": "" }, { "first": "Mong", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Wynne", "middle": [], "last": "Li Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2019, "venue": "ACM SI-GIR Workshop on Reducing Online Misinformation Exposure (ROME)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wee Yong Lim, Mong Li Lee, and Wynne Hsu. 2019. End-to-end time-sensitive fact check,\". In ACM SI- GIR Workshop on Reducing Online Misinformation Exposure (ROME).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. 
In International Confer- ence on Learning Representations.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Detecting rumors from microblogs with recurrent neural networks", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Prasenjit", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Sejeong", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Bernard", "middle": [ "J" ], "last": "Jansen", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Meeyoung", "middle": [], "last": "Cha", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16", "volume": "", "issue": "", "pages": "3818--3824", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J. Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artifi- cial Intelligence, IJCAI'16, page 3818-3824. AAAI Press.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Detect rumors in microblog posts using propagation structure via kernel learning", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-1066" ] }, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, and Kam-Fai Wong. 2017. Detect rumors in microblog posts using propagation struc- ture via kernel learning. In Proceedings of the 55th", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "708--717", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 708-717, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Rumor detection on twitter with tree-structured recursive neural networks", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1980--1989", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, and Kam-Fai Wong. 2018. Rumor detection on twitter with tree-structured recursive neural networks. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1980-1989.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Characterizing covid-19 misinformation communities using a novel twitter dataset", "authors": [ { "first": "Shahan", "middle": [ "Ali" ], "last": "Memon", "suffix": "" }, { "first": "Kathleen", "middle": [ "M" ], "last": "Carley", "suffix": "" } ], "year": 2020, "venue": "CEUR Workshop Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shahan Ali Memon and Kathleen M Carley. 2020. Characterizing covid-19 misinformation communities using a novel twitter dataset. In CEUR Workshop Proceedings, volume 2699.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Credeye: A credibility lens for analyzing and explaining misinformation", "authors": [ { "first": "Kashyap", "middle": [], "last": "Popat", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Jannik", "middle": [], "last": "Str\u00f6tgen", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2018, "venue": "Companion Proceedings of the The Web Conference", "volume": "18", "issue": "", "pages": "155--158", "other_ids": { "DOI": [ "10.1145/3184558.3186967" ] }, "num": null, "urls": [], "raw_text": "Kashyap Popat, Subhabrata Mukherjee, Jannik Str\u00f6tgen, and Gerhard Weikum. 2018.
Credeye: A credibility lens for analyzing and explaining misinformation. In Companion Proceedings of the The Web Conference 2018, WWW '18, page 155-158, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Stanza: A Python natural language processing toolkit for many human languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Automatic boolean query formulation for systematic review literature search", "authors": [ { "first": "Harrisen", "middle": [], "last": "Scells", "suffix": "" }, { "first": "Guido", "middle": [], "last": "Zuccon", "suffix": "" }, { "first": "Bevan", "middle": [], "last": "Koopman", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The Web Conference 2020", "volume": "", "issue": "", "pages": "1071--1081", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harrisen Scells, Guido Zuccon, Bevan Koopman, and Justin Clark. 2020. Automatic boolean query formulation for systematic review literature search. In Proceedings of The Web Conference 2020, pages 1071-1081.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Overview of checkthat! 2020 english: Automatic identification and verification of claims in social media", "authors": [ { "first": "Shaden", "middle": [], "last": "Shaar", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Nikolov", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Babulkov", "suffix": "" }, { "first": "Firoj", "middle": [], "last": "Alam", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Barr\u00f3n-Cedeno", "suffix": "" }, { "first": "Tamer", "middle": [], "last": "Elsayed", "suffix": "" }, { "first": "Maram", "middle": [], "last": "Hasanain", "suffix": "" }, { "first": "Reem", "middle": [], "last": "Suwaileh", "suffix": "" }, { "first": "Fatima", "middle": [], "last": "Haouari", "suffix": "" }, { "first": "Giovanni", "middle": [ "Da San" ], "last": "Martino", "suffix": "" } ], "year": 2020, "venue": "CLEF (Working Notes)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaden Shaar, Alex Nikolov, Nikolay Babulkov, Firoj Alam, Alberto Barr\u00f3n-Cedeno, Tamer Elsayed, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Giovanni Da San Martino, et al. 2020. Overview of checkthat! 2020 english: Automatic identification and verification of claims in social media.
In CLEF (Working Notes).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Fakenewsnet: A data repository with news content, social context and spatialtemporal information for studying fake news on social media", "authors": [ { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Mahudeswaran", "suffix": "" }, { "first": "Suhang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dongwon", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.01286" ] }, "num": null, "urls": [], "raw_text": "Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, and Huan Liu. 2018. Fakenewsnet: A data repository with news content, social context and spatialtemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Ced: Credible early detection of social media rumors", "authors": [ { "first": "Changhe", "middle": [], "last": "Song", "suffix": "" }, { "first": "Cunchao", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changhe Song, Cunchao Tu, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2018. Ced: Credible early detection of social media rumors.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Reque: A configurable workflow and dataset collection for query refinement", "authors": [ { "first": "Mahtab", "middle": [], "last": "Tamannaee", "suffix": "" }, { "first": "Hossein", "middle": [], "last": "Fani", "suffix": "" }, { "first": "Fattane", "middle": [], "last": "Zarrinkalam", "suffix": "" }, { "first": "Jamil", "middle": [], "last": "Samouh", "suffix": "" }, { "first": "Samad", "middle": [], "last": "Paydar", "suffix": "" }, { "first": "Ebrahim", "middle": [], "last": "Bagheri", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM '20", "volume": "", "issue": "", "pages": "3165--3172", "other_ids": { "DOI": [ "10.1145/3340531.3412775" ] }, "num": null, "urls": [], "raw_text": "Mahtab Tamannaee, Hossein Fani, Fattane Zarrinkalam, Jamil Samouh, Samad Paydar, and Ebrahim Bagheri. 2020. Reque: A configurable workflow and dataset collection for query refinement. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM '20, page 3165-3172, New York, NY, USA.
Association for Computing Machinery.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "FEVER: a large-scale dataset for fact extraction and VERification", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Christodoulopoulos", "suffix": "" }, { "first": "Arpit", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "809--819", "other_ids": { "DOI": [ "10.18653/v1/N18-1074" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Cord-19: The covid-19 open research dataset", "authors": [ { "first": "Lucy", "middle": [ "Lu" ], "last": "Wang", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Yoganand", "middle": [], "last": "Chandrasekhar", "suffix": "" }, { "first": "Russell", "middle": [], "last": "Reas", "suffix": "" }, { "first": "Jiangjiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Burdick", "suffix": "" }, { "first": "Darrin", "middle": [], "last": "Eide", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Funk", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Katsis", "suffix": "" }, { "first": "Rodney", "middle": [ "Michael" ], "last": "Kinney", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Doug Burdick, Darrin Eide, Kathryn Funk, Yannis Katsis, Rodney Michael Kinney, et al. 2020. Cord-19: The covid-19 open research dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "\"Liar, liar pants on fire\": A new benchmark dataset for fake news detection", "authors": [ { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.00648" ] }, "num": null, "urls": [], "raw_text": "William Yang Wang. 2017. \"Liar, liar pants on fire\": A new benchmark dataset for fake news detection.
arXiv preprint arXiv:1705.00648.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Recovery: A multimodal repository for covid-19 news credibility research", "authors": [ { "first": "Xinyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Apurva", "middle": [], "last": "Mulay", "suffix": "" }, { "first": "Emilio", "middle": [], "last": "Ferrara", "suffix": "" }, { "first": "Reza", "middle": [], "last": "Zafarani", "suffix": "" } ], "year": 2020, "venue": "International Conference on Information and Knowledge Management, Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyi Zhou, Apurva Mulay, Emilio Ferrara, and Reza Zafarani. 2020. Recovery: A multimodal repository for covid-19 news credibility research. In International Conference on Information and Knowledge Management, Proceedings.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Detection and resolution of rumours in social media: A survey", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Ahmet", "middle": [], "last": "Aker", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" } ], "year": 2018, "venue": "ACM Comput. Surv", "volume": "51", "issue": "2", "pages": "", "other_ids": { "DOI": [ "10.1145/3161603" ] }, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018. Detection and resolution of rumours in social media: A survey. ACM Comput. Surv., 51(2).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Analysing how people orient to and spread rumours in social media by looking at conversational threads", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Geraldine", "middle": [], "last": "Wong Sak Hoi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Tolmie", "suffix": "" } ], "year": 2016, "venue": "PloS one", "volume": "11", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016. Analysing how people orient to and spread rumours in social media by looking at conversational threads.
PloS one, 11(3):e0150989.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "The PHEMEPlus dataset consists of labelled Twitter rumours, their conversation thread, and corresponding evidence retrieved from the web. This is an adapted example.", "uris": null, "num": null }, "TABREF1": { "text": "Statistics of the PHEMEPlus dataset, built by extending the PHEME-5 dataset with retrieved relevant articles. All but 2 rumours have at least 1 associated article.", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF2": { "text": "Original Rumour: MORE: Massacre suspects believed to have taken hostage and holed up in small industrial town northeast of Paris: #CharlieHebdo", "content": "
Query Strategy | Query Text
Preprocessed | before:2015-01-09 MORE : Massacre suspects believed to have taken hostage and holed up in small industrial town northeast of Paris :
StanfordNLP | before:2015-01-09 (Charlie Hebdo) Massacre suspects small industrial town northeast
ClausIE | before:2015-01-09 (Charlie Hebdo) Massacre suspects believed to have taken hostage holed up in small industrial town northeast of Paris
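To make the strategies above concrete, here is a minimal Python sketch (not the authors' released code) of the "Preprocessed" strategy: Twitter-specific tokens are stripped from the rumour text and a before: date operator is prepended so the search engine only returns evidence published around the time of the rumour. The helper name build_query, the exact cleaning regexes, and the one-day cutoff are illustrative assumptions; the extra spaces around punctuation in the table come from tokenisation, which this sketch omits.

```python
# A sketch of "Preprocessed" query formulation; the helper name and
# the one-day cutoff are assumptions, not the paper's exact pipeline.
import re
from datetime import date, timedelta

def build_query(tweet_text: str, tweet_date: date) -> str:
    text = re.sub(r"https?://\S+", "", tweet_text)  # drop URLs
    text = re.sub(r"[@#]\w+", "", text)             # drop mentions and hashtags
    text = re.sub(r"\s+", " ", text).strip()        # collapse whitespace
    cutoff = tweet_date + timedelta(days=1)         # assumed: day after posting
    return f"before:{cutoff.isoformat()} {text}"

# Approximates the first row of the table (Charlie Hebdo, 8 Jan 2015):
print(build_query(
    "MORE: Massacre suspects believed to have taken hostage and holed up "
    "in small industrial town northeast of Paris: #CharlieHebdo",
    date(2015, 1, 8),
))
```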
", "num": null, "type_str": "table", "html": null }, "TABREF3": { "text": "", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF5": { "text": "Performance of the search strategies, evaluated by our evaluation metrics.", "content": "
", "num": null, "type_str": "table", "html": null }, "TABREF7": { "text": "Overlap of retrieved articles with articles from rumour responses.", "content": "
", "num": null, "type_str": "table", "html": null }, "TABREF9": { "text": "Per-event and per-fold F1 scores from the BERT, RoBERTa, and NLI-SAN models. The 2-letter column headings abbreviate the names of individual rumourous events in PHEME (as inTable 1).", "content": "
", "num": null, "type_str": "table", "html": null } } } }