{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:18.001073Z" }, "title": "Neural Re-rankers for Evidence Retrieval in the FEVEROUS Task", "authors": [ { "first": "Mohammed", "middle": [], "last": "Saeed", "suffix": "", "affiliation": { "laboratory": "", "institution": "EURECOM", "location": { "country": "France" } }, "email": "" }, { "first": "Giulio", "middle": [], "last": "Alfarano", "suffix": "", "affiliation": { "laboratory": "", "institution": "EURECOM", "location": { "country": "France" } }, "email": "" }, { "first": "Khai", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "EURECOM", "location": { "country": "France" } }, "email": "" }, { "first": "Duc", "middle": [], "last": "Pham", "suffix": "", "affiliation": { "laboratory": "", "institution": "EURECOM", "location": { "country": "France" } }, "email": "" }, { "first": "Rapha\u00ebl", "middle": [], "last": "Troncy", "suffix": "", "affiliation": { "laboratory": "", "institution": "EURECOM", "location": { "country": "France" } }, "email": "" }, { "first": "Paolo", "middle": [], "last": "Papotti", "suffix": "", "affiliation": { "laboratory": "", "institution": "EURECOM", "location": { "country": "France" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Computational fact-checking has gained a lot of traction in the machine learning and natural language processing communities. A plethora of solutions have been developed, but methods which leverage both structured and unstructured information to detect misinformation are of particular relevance. In this paper, we tackle the FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) challenge which consists of an open source baseline system together with a benchmark dataset containing 87,026 verified claims. 
We extend this baseline model by improving the evidence retrieval module, yielding the best evidence F1 score among the competitors in the challenge leaderboard, while obtaining an overall FEVEROUS score of 0.20 (5th best ranked system).", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Computational fact-checking has gained a lot of traction in the machine learning and natural language processing communities. A plethora of solutions have been developed, but methods which leverage both structured and unstructured information to detect misinformation are of particular relevance. In this paper, we tackle the FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) challenge which consists of an open source baseline system together with a benchmark dataset containing 87,026 verified claims. We extend this baseline model by improving the evidence retrieval module, yielding the best evidence F1 score among the competitors in the challenge leaderboard, while obtaining an overall FEVEROUS score of 0.20 (5th best ranked system).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The volume of potentially misleading and false claims has surged with the increasing usage of the web and social media. No barriers exist for publishing information, which makes anyone capable of diffusing false or biased claims while reaching large audiences with ease (Baptista and Gradim, 2020). One approach to dealing with this problem is computational fact-checking (Wu et al., 2014) , where the automation of the verification pipeline or parts of it is flourishing due to advances in natural language processing (Nakov et al., 2021; Saeed and Papotti, 2021) . 
Along these lines, several datasets and fact-evaluation algorithms have been proposed (Kotonya and Toni, 2020) .", "cite_spans": [ { "start": 372, "end": 389, "text": "(Wu et al., 2014)", "ref_id": "BIBREF23" }, { "start": 519, "end": 539, "text": "(Nakov et al., 2021;", "ref_id": "BIBREF14" }, { "start": 540, "end": 564, "text": "Saeed and Papotti, 2021)", "ref_id": "BIBREF21" }, { "start": 653, "end": 677, "text": "(Kotonya and Toni, 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we report on our effort in tackling the FEVEROUS challenge (Aly et al., 2021) . The provided dataset consists of a set of textual claims verified against evidence retrieved from a corpus of English Wikipedia pages. The claims are labeled as supported, refuted, or NEI (Not Enough Information). Evidence can be unstructured (such as sentences) or structured (such as general tables, infoboxes, lists, etc.). The task is to return the right label with the correct evidence. The baseline model is divided into two main parts: an evidence retrieval part and a verdict prediction part. The evaluation is performed through the so-called FEVEROUS score, which considers both the correct retrieval of the evidence and the correct label predictions. In this paper, we propose an enhanced version of this baseline model that improves the retrieval component through a re-ranking process of pages, resulting in a more precise model.", "cite_spans": [ { "start": 74, "end": 92, "text": "(Aly et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the remainder of this paper, we first briefly describe the challenge task and the supplied data, and we detail our extension (Section 2). We then provide experimental results obtained on the development dataset and discuss observations from our error analysis (Section 3). 
We conclude with a discussion of other research directions that can be applied to improve results for the FEVEROUS task (Section 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin by reviewing the given baseline, and then propose an extension to it that improves the precision and recall of the page-retrieval module at the cost of more computation time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "In FEVEROUS (Aly et al., 2021) , the aim is to determine the veracity of a claim c. This is done by: (i) acquiring a set of evidence E which could contain sentences extracted from a Wikipedia page, or cell(s) from a Wikipedia table, and (ii) predicting a label y \u2208 {Supports, Refutes, NEI}.", "cite_spans": [ { "start": 12, "end": 30, "text": "(Aly et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "FEVEROUS Baseline", "sec_num": "2.1" }, { "text": "The proposed baseline is simple yet competitive (Aly et al., 2021) . For (i), a combination of entity-matching and TF-IDF scoring is used to identify the most prominent Wikipedia pages (Chen et al., 2017) . k pages are selected by matching entities extracted from the claim to Wikipedia pages.", "cite_spans": [ { "start": 48, "end": 66, "text": "(Aly et al., 2021)", "ref_id": null }, { "start": 185, "end": 204, "text": "(Chen et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "FEVEROUS Baseline", "sec_num": "2.1" }, { "text": "If needed, remaining pages are identified using TF-IDF matching between the claim and the introductory sentence of the page. Given the extracted Wikipedia pages, sentences are scored through a dot product with the claim in the TF-IDF space, where the top l sentences are retrieved. Similarly, the top q tables are extracted where the TF-IDF vector of the table title is used to represent a table. 
The tables are then linearized, pre-processed to respect the input-size limit of the classifier (Oguz et al., 2020) , and then used, alongside the claim, to fine-tune a RoBERTa model (Liu et al., 2020 ) on a binary token classification task.", "cite_spans": [ { "start": 493, "end": 512, "text": "(Oguz et al., 2020)", "ref_id": "BIBREF18" }, { "start": 580, "end": 597, "text": "(Liu et al., 2020", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "FEVEROUS Baseline", "sec_num": "2.1" }, { "text": "For (ii), given the retrieved evidence, the final verdict is predicted using a RoBERTa model with a sequence-classification layer, which takes as input the claim concatenated with the retrieved pieces of evidence. The model has been trained on a set of labelled claims (71,291 samples) with their associated evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FEVEROUS Baseline", "sec_num": "2.1" }, { "text": "It is clear that to enhance the system, evidence retrieval should be a top priority as identifying the correct evidence is crucial for the verdict predictor to function properly. We focus on enhancing the identification of Wikipedia pages by utilizing advances in the information retrieval (IR) community where neural ranking models have been proposed for better data retrieval (Mitra et al., 2016; Hui et al., 2018) .", "cite_spans": [ { "start": 378, "end": 398, "text": "(Mitra et al., 2016;", "ref_id": "BIBREF13" }, { "start": 399, "end": 416, "text": "Hui et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Extension", "sec_num": "2.2" }, { "text": "A simple IR pipeline comprises a two-stage reranking process where: (a) first, a large number of documents to a given query are retrieved from a corpus using a standard mechanism such as TF-IDF or BM25; (b) second, the documents are scored and reranked using a more computationally-demanding method. 
Given that neural ranking methods have shown success in the IR community (Guo et al., 2019) , we use such a neural re-ranker as part of our extension.", "cite_spans": [ { "start": 373, "end": 391, "text": "(Guo et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Extension", "sec_num": "2.2" }, { "text": "For (a), we use the current page-retriever based on entity-matching and TF-IDF to retrieve a higher number of pages. For (b), the re-ranker model provides a score s_i indicating how relevant a page p_i is to an input claim c. The re-ranker is based on a pre-trained BERT model (Devlin et al., 2019) that is fine-tuned on the passage re-ranking task of the MS MARCO dataset (Nguyen et al., 2016) to minimize the binary cross-entropy loss:", "cite_spans": [ { "start": 277, "end": 298, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 373, "end": 394, "text": "(Nguyen et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Extension", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = -\\sum_{i \\in I^+} \\log(s_i) - \\sum_{i \\in I^-} \\log(1 - s_i)", "eq_num": "(1)" } ], "section": "Proposed Extension", "sec_num": "2.2" }, { "text": "where I^+ and I^- are the sets of indices of the relevant and non-relevant pages respectively in the top-1,000 MS MARCO documents retrieved with BM25 (Nogueira and Cho, 2019) . 
We designate a page re-ranker model as a function PR(m) which takes a set of candidate pages for a claim (usually pre-selected by a less computationally demanding method such as TF-IDF to limit the set of candidates), scores them, and outputs the top m pages.", "cite_spans": [ { "start": 150, "end": 174, "text": "(Nogueira and Cho, 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Extension", "sec_num": "2.2" }, { "text": "Model. We rely on the cross-encoder model fine-tuned on the MS MARCO Passage Ranking task (Reimers and Gurevych, 2019) provided on the Hugging Face model hub. We feed the claim paired with each extracted page into the re-ranker model to obtain the scores used to re-rank the respective pages. Settings. We set k = 150 and m = 5, where 150 pages are first extracted through entity-matching+TF-IDF, scored with the re-ranker model, and then the top-5 pages for each claim are extracted. The remainder of the pipeline remains intact (l = 5, q = 3). We designate this pipeline by BLpage(150) \u2192 PR(5) \u2192 tfidf(5, 3). Our code can be found at https://gitlab.eurecom.", "cite_spans": [ { "start": 90, "end": 118, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "fr/saeedm1/eurecom-fever.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We see improvement with the page re-ranker, as the coverage of documents has increased compared to the baseline. For instance, for k = 5, the retriever without the re-ranker achieves a document coverage of 69% on the dev set, while the addition of the re-ranker enhances the coverage to around 79%, which, in turn, improves the FEVEROUS and F1 scores compared to the initial baseline (Table 1, in bold). While the page re-ranker improves the document coverage, we do not observe pronounced improvements on the system as a whole. 
Even with a better page retriever, an increase in the FEVEROUS and F1 scores also requires improvements in the sentence and cell evidence retrievers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Although more time-demanding, the re-ranker gives more nuanced results than an entity-matching approach (Aly et al., 2021) . For example, given the following excerpt of a claim: \"Family Guy is an American animated sitcom that features five main voice actors [...] List of Family Guy cast members, and Family Guy. Finally, we observe that the entity-matching process is brittle and fails to match the sub-string \"Angela Santomero\" to the page Angela C. Santomero as it only performs exact string matching.", "cite_spans": [ { "start": 104, "end": 122, "text": "(Aly et al., 2021)", "ref_id": null }, { "start": 258, "end": 263, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "There are some cases where entity-matching+TF-IDF outperformed the re-ranker: some of those are cases where the Wikipedia page content is small and does not bring much benefit on a semantic level and this is where TF-IDF works better. We observe that we tend to miss the correct page when there are several pages that share similar semantics. For example, given the claim: \"Seven notable animated television series, including Super Why!, a children's educational show created by Angela C. Santomero and Samantha Freeman Alpert, Phineas and Ferb and WordGirl, were released in September 2007.\", the page re-ranker retrieves TV shows that are produced by Angela C. Santomero. However, the correct page Phineas and Ferb does not appear in the top-5 predictions, and other pages take the lead, whereas the baseline can identify the correct page by entity matching, although its predictions are not as coherent as those of the page re-ranker. Other Attempts. 
We have experimented with varying the number of extracted pages k. We also measure the time taken for re-ranking. Table 2 shows the results. We observe that increasing the number of pages to extract does not always increase the FEVEROUS score, as more candidate pages act as distractors to the other modules in the pipeline.", "cite_spans": [], "ref_spans": [ { "start": 1068, "end": 1075, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "We have attempted to perform other extensions to the system that we describe below (Table 1) .", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 92, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Firstly, we configured the re-ranking system to extract fewer pages (50), but it worsened the scores. This configuration is defined as BLpage(50) \u2192 PR(5) \u2192 tfidf(5, 3) .", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 167, "text": "(5, 3)", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Furthermore, we applied the same re-ranking approach at the sentence level. After obtaining 150 pages from the page re-ranker, we continue to retrieve all sentences from every page and re-rank them using the same passage re-ranking model (Nguyen et al., 2016) , (BLpage(150) \u2192 PR(5) \u2192 SR(5) \u2192 tfidftable(3)). However, despite the good outputs of the page re-ranker, we could not obtain better results from the sentence re-ranker than from TF-IDF.", "cite_spans": [ { "start": 238, "end": 259, "text": "(Nguyen et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Regarding how relevant sentences and tables are chosen as evidence, apart from TF-IDF + Cosine Similarity, we also experimented with the Okapi BM25 scoring function (Robertson et al., 1995) . 
This is applied after the pages are re-ranked (BLpage(150) \u2192 PR(5) \u2192 BM25(5, 3)). Surprisingly, although BM25 is generally preferred for document retrieval, in our case, it did not lead to better results compared to TF-IDF. One possible cause might lie in text preprocessing, as we did not fully explore different combinations of preprocessing functions.", "cite_spans": [ { "start": 165, "end": 189, "text": "(Robertson et al., 1995)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Lastly, we attempted to improve the verdict predictor by (i) fine-tuning the verdict classifier on the full training dataset (BL(5, 5, 3)full) and by (ii) utilizing other pre-trained models that are either larger or were already fine-tuned on an NLI dataset. However, we did not observe significant improvements: their performance on the dev set was either on par with or slightly worse than the baseline model, signaling that enhancing the second part of the system requires more substantial changes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "In this work, we have proposed the inclusion of a neural re-ranker model as a refinement step after standard methods such as TF-IDF. While more computationally intensive, it yields improvements in document retrieval, where the results are more sound. There are, of course, more directions worth exploring to improve the results further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "4" }, { "text": "Sentence retrieval could be improved by incorporating a pre-trained neural network that performs semantic matching between the claims and the sentences. 
In one such model, the text sequences are encoded, then passed through an alignment layer that computes aligned representations of the encoded input sequences, followed by a matching layer that performs the semantic matching (Nie et al., 2018) . Such models have been applied to the FEVER dataset (Thorne et al., 2018) and have been shown to outperform the TF-IDF approach (Nie et al., 2018) .", "cite_spans": [ { "start": 378, "end": 396, "text": "(Nie et al., 2018)", "ref_id": "BIBREF16" }, { "start": 450, "end": 471, "text": "(Thorne et al., 2018)", "ref_id": "BIBREF22" }, { "start": 526, "end": 544, "text": "(Nie et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "4" }, { "text": "Cell retrieval could be enhanced by utilizing pre-trained models over tables that outperform pretrained models over text (Herzig et al., 2021) . Several systems that exploit table structure have been proposed for the task of fact-checking a claim over a table. However, not all of them can be used in every setting as each system holds different attributes and dimensions that need to be comprehended to better integrate them in certain tasks (Saeed and Papotti, 2021) . For example, some systems such as SCRUTINIZER (Karagiannis et al., 2020) are dependent on the table-schema and would not benefit in the FEVEROUS scenario where tables have varying schema. Yet, other systems such as TAPAS (Herzig et al., 2020) are schema-independent and can be fine-tuned on the available FEVEROUS dataset to provide a score for a given table, thus acting as a table ranker module. Some of these systems can even be directly trained on the data, to get domain-specific models. Once the tables have been identified, a classifier can be trained on top of models that output cell representations of a table, such as TaBERT (Yin et al., 2020) and TURL (Deng et al., 2020) , to extract the key cells for verdict prediction. 
Also, fine-tuning the re-ranker models on the given data is a viable approach. Finally, more sophisticated entity matching algorithms could be explored to avoid the \"exact match\" issues that we observed with the baseline's entity matching (C. et al., 2018) .", "cite_spans": [ { "start": 121, "end": 142, "text": "(Herzig et al., 2021)", "ref_id": "BIBREF7" }, { "start": 443, "end": 468, "text": "(Saeed and Papotti, 2021)", "ref_id": "BIBREF21" }, { "start": 517, "end": 543, "text": "(Karagiannis et al., 2020)", "ref_id": "BIBREF10" }, { "start": 692, "end": 713, "text": "(Herzig et al., 2020)", "ref_id": "BIBREF8" }, { "start": 1107, "end": 1125, "text": "(Yin et al., 2020)", "ref_id": "BIBREF24" }, { "start": 1135, "end": 1154, "text": "(Deng et al., 2020)", "ref_id": "BIBREF4" }, { "start": 1445, "end": 1462, "text": "(C. et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "4" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "FEVEROUS: fact extraction and verification over unstructured and structured information", "authors": [ { "first": "Rami", "middle": [], "last": "Aly", "suffix": "" }, { "first": "Zhijiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Michael", "middle": [ "Sejr" ], "last": "Schlichtkrull", "suffix": "" }, { "first": "James", "middle": [], "last": "Thorne", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: fact extraction and verification over unstructured and structured information. 
CoRR, abs/2106.05707.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Understanding fake news consumption: A review", "authors": [ { "first": "Pedro", "middle": [], "last": "Jo\u00e3o", "suffix": "" }, { "first": "Anabela", "middle": [], "last": "Baptista", "suffix": "" }, { "first": "", "middle": [], "last": "Gradim", "suffix": "" } ], "year": 2020, "venue": "Social Sciences", "volume": "", "issue": "10", "pages": "", "other_ids": { "DOI": [ "10.3390/socsci9100185" ] }, "num": null, "urls": [], "raw_text": "Jo\u00e3o Pedro Baptista and Anabela Gradim. 2020. Understanding fake news consumption: A review. Social Sciences, 9(10).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Smurf: Self-service string matching using random forests", "authors": [ { "first": "Paul", "middle": [], "last": "Suganthan", "suffix": "" }, { "first": "G", "middle": [ "C" ], "last": "", "suffix": "" }, { "first": "Adel", "middle": [], "last": "Ardalan", "suffix": "" }, { "first": "Anhai", "middle": [], "last": "Doan", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Akella", "suffix": "" } ], "year": 2018, "venue": "Proc. VLDB Endow", "volume": "12", "issue": "3", "pages": "278--291", "other_ids": { "DOI": [ "10.14778/3291264.3291272" ] }, "num": null, "urls": [], "raw_text": "Paul Suganthan G. C., Adel Ardalan, AnHai Doan, and Aditya Akella. 2018. Smurf: Self-service string matching using random forests. Proc. 
VLDB Endow., 12(3):278-291.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Reading Wikipedia to answer open-domain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Turl: Table understanding through representation learning", "authors": [ { "first": "Xiang", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Lees", "suffix": "" }, { "first": "You", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "Proc. VLDB Endow", "volume": "14", "issue": "", "pages": "307--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Deng, Huan Sun, Alyssa Lees, You Wu, and Cong Yu. 2020. Turl: Table understanding through representation learning. Proc. 
VLDB Endow., 14:307-319.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In ACL, pages 4171-4186.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A deep look into neural ranking models for information retrieval", "authors": [ { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yixing", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qingyao", "middle": [], "last": "Ai", "suffix": "" }, { "first": "Hamed", "middle": [], "last": "Zamani", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W. Bruce Croft, and Xueqi Cheng. 2019. A deep look into neural ranking models for information retrieval. 
CoRR, abs/1903.06902.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Open domain question answering over tables via dense retrieval", "authors": [ { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Syrine", "middle": [], "last": "Krichene", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Eisenschlos", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "512--519", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.43" ] }, "num": null, "urls": [], "raw_text": "Jonathan Herzig, Thomas M\u00fcller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 512-519, Online. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "TaPas: Weakly supervised table parsing via pre-training", "authors": [ { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Krzysztof", "middle": [], "last": "Nowak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Piccinno", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Eisenschlos", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4320--4333", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.398" ] }, "num": null, "urls": [], "raw_text": "Jonathan Herzig, Pawel Krzysztof Nowak, Thomas M\u00fcller, Francesco Piccinno, and Julian Eisenschlos. 
2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4320-4333, Online. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Co-pacrr: A context-aware neural ir model for ad-hoc retrieval", "authors": [ { "first": "Kai", "middle": [], "last": "Hui", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Yates", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Berberich", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "De Melo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Web Search and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-pacrr: A context-aware neural ir model for ad-hoc retrieval. In Proceedings of Web Search and Data Mining 2018.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Scrutinizer: A mixed-initiative approach to large-scale, data-driven claim verification", "authors": [ { "first": "Georgios", "middle": [], "last": "Karagiannis", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Saeed", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Papotti", "suffix": "" }, { "first": "Immanuel", "middle": [], "last": "Trummer", "suffix": "" } ], "year": 2020, "venue": "Proc. VLDB Endow", "volume": "13", "issue": "", "pages": "2508--2521", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgios Karagiannis, Mohammed Saeed, Paolo Papotti, and Immanuel Trummer. 2020. Scrutinizer: A mixed-initiative approach to large-scale, data-driven claim verification. Proc. 
VLDB Endow., 13(11):2508-2521.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explainable automated fact-checking: A survey", "authors": [ { "first": "Neema", "middle": [], "last": "Kotonya", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5430--5443", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neema Kotonya and Francesca Toni. 2020. Explainable automated fact-checking: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5430-5443, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning to match using local and distributed representations of text for web search", "authors": [ { "first": "Bhaskar", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Diaz", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Craswell", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2016. Learning to match using local and distributed representations of text for web search. 
CoRR, abs/1610.08136.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automated fact-checking for assisting human fact-checkers", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "David", "middle": [ "P", "A" ], "last": "Corney", "suffix": "" }, { "first": "Maram", "middle": [], "last": "Hasanain", "suffix": "" }, { "first": "Firoj", "middle": [], "last": "Alam", "suffix": "" }, { "first": "Tamer", "middle": [], "last": "Elsayed", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Barr\u00f3n-Cede\u00f1o", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Papotti", "suffix": "" }, { "first": "Shaden", "middle": [], "last": "Shaar", "suffix": "" }, { "first": "Giovanni", "middle": [ "Da", "San" ], "last": "Martino", "suffix": "" } ], "year": 2021, "venue": "IJCAI", "volume": "", "issue": "", "pages": "4826--4832", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov, David P. A. Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barr\u00f3n-Cede\u00f1o, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated fact-checking for assisting human fact-checkers. In IJCAI, pages 4826-4832.
ijcai.org.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "MS MARCO: A human generated machine reading comprehension dataset", "authors": [ { "first": "Tri", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Mir", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Tiwary", "suffix": "" }, { "first": "Rangan", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR, abs/1611.09268.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Combining fact extraction and verification with neural semantic matching networks", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Haonan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2018. Combining fact extraction and verification with neural semantic matching networks. CoRR, abs/1811.07039.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Passage re-ranking with BERT", "authors": [ { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodrigo Nogueira and Kyunghyun Cho. 2019.
Passage re-ranking with BERT. CoRR, abs/1901.04085.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Unified open-domain question answering with structured and unstructured knowledge", "authors": [ { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Peshterliev", "suffix": "" }, { "first": "Dmytro", "middle": [], "last": "Okhonko", "suffix": "" }, { "first": "Michael", "middle": [ "Sejr" ], "last": "Schlichtkrull", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Sejr Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2020. Unified open-domain question answering with structured and unstructured knowledge. CoRR, abs/2012.14610.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sentence-BERT: Sentence embeddings using siamese BERT-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Okapi at TREC-3", "authors": [ { "first": "Stephen", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "S", "middle": [], "last": "Walker", "suffix": "" }, { "first": "S", "middle": [], "last": "Jones", "suffix": "" }, { "first": "M", "middle": [ "M" ], "last": "Hancock-Beaulieu", "suffix": "" }, { "first": "M", "middle": [], "last": "Gatford", "suffix": "" } ], "year": 1995, "venue": "Overview of the Third Text REtrieval Conference (TREC-3)", "volume": "", "issue": "", "pages": "109--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC-3. In Overview of the Third Text REtrieval Conference (TREC-3), pages 109-126. Gaithersburg, MD: NIST.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Fact-checking statistical claims with tables", "authors": [ { "first": "Mohammed", "middle": [], "last": "Saeed", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Papotti", "suffix": "" } ], "year": 2021, "venue": "IEEE Data Eng. Bull", "volume": "44", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammed Saeed and Paolo Papotti. 2021. Fact-checking statistical claims with tables. IEEE Data Eng.
Bull., 44(3).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "FEVER: a large-scale dataset for fact extraction and VERification", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Christodoulopoulos", "suffix": "" }, { "first": "Arpit", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "809--819", "other_ids": { "DOI": [ "10.18653/v1/N18-1074" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Toward computational fact-checking", "authors": [ { "first": "You", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pankaj", "middle": [ "K" ], "last": "Agarwal", "suffix": "" }, { "first": "Chengkai", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2014, "venue": "Proc. VLDB Endow", "volume": "7", "issue": "", "pages": "589--600", "other_ids": { "DOI": [ "10.14778/2732286.2732295" ] }, "num": null, "urls": [], "raw_text": "You Wu, Pankaj K. Agarwal, Chengkai Li, Jun Yang, and Cong Yu. 2014. Toward computational fact-checking. Proc.
VLDB Endow., 7(7):589-600.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "TaBERT: Pretraining for joint understanding of textual and tabular data", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "Annual Conference of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Annual Conference of the Association for Computational Linguistics (ACL).", "links": null } }, "ref_entries": { "TABREF2": { "html": null, "num": null, "content": "