{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:29.571236Z" }, "title": "Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification", "authors": [ { "first": "Neema", "middle": [], "last": "Kotonya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Imperial College London", "location": {} }, "email": "" }, { "first": "Thomas", "middle": [], "last": "Spooner", "suffix": "", "affiliation": { "laboratory": "", "institution": "J.P. Morgan AI Research", "location": {} }, "email": "thomas.spooner@jpmorgan.com" }, { "first": "Daniele", "middle": [], "last": "Magazzeni", "suffix": "", "affiliation": { "laboratory": "", "institution": "J.P. Morgan AI Research", "location": {} }, "email": "daniele.magazzeni@jpmorgan.com" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "", "affiliation": { "laboratory": "", "institution": "Imperial College London", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data. 1 * Work done while the author was an intern at J.P. Morgan AI Research.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data. 1 * Work done while the author was an intern at J.P. Morgan AI Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Fact checking has become an increasingly important tool to combat misinformation. Indeed the study of automated fact checking in NLP (Vlachos and Riedel, 2014) , in particular, has yielded a number of valuable insights in recent times. 
These include task formulations such as matching for discovering already fact-checked claims (Shaar et al., 2020) , identifying neural fake news (Zellers et al., 2020) , fact verification in scientific (Wadden et al., 2020) and public health (Kotonya and Toni, 2020b) domains, and end-to-end fact verification (Thorne et al., 2018) , which is the subject of the FEVER-OUS benchmark dataset (Aly et al., 2021) .", "cite_spans": [ { "start": 133, "end": 159, "text": "(Vlachos and Riedel, 2014)", "ref_id": "BIBREF24" }, { "start": 329, "end": 349, "text": "(Shaar et al., 2020)", "ref_id": null }, { "start": 381, "end": 403, "text": "(Zellers et al., 2020)", "ref_id": null }, { "start": 438, "end": 459, "text": "(Wadden et al., 2020)", "ref_id": "BIBREF25" }, { "start": 478, "end": 503, "text": "(Kotonya and Toni, 2020b)", "ref_id": "BIBREF11" }, { "start": 546, "end": 567, "text": "(Thorne et al., 2018)", "ref_id": "BIBREF21" }, { "start": 626, "end": 644, "text": "(Aly et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A majority of automated fact checking studies only consider text as evidence for verifying claims.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, there have been a number of works which look at fact-checking with structured and semistructured data, mainly in the form of tables and knowledge bases (Chen et al., 2020) -but factchecking from both structured and unstructured data has been largely unexplored. Given the sophistication in the presentation of fake news, it is important to develop fact checking tools for assessing evidence from a wide array of evidence sources in order to reach a more accurate verdict regarding the veracity of claims.", "cite_spans": [ { "start": 162, "end": 181, "text": "(Chen et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose a graph-based representation that supports both textual and tabular evidence, thus addressing some of the key limitations of past architectures. This approach allows us to capture relations between evidence items as well as claim-evidence pairs, borrowing from the argumentation and argument mining literature (Cabrio and Villata, 2020; Vecchi et al., 2021) , as well as argument modeling for fact verification (Alhindi et al., 2018) .", "cite_spans": [ { "start": 335, "end": 361, "text": "(Cabrio and Villata, 2020;", "ref_id": "BIBREF4" }, { "start": 362, "end": 382, "text": "Vecchi et al., 2021)", "ref_id": "BIBREF22" }, { "start": 436, "end": 458, "text": "(Alhindi et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We experiment with two formulations for graph learning. For the first, we employ a multi-task learning paradigm to jointly train a graph attention network (Velickovic et al., 2018) for both the task of evidence extraction -which we model as a node selection task -and a graph-level veracity prediction task. 
In the second, we explicitly separate the verification and extraction tasks, where standard semantic search is used for evidence extraction, and veracity prediction is treated as a graph-level classification problem.", "cite_spans": [ { "start": 155, "end": 180, "text": "(Velickovic et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For veracity prediction we predict a label for each claim, one of SUPPORTS, REFUTES, or NOT-ENOUGH-INFO (NEI), which is conditioned on all relevant evidence, hence the intuition to frame veracity prediction as a graph-level prediction task. In both formulations, we employ context-aware table linearization templates to produce per-cell sequence representations of tabular evidence and thus construct evidence reasoning graphs where nodes have heterogeneous evidence types (i.e., representing sentences and tables on the same evidence reasoning graph).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions. The three main contributions of the paper are summarized below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Provide insightful empirical analysis of the new FEVEROUS benchmark dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Propose a novel framework for interpretable fact extraction using templates to derive context-aware per-cell linearizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Present a graph reasoning model for fact verification that supports both structured and unstructured evidence data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Both the joint model and separately trained models exhibit a significant improvement over the FEVEROUS baseline, as well as significant improvements for label accuracy and evidence recall. Our separated approach to fact extraction and verification achieves a FEVEROUS score of 0.23 and label accuracy of 53% on the blind test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Graph Reasoning for Fact Verification. Several works explore graph neural networks (GNN) for fact extraction and verification, both for finegrained evidence modelling (Liu et al., 2020; Zhong et al., 2020) and evidence aggregation for veracity prediction (Zhou et al., 2019) . Furthermore, graph learning has also been leveraged to build fake news detection models which learn from evidence from different contexts; e.g., user-based and content-based data (Liu et al., 2020; Lu and Li, 2020) . There are also non-neural approaches to fake news detection with graphs (Ahmadi et al., 2019; Kotonya and Toni, 2019) . However, to the best of our knowledge, this work is the first to employ a graph structure to jointly reason over both text and tabular evidence data in both single task learning (STL) and multi-task learning (MTL) settings. Table Linearization . A number of approaches have been adopted in NLP for table linearization. For example, Gupta et al. (2020) study natural language inference in the context of table linearizations, in particular they are interested to see if language models can infer entailment relations from table linearizations. 
The linearization approach employed by Schlichtkrull et al. (2021) is also used for automated fact verification. However, they linearize tables row- and column-wise, whereas we focus on cells, since evidence items in the FEVEROUS dataset are annotated at the table-cell level.", "cite_spans": [ { "start": 167, "end": 185, "text": "(Liu et al., 2020;", "ref_id": "BIBREF13" }, { "start": 186, "end": 205, "text": "Zhong et al., 2020)", "ref_id": "BIBREF28" }, { "start": 255, "end": 274, "text": "(Zhou et al., 2019)", "ref_id": "BIBREF29" }, { "start": 456, "end": 474, "text": "(Liu et al., 2020;", "ref_id": "BIBREF13" }, { "start": 475, "end": 491, "text": "Lu and Li, 2020)", "ref_id": "BIBREF14" }, { "start": 566, "end": 587, "text": "(Ahmadi et al., 2019;", "ref_id": "BIBREF0" }, { "start": 588, "end": 611, "text": "Kotonya and Toni, 2019)", "ref_id": "BIBREF9" }, { "start": 1196, "end": 1223, "text": "Schlichtkrull et al. (2021)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 838, "end": 857, "text": "Table Linearization", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Further to the FEVEROUS dataset statistics discussed by the task description paper (Aly et al., 2021) , we perform our own data exploration. We present insights from our data analysis of the FEVEROUS dataset, which we use to inform system design choices. Table types. Wikipedia tables can be categorized into one of two classes: infoboxes and general tables. Infoboxes are fixed-format tables which typically appear in the top right-hand corner of a Wikipedia article. General tables can convey a wider breadth of information (e.g., election results, sports match scores, the chronology of an event) and typically have more complex structures (e.g., multiple headers). List items can also be considered as a special subclass of tables, where the number of items is analogous to the number of columns and the nesting of the list signifies table rows.", "cite_spans": [ { "start": 83, "end": 101, "text": "(Aly et al., 2021)", "ref_id": null } ], "ref_spans": [ { "start": 255, "end": 266, "text": "Table types", "ref_id": null } ], "eq_spans": [], "section": "Data Analysis", "sec_num": "3" }, { "text": "Evidence types. The first observation we make is that, similar to the FEVER dataset (Thorne et al., 2018) , a sizeable portion of the training instances rely on evidence items which are extracted from the first few sentences of a Wikipedia article. The most common evidence items are the first and second sentences in a Wikipedia article, which appear in 36% and 18% of evidence sets, respectively. The four most frequent evidence cells all come from the first table, with 49% of first tables listed as evidence in the train and dev data being infoboxes. Further, the vast majority of cell evidence items are non-header cells; header cells account for only approximately 5.1% of tabular evidence in the train and dev datasets. A summary of these findings is provided in Table 1 for the most common evidence types in the training data.", "cite_spans": [ { "start": 84, "end": 105, "text": "(Thorne et al., 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 767, "end": 774, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data Analysis", "sec_num": "3" }, { "text": "Evidence item co-occurrences. We investigate the most common evidence pairs, both in individual evidence sets and also in the union of all evidence sets relating to a claim. 
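A minimal sketch of how these co-occurrence counts can be computed is given below; the field names follow the public FEVEROUS training JSONL (an evidence list of sets, each with a content list of ids such as Wolfgang Niedecken_cell_0_4_1), and the id normalisation is illustrative rather than the exact script behind this analysis.

```python
# Minimal sketch of the evidence co-occurrence analysis (illustrative only).
# Each training example is assumed to carry an "evidence" list of evidence
# sets, each with a "content" list of ids such as "Wolfgang Niedecken_cell_0_4_1".
import json
import re
from collections import Counter
from itertools import combinations

ELEMENT = re.compile(r"_(sentence|cell|header_cell|item|table_caption)_")

def element_id(eid):
    """Drop the page title, keeping e.g. 'sentence_0' or 'cell_0_4_1'."""
    match = ELEMENT.search(eid)
    return eid[match.start() + 1:] if match else eid

pair_counts = Counter()
with open("feverous_train.jsonl") as f:
    for line in f:
        example = json.loads(line)
        for evidence_set in example.get("evidence", []):
            # normalise ids, drop duplicates, then count unordered pairs
            items = sorted({element_id(e).upper() for e in evidence_set["content"]})
            pair_counts.update(combinations(items, 2))

print(pair_counts.most_common(10))
```
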
The most common evidence pair in the training data is (SENTENCE_0, SENTENCE_1), which accounts for 3.2% of evidence co-occurrences. The most common cell pair is (CELL_0_2_0, CELL_0_2_1). All of the ten most common co-occurrences either contain one of the first four sentences in an article or evidence from one of the first two tables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Analysis", "sec_num": "3" }, { "text": "NEI label. Lastly, we choose to explore instances of the NEI class. We sample 100 instances of NEI claims from the training data and note their qualitative attributes. We pay particular attention to this label as it is the least represented in the data. Unlike the FEVER score, the FEVEROUS metric requires the correct evidence, as well as the label, to be supplied for an NEI instance for credit to be awarded. Our analysis is summarized in Table 2 . We categorize mutations, using the FEVEROUS annotation scheme, as one of three types: entity substitution, including more facts than available in the provided evidence (i.e., including additional propositions), and paraphrasing or generalizing. We use Other to categorize claims with a mutation not captured by one of these three categories.", "cite_spans": [], "ref_spans": [ { "start": 439, "end": 446, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data Analysis", "sec_num": "3" }, { "text": "Entity Substitution: 21%; More facts than in evidence: 42%; Paraphrasing or generalizing: 36%; Other: 1%. Table 2 : We sample 100 NEI instances and categorize them according to the type of lexical mutation which results in the claim being unverifiable.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Mutation Type % Sample", "sec_num": null }, { "text": "We note that a number of NEI examples are mutations of SUPPORTS or REFUTES examples. For example, the claim in Table 3 is a mutation of a SUPPORTS instance where entity substitution (humans \u2192 reptiles) has been used to make the first clause unverifiable, hence changing the label to NEI.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Mutation Type % Sample", "sec_num": null }, { "text": "Nucleoporin 153, a protein which in reptiles is encoded by the NUP153 gene, is an essential component of the basket of nuclear pore complexes (NPCs) in vertebrates, and required for the anchoring of NPCs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Claim", "sec_num": null }, { "text": "Evidence Nucleoporin 153 (Nup153) is a protein which in humans is encoded by the NUP153 gene. It is an essential component of the basket of nuclear pore complexes (NPCs) in vertebrates, and required for the anchoring of NPCs. Table 3 : NEI example where the evidence is highlighted according to the part of the claim to which it refers. 
The text in bold is the substitution which resulted in the label changing from SUPPORTS to NEI.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Claim", "sec_num": null }, { "text": "Our proposed method for fact verification is an end-to-end system comprising three modules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "(1) A robust document retrieval procedure (see Section 4.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "(2) An evidence graph construction and intermediate evidence filtering process (see Section 4.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "(3) A joint veracity label prediction and evidence selection layer that reasons over the evidence graph (see Section 4.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "An illustration of the complete pipeline is provided in Figure 1 , and details of each processing stage are provided in the following sections.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "For document retrieval, we employ an entity linking and API search approach similar to that of Hanselowski et al. (2018) . The WikiMedia API 2 is used to query Wikipedia for articles related to the claim, using named entities and noun phrases from the claim as search terms. These retrieved Wikipedia page titles form our candidate document set. Named entities that are not retrieved by the API are then extracted from the claim, as a handful of these identify pages which are present in ", "cite_spans": [ { "start": 95, "end": 120, "text": "Hanselowski et al. (2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval", "sec_num": "4.1" }, { "text": "Evidence [\"Wolfgang Niedecken_sentence_0\", \"Wolfgang Niedecken_cell_0_4_1\", \"Wolfgang Niedecken_sentence_1\"]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval", "sec_num": "4.1" }, { "text": "Context-Aware Linearizer Figure 1 : Our fact verification pipeline. We employ two graph reasoning approaches: STL, where evidence extraction and veracity prediction are modelled separately, and MTL, where further evidence filtering is performed jointly with veracity prediction by the Graph Reasoner.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 33, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Document Retrieval", "sec_num": "4.1" }, { "text": "the Wikipedia dump (e.g., /wiki/Lars_Hjorth is present in the provided Wikipedia evidence dump, but is not returned by the WikiMedia API). In the same vein, we discard titles which are returned by the API, but are not in the Wikipedia dump. TF-IDF and cosine similarity are employed to score and rerank the retrieved Wikipedia articles with respect to their similarity to the claim. As in the approach of Hanselowski et al. (2018) , the seven highest ranked pages are chosen at test time. For completeness, we also experiment with approaches to document retrieval which select pages based on a threshold score (Nie et al., 2019) . Ultimately, we find these methods yield lower precision.", "cite_spans": [ { "start": 405, "end": 430, "text": "Hanselowski et al. 
(2018)", "ref_id": "BIBREF7" }, { "start": 610, "end": 628, "text": "(Nie et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval", "sec_num": "4.1" }, { "text": "Similar to other fact verification systems (Augenstein et al., 2019; Hidey et al., 2020), we jointly train our model for both the evidence selection and veracity prediction tasks. In contrast to these approaches, however, we employ a graph reasoning module for the joint learning of the two tasks. We choose this approach to exploit the permutation invariance of evidence with respect to a claim, as there is no canonical ordering of evidence. Our graph formulation differs from previous graphbased fact verification systems in that we construct a heterogeneous graph to model both tabular and sequence evidence data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "In the following sections we will describe two specific approaches that are taken for the fact verification task: (1) where we condition the graph model to learn both node-level, fine-grained evi-dence selection and graph-level veracity label prediction simultaneously, and (2) where we only learn graph-level veracity prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "Linearizing Tabular Data. We linearize both table and list evidence data and generate from these linearizations a contextualized sequence representation which captures information about each cell as well as its surrounding page elements. This is accomplished using templates that distinguish explicitly between infoboxes and general tables. For the latter, we engineer the templates to handle two particular complexities that are present only in general tables: (1) nested headers, and (2) table cells which span multiple rows and multiple columns (see Figure 2 ). Furthermore, we also employ templates for producing context-rich representations of item lists (see Table 4 for more details). Graph Structure. We construct a fully connected graph G = (V, E), where each node n i \u2208 V represents a claim-evidence pair, similar to previ- ous evidence graphs for automated fact checking (Zhao et al., 2020; Zhou et al., 2019) . Self-loops are also included in G for each node in order to improve evidence reasoning, so the set of edges for the graph is", "cite_spans": [ { "start": 882, "end": 901, "text": "(Zhao et al., 2020;", "ref_id": "BIBREF27" }, { "start": 902, "end": 920, "text": "Zhou et al., 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 553, "end": 561, "text": "Figure 2", "ref_id": null }, { "start": 665, "end": 672, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "E = {(n i , n j ) | n i , n j \u2208 V }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "At test time, we take the Wikipedia pages output by the document retrieval module, segment each Wikipedia page into its constituent page items (i.e., sentences, table cells, table captions and list items), and refer to these as evidence items. These evidence items are then filtered. Using an ensemble of pre-trained S-BERT sentence embeddings (Reimers and Gurevych, 2019), we perform semantic search with the claim as our query. Cosine similarity is then used to rank the evidence items. 
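A minimal sketch of this filtering step is given below; it reduces the ensemble to averaging cosine scores over the two pre-trained checkpoints listed in the footnotes, and the helper name and top-k handling are illustrative rather than our exact implementation.

```python
# Illustrative sketch of semantic-search evidence filtering: the claim is the
# query, every segmented page item is a candidate, and candidates are ranked
# by cosine similarity averaged over an ensemble of S-BERT encoders.
from sentence_transformers import SentenceTransformer, util

encoders = [
    SentenceTransformer("msmarco-distilbert-base-v4"),
    SentenceTransformer("paraphrase-mpnet-base-v2"),
]

def rank_evidence(claim, candidates, top_k=25):
    """Return the top_k (candidate, score) pairs for a claim."""
    scores = None
    for encoder in encoders:
        claim_emb = encoder.encode(claim, convert_to_tensor=True)
        cand_emb = encoder.encode(candidates, convert_to_tensor=True)
        sims = util.cos_sim(claim_emb, cand_emb)[0]  # one similarity per candidate
        scores = sims if scores is None else scores + sims
    scores = scores / len(encoders)
    best = scores.argsort(descending=True)[:top_k]
    return [(candidates[i], float(scores[i])) for i in best]
```

Averaging the two encoders' scores is one simple aggregation rule; any other ensembling strategy would slot in at the same point.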
For the joint and single training approaches, we select a different number of evidence nodes; in particular, a larger graph is used with the former. For training, we select nodes to occupy the graph according to the following rule-set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "(1) If gold evidence, include as a node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "(2) For claims that require a single evidence item, include the top four candidates returned using our semantic search approach as nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "(3) For claims with more than one gold evidence item, retrieve the same number of candidates as gold items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "The union of these sets form the collection of nodes, V , that occupy the evidence graph G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "Node Representations. For the initial node representations, similar to Liu et al. (2020) and Zhao et al. (2020) , we represent evidence nodes with the claim to which they refer as context. The claim is concatenated with a constructed context-rich evidence sequence e i . When constructing the sequences, e i , we consider the unstructured evidence items (i.e, sentences and table captions) and the structured table and list items separately. For sentences and table captions the evidence sequence is generated by concatenating the evidence item with the page title which serves as context. For table cells and list items we perform a per cell linearization, where this linearization forms the evidence sequence for table and list item evidence items (see Table 4 for the templates used). For each evidence item, we feed this claim-evidence sequence pair to a RoBERTa encoder , and each node n i \u2208 V in an evidence graph has the pooled output of the last hidden state of the [CLS] token, h 0 i as its initial state:", "cite_spans": [ { "start": 71, "end": 88, "text": "Liu et al. (2020)", "ref_id": "BIBREF13" }, { "start": 93, "end": 111, "text": "Zhao et al. (2020)", "ref_id": "BIBREF27" }, { "start": 976, "end": 981, "text": "[CLS]", "ref_id": null } ], "ref_spans": [ { "start": 590, "end": 764, "text": "For table cells and list items we perform a per cell linearization, where this linearization forms the evidence sequence for table and list item evidence items (see Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "n i = h 0 i = RoBERTa CLS (c, e i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Reasoning Graph", "sec_num": "4.2" }, { "text": "Training graphs. We train two graph networks, one for joint veracity prediction and evidence extraction, and the second solely for the veracity prediction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Selection and Veracity Prediction", "sec_num": "4.3" }, { "text": "Oversampling NEI Instances. 
As discussed in Section 3, the FEVEROUS dataset suffers from a significant class imbalance with respect to the NEI instances. Similar to the baseline approach, we employ techniques for generating new NEI instances in order to address this issue. Concretely, we use two data augmentation strategies in order to increase the number of NEI instances at train time: (1) evidence set reduction, and (2) claim mutation. For the first case, we randomly sample SUPPORTS and REFUTES instances and drop evidence. Given the distribution of entity-substituted and non-entity-substituted mutations - as discovered in our data analysis (see Section 3) - we make the choice to include in the training data: 15,000 constructed NEI examples made using the first approach, and 5,946 NEI examples constructed using the second. This means that a total of 92,237 NEI examples were used for model training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Selection and Veracity Prediction", "sec_num": "4.3" }, { "text": "For the first model, we perform the two tasks of fact extraction and verification, i.e., evidence selection and veracity prediction, separately. We make use of an ensemble semantic search method for extracting top evidence items for claims. We employ S-BERT 3 to encode the claim and the evidence items separately. We then compute cosine similarity for each claim-evidence pair. The 25 highest ranking tabular evidence items were chosen, and the top-scoring 5 sentences (and captions) for each claim were selected as the nodes of our evidence reasoning graph at test time. This is the evidence limit stated by the FEVEROUS metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "When constructing the evidence graph at test time, we choose to exclude header cell and list item evidence types as nodes, as they account for a very small portion of evidence items (see Section 3), and experimentation shows that the evidence extraction model has a bias to favour these evidence elements over sentences. We use two GAT layers in our graph reasoning model, with a hidden layer size of 128, an embedding size of 1024, and a global attention layer for node aggregation. The logits generated by the model are fed directly to a categorical cross entropy loss function, and the veracity label output probability distribution p i , for each evidence graph G i \u2208 G, is computed using the relation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p i = softmax(MLP(Wo i + b)),", "eq_num": "(2)" } ], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "o_i = \u2211_{n \u2208 V} softmax(h_gate(x_n)) \u00b7 h_\u0398(x_n). (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "MTL: Joint Verification and Extraction. 
We also experiment with a joint training or multi-task learning (MTL) approach in order to explore whether simultaneously learning the veracity label and evidence items can lead to improvements in label accuracy as well as evidence prediction recall and precision. For this approach, we construct larger evidence graphs at test time, including the thirty-five highest ranked evidence items according to the S-BERT evidence extraction module. The intention is for the graph network to learn a binary classification for each claim-evidence pair in the network. For the multi-task learning model, we increase the dimensions of our graph network by feeding our initial input graphs to two separate GAT components (in order to increase the model's capacity for learning the more complex multi-task objective), the outputs of which, h a and h b , are concatenated to form the representation h over which we compute global attention, where the combined representation takes the form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h = [h a ; h b ].", "eq_num": "(4)" } ], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "The binary cross entropy loss is then used for the node-level evidence selection task, and, as with the separated model, we use categorical cross entropy to compute the graph-level veracity prediction, as shown in (2) and (3). The resulting joint graph neural network is then trained with the linear-additive objective", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L joint = \u03bbL evidence + L label ,", "eq_num": "(5)" } ], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "taking the form of a Lagrangian with multiplier \u03bb \u2265 0, where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "L evidence = sigmoid(MLP(W i h + b)). (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "As with the previous approach, we feed the model logits to our loss functions and use an Adam optimizer to train the network, and set \u03bb = 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STL: Separate Verification and Extraction.", "sec_num": null }, { "text": "For all models, we make use of a ROBERTA-LARGE model which is pre-trained on a number of NLI datasets including NLI-FEVER (Nie et al., 2020) . We use a maximum sequence length of 512 for encoding all claim-evidence concatenated pairs. We experiment with the following learning rates [1e-5, 5e-5, 1e-4], ultimately choosing the learning rate underlined. Training was performed using a batch size of 64. We train the single objective model for 20k steps, choosing the weights with the minimum veracity prediction label loss, and train the joint model for 20k steps, taking the model with highest recall for evidence extraction. 
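A minimal sketch of a single optimisation step for this joint objective is given below; the model interface returning per-node and per-graph logits, the batch attribute names, and the learning rate shown are assumptions made purely for illustration.

```python
# Illustrative sketch of one MTL training step: binary cross entropy over the
# node-level evidence logits plus categorical cross entropy over the
# graph-level veracity logits, combined as in eq. (5) with lambda = 0.5.
# The `model` interface and the batch attribute names are assumptions.
import torch
import torch.nn.functional as F

def joint_step(model, optimizer, batch, lam=0.5):
    node_logits, graph_logits = model(batch)  # per-node and per-graph logits
    loss_evidence = F.binary_cross_entropy_with_logits(
        node_logits, batch.node_labels.float())  # is each node gold evidence?
    loss_label = F.cross_entropy(graph_logits, batch.label)  # SUPPORTS / REFUTES / NEI
    loss = lam * loss_evidence + loss_label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# e.g. optimizer = torch.optim.Adam(model.parameters(), lr=1e-5), one of the rates listed above
```
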
The Adam optimizer is used in training for both approaches.", "cite_spans": [ { "start": 122, "end": 140, "text": "(Nie et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Hyper-parameter Settings", "sec_num": "4.4" }, { "text": "We report the results of the entire fact extraction and verification pipeline, as well as the evaluation of the pipeline's performance for intermediate stages of the fact verification system, e.g., document retrieval and evidence selection. Document retrieval. Our method for DR shows significant improvement on the TF-IDF+DrQA approach used by the baseline. In particular we find that our document retrieval module sees gains from querying the Wikipedia dump for pages related to entities which are not retrieved by the WikiMedia API. However, we do note that our approach struggles to retrieve Wikipedia pages in cases relating to specific events which can only be inferred through reasoning over the claim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "For example, consider the following claim from the development dataset: \"2014 Sky Blue FC season number 18 Lindsi Cutshall (born October 18, 1990 ) played the FW position.\". In this case, the document selection process returns \"Sky Blue FC\", \"Lindsi Cutshall\", and \"2015 Sky Blue FC season\", but does not return the gold evidence page \"2014 Sky Blue FC season\" which is required for verification of the claim.", "cite_spans": [ { "start": 123, "end": 145, "text": "(born October 18, 1990", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We report recall@k for k = {3, 5, 7} where k is the number of Wikipedia page documents retrieved by the module. Our approach shows significant improvements over the baseline (see Table 5 ).", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 186, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Method Rec@3 Rec@5 Rec@7 Baseline 0.58 0.69 -Ours 0.65 0.73 0.80 Table 5 : Document retrieval results measured by Recall@k, where k is the number of documents retrieved. Results reported for the dev set.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Evidence selection and veracity prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "For evidence selection and veracity prediction, we observe that the approach trained for the single objective of veracity prediction marginally outperforms the jointly trained module (see Table 6 ). We hypothesize that the difficulty of learning to select the correct evidence nodes along with predicting veracity might be the cause of this. It is possible that performance of the joint model could be improved with better evidence representation or through the use of a different graph structure, e.g., by incorporating edge attributes. Table 6 : System performance of the dev set for evidence recall and label accuracy.", "cite_spans": [], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 6", "ref_id": null }, { "start": 538, "end": 545, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Finally, we submitted our blind test results for STL, which is our best performing method, to the after-competition FEVEROUS leaderboard. 
Our system outperforms the baseline significantly on both the FEVEROUS metric and label accuracy, as reported in Table 7 . Furthermore, our results on the blind test data show almost no degradation from development to test set with respect to evidence recall, which remains at 37%. The reduction in our FEVEROUS score between the development and test data is therefore mainly due to a decrease in label accuracy, from 63% on the development data to 53% for the test data. We are confident that this could be improved with better label accuracy for the NEI class. Table 7 : Results for label accuracy (LA) and FEVEROUS score (FS) for the full pipeline on both the development and blind test datasets.", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 262, "text": "Table 7", "ref_id": null }, { "start": 704, "end": 711, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "We present an example of a claim from the development dataset, which requires both tabular and textual evidence to be verified. We show how it is labelled by our pipeline (see Table 8 ). For this example, our evidence selection module correctly identifies all three evidence items required to fact-check the claim. Furthermore, two of the three evidence items receive the highest relevance scores from our evidence selection module. Of the irrelevant evidence items retrieved for this claim, eleven out of twenty-two come from an unrelated Wikipedia page (\"Scomadi Turismo Leggera\"). The correct label of SUPPORTS is also predicted for this instance. In order to explore the interpretability of system predictions, for this same instance we analyse the node attention weights for the first GAT layer; they are shown in parentheses for each predicted evidence item in Table 8 . We can see that the two evidence nodes with the highest values both correspond to items in the gold evidence set. However, the third gold evidence item, SCOMADI_SENTENCE_15, has a much lower weight than a number of items which are not in the gold evidence set.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 8", "ref_id": null }, { "start": 865, "end": 872, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Case Study and System Interpretability", "sec_num": "5.1" }, { "text": "In this work, we have demonstrated two novel approaches for fact extraction and verification that support both structured and unstructured evidence. These architectures were motivated by literature in argumentation, and also by the empirical analysis presented in Section 3. Our results show significant improvement over the shared task baseline for Claim \"In 2019, Scomadi, a private limited company with limited liability, was bought by a British owner which changed Scomadi's management structure.\" Evidence Scomadi_cell_0_0_1, Scomadi_sentence_14, Scomadi_sentence_15. 
Predicted Evidence (1) Scomadi_cell_0_0_1(0.1794),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "(2) Scomadi_sentence_14 (0.1203), (3) Scomadi_table_caption_0(0.0871), (4) Scomadi_cell_0_3_1 (0.0685), (5) Scomadi_cell_0_7_1(0.0561), (6) Scomadi_cell_0_2_1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "(0.0472) (7) Scomadi_cell_0_8_1 (0.0405) (8) Scomadi_sentence_15", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "(0.0360), (9) Scomadi_sentence_11 (0.0324), (10) Scomadi_sentence_0(0.0292), (11) Scomadi_cell_0_6_1(0.0266), (12) Scomadi_cell_0_5_1(0.0243), (13) Scomadi_cell_0_1_1(0.0224), (14) Scomadi_cell_0_4_1(0.0208).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Label SUPPORTS Predicted Label SUPPORTS Table 8 : Example claim from the development dataset which requires extracting both tabular and textual evidence in order for it to be verified. For brevity, we only show the top fourteen (out of twenty-five) extracted evidence items; correctly predicted evidence is highlighted.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 47, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "both the joint and separated models, with the latter generating a marginal improvement on the FEVEROUS metric compared with the former. Overall, we conclude that the use of graph-based reasoning in fact verification systems could hold great promise for future lines of work. We hypothesize that exploring varied task formulations could potentially yield strong improvements in model performance, for example: constructing reasoning graphs on an evidence set level, or using the FEVER dataset to augment the NEI claims used during training, or further fine-tuning sentence embeddings on the FEVEROUS dataset. Furthermore, we believe further insights could be gained by evaluating our table linearization approach on other datasets related to fact verification over tabular data. In addition to this, we hope to conduct further experiments with our graph-based approach using structured and unstructured evidence independently, to further investigate which aspect of our approach led to the improvement on the FEVEROUS score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Incorporating prior knowledge or constraints into the training procedure would also be an interesting direction. Finally, we believe that our graph-based approach lends itself well to the extraction of veracity prediction explanations (Kotonya and Toni, 2020a) , obtained from evidence extracted from our underpinning graphs as justifications for claims. 
The ability to provide evidence for a claim, and to justify this, would better enable the integration of these techniques in practical systems.", "cite_spans": [ { "start": 235, "end": 260, "text": "(Kotonya and Toni, 2020a)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "This system was not submitted to the shared task competition, but instead to the after-competition leaderboard under the name CARE (Context Aware REasoner).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.mediawiki.org/wiki/API", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the 'msmarco-distilbert-base-v4' and 'paraphrase-mpnet-base-v2' pretrained models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We denote the concatenation of vectors x and y by [x; y].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Disclaimer This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates (\"J.P. Morgan\"), and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. \u00a9 2021 JPMorgan Chase & Co. All rights reserved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Explainable fact checking with probabilistic answer set programming", "authors": [ { "first": "Naser", "middle": [], "last": "Ahmadi", "suffix": "" }, { "first": "Joohyung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Papotti", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Saeed", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Truth and Trust Online Conference (TTO 2019)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naser Ahmadi, Joohyung Lee, Paolo Papotti, and Mohammed Saeed. 2019. Explainable fact checking with probabilistic answer set programming. 
In Pro- ceedings of the 2019 Truth and Trust Online Confer- ence (TTO 2019), London, UK, October 4-5, 2019.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Where is your evidence: Improving factchecking by justification modeling", "authors": [ { "first": "Savvas", "middle": [], "last": "Tariq Alhindi", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Petridis", "suffix": "" }, { "first": "", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)", "volume": "", "issue": "", "pages": "85--90", "other_ids": { "DOI": [ "10.18653/v1/W18-5513" ] }, "num": null, "urls": [], "raw_text": "Tariq Alhindi, Savvas Petridis, and Smaranda Mure- san. 2018. Where is your evidence: Improving fact- checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VER- ification (FEVER), pages 85-90, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. Feverous: Fact extraction and verification over unstructured and structured information", "authors": [ { "first": "Rami", "middle": [], "last": "Aly", "suffix": "" }, { "first": "Zhijiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schlichtkrull", "suffix": "" }, { "first": "James", "middle": [], "last": "Thorne", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Aly, Zhijiang Guo, Michael Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. Feverous: Fact extraction and verifica- tion over unstructured and structured information.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "MultiFC: A real-world multi-domain dataset for evidence-based fact checking of claims", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Christina", "middle": [], "last": "Lioma", "suffix": "" }, { "first": "Dongsheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lucas", "middle": [ "Chaves" ], "last": "Lima", "suffix": "" }, { "first": "Casper", "middle": [], "last": "Hansen", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Hansen", "suffix": "" }, { "first": "Jakob", "middle": [ "Grue" ], "last": "Simonsen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4685--4697", "other_ids": { "DOI": [ "10.18653/v1/D19-1475" ] }, "num": null, "urls": [], "raw_text": "Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Chris- tian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A real-world multi-domain dataset for evidence-based fact checking of claims. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4685-4697, Hong Kong, China. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Proceedings of the 7th Workshop on Argument Mining", "authors": [ { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Cabrio and Serena Villata, editors. 2020. Pro- ceedings of the 7th Workshop on Argument Mining. Association for Computational Linguistics, Online.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Tabfact : A largescale dataset for table-based fact verification", "authors": [ { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hongmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yunkai Zhang Jianshu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shiyang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenhu Chen, Hongmin Wang, Yunkai Zhang Jian- shu Chen, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. Tabfact : A large- scale dataset for table-based fact verification. In International Conference on Learning Representa- tions (ICLR), Addis Ababa, Ethiopia.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "INFOTABS: Inference on tables as semi-structured data", "authors": [ { "first": "Vivek", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Maitrey", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Pegah", "middle": [], "last": "Nokhiz", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Srikumar", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2309--2324", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.210" ] }, "num": null, "urls": [], "raw_text": "Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. 2020. INFOTABS: Inference on tables as semi-structured data. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 2309-2324, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "UKP-athene: Multi-sentence textual entailment for claim verification", "authors": [ { "first": "Andreas", "middle": [], "last": "Hanselowski", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zile", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniil", "middle": [], "last": "Sorokin", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)", "volume": "", "issue": "", "pages": "103--108", "other_ids": { "DOI": [ "10.18653/v1/W18-5516" ] }, "num": null, "urls": [], "raw_text": "Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. UKP-athene: Multi-sentence textual entailment for claim verification. In Pro- ceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 103-108, Brus- sels, Belgium. Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "DeSePtion: Dual sequence prediction and adversarial examples for improved fact-checking", "authors": [ { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Tariq", "middle": [], "last": "Alhindi", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Varia", "suffix": "" }, { "first": "Kriste", "middle": [], "last": "Krstovski", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8593--8606", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.761" ] }, "num": null, "urls": [], "raw_text": "Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, and Smaranda Muresan. 2020. DeSePtion: Dual se- quence prediction and adversarial examples for im- proved fact-checking. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 8593-8606, Online. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Gradual argumentation evaluation for stance aggregation in automated fake news detection", "authors": [ { "first": "Neema", "middle": [], "last": "Kotonya", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 6th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "156--166", "other_ids": { "DOI": [ "10.18653/v1/W19-4518" ] }, "num": null, "urls": [], "raw_text": "Neema Kotonya and Francesca Toni. 2019. Gradual argumentation evaluation for stance aggregation in automated fake news detection. In Proceedings of the 6th Workshop on Argument Mining, pages 156- 166, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Explainable automated fact-checking: A survey", "authors": [ { "first": "Neema", "middle": [], "last": "Kotonya", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5430--5443", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.474" ] }, "num": null, "urls": [], "raw_text": "Neema Kotonya and Francesca Toni. 2020a. Ex- plainable automated fact-checking: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5430-5443, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explainable automated fact-checking for public health claims", "authors": [ { "first": "Neema", "middle": [], "last": "Kotonya", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7740--7754", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.623" ] }, "num": null, "urls": [], "raw_text": "Neema Kotonya and Francesca Toni. 2020b. Ex- plainable automated fact-checking for public health claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 7740-7754, Online. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Fine-grained fact verification with kernel graph attention network", "authors": [ { "first": "Zhenghao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7342--7351", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.655" ] }, "num": null, "urls": [], "raw_text": "Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342-7351, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "GCAN: Graph-aware co-attention networks for explainable fake news detection on social media", "authors": [ { "first": "Yi-Ju", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Cheng-Te", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "505--514", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.48" ] }, "num": null, "urls": [], "raw_text": "Yi-Ju Lu and Cheng-Te Li. 2020. GCAN: Graph-aware co-attention networks for explainable fake news de- tection on social media. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 505-514, Online. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Revealing the importance of semantic retrieval for machine reading at scale", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Songhe", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Re- vealing the importance of semantic retrieval for ma- chine reading at scale.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adversarial NLI: A new benchmark for natural language understanding", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4885--4901", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.441" ] }, "num": null, "urls": [], "raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3982--3992", "other_ids": { "DOI": [ "10.18653/v1/D19-1410" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Joint verification and reranking for open fact checking over tables", "authors": [ { "first": "Vladimir", "middle": [], "last": "Michael Sejr Schlichtkrull", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Yih", "suffix": "" }, { "first": "", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "6787--6799", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.529" ] }, "num": null, "urls": [], "raw_text": "Michael Sejr Schlichtkrull, Vladimir Karpukhin, Bar- las Oguz, Mike Lewis, Wen-tau Yih, and Sebastian Riedel. 2021. Joint verification and reranking for open fact checking over tables. In Proceedings of the 59th Annual Meeting of the Association for Com- putational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6787-6799, Online. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "That is a known lie: Detecting previously fact-checked claims", "authors": [ { "first": "Martino", "middle": [], "last": "Da San", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3607--3618", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.332" ] }, "num": null, "urls": [], "raw_text": "Da San Martino, and Preslav Nakov. 2020. That is a known lie: Detecting previously fact-checked claims. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguis- tics, pages 3607-3618, Online. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "FEVER: a large-scale dataset for fact extraction and VERification", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Christodoulopoulos", "suffix": "" }, { "first": "Arpit", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "809--819", "other_ids": { "DOI": [ "10.18653/v1/N18-1074" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Towards argument mining for social good: A survey", "authors": [ { "first": "Maria", "middle": [], "last": "Eva", "suffix": "" }, { "first": "Neele", "middle": [], "last": "Vecchi", "suffix": "" }, { "first": "Iman", "middle": [], "last": "Falk", "suffix": "" }, { "first": "Gabriella", "middle": [], "last": "Jundi", "suffix": "" }, { "first": "", "middle": [], "last": "Lapesa", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1338--1352", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.107" ] }, "num": null, "urls": [], "raw_text": "Eva Maria Vecchi, Neele Falk, Iman Jundi, and Gabriella Lapesa. 2021. Towards argument mining for social good: A survey. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 1338-1352, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Velickovic", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Li\u00f2", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In 6th Inter- national Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenRe- view.net.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Fact checking: Task definition and dataset construction", "authors": [ { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science", "volume": "", "issue": "", "pages": "18--22", "other_ids": { "DOI": [ "10.3115/v1/W14-2508" ] }, "num": null, "urls": [], "raw_text": "Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Lan- guage Technologies and Computational Social Sci- ence, pages 18-22, Baltimore, MD, USA. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Fact or fiction: Verifying scientific claims", "authors": [ { "first": "David", "middle": [], "last": "Wadden", "suffix": "" }, { "first": "Shanchuan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Lucy", "middle": [ "Lu" ], "last": "Wang", "suffix": "" }, { "first": "Madeleine", "middle": [], "last": "Van Zuylen", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7534--7550", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.609" ] }, "num": null, "urls": [], "raw_text": "David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verify- ing scientific claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 7534-7550, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Franziska Roesner, and Yejin Choi. 2020. Defending against neural fake news", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2020. Defending against neural fake news.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Transformer-xh: Multi-evidence reasoning with extra hop attention", "authors": [ { "first": "Chen", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Corby", "middle": [], "last": "Rosset", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Tiwary", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. 2020. Transformer-xh: Multi-evidence reasoning with ex- tra hop attention. 
In International Conference on Learning Representations.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Reasoning over semantic-level graph for fact checking", "authors": [ { "first": "Wanjun", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Zenan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jiahai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Yin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6170--6180", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.549" ] }, "num": null, "urls": [], "raw_text": "Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2020. Reasoning over semantic-level graph for fact checking. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6170-6180, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "GEAR: Graph-based evidence aggregating and reasoning for fact verification", "authors": [ { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Changcheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "892--901", "other_ids": { "DOI": [ "10.18653/v1/P19-1085" ] }, "num": null, "urls": [], "raw_text": "Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and rea- soning for fact verification. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 892-901, Florence, Italy. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF2": { "content": "
Club                 | Season | League   |      |
                     |        | Division | Apps | Goals
Santa Cruz           | 2019   | S\u00e9rie C | 7    | 1
Athletico Paranaense | 2020   | Serie A  | 0    | 0
                     | 2021   | Serie A  | 0    | 0
                     | Total  |          | 0    | 0
Guarani (loan)       | 2020   | S\u00e9rie B | 5    | 0
Figure 2: Example of a complex general table taken
from /wiki/Elias_Carioca.
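To make the per-cell linearization concrete, the following is a minimal Python sketch of how one cell of the table above could be rendered as a context-aware sentence. The template wording, the function name and the "Career statistics" caption are illustrative assumptions only and do not reproduce the exact templates used by the system.

# Minimal sketch of context-aware per-cell linearization.
# The template phrasing here is illustrative; the system's actual templates
# (summarised in the templates table) are not reproduced in this excerpt.

def linearize_cell(page, table_caption, row_header, column_header, value):
    """Render a single table cell as a sentence that keeps its context:
    the page it comes from, the table caption, and the row/column headers."""
    return (f"On the page '{page}', in the table '{table_caption}', "
            f"the {column_header} for {row_header} is {value}.")

# Example using the 'Santa Cruz' row of the career table shown in Figure 2.
# 'Career statistics' is an assumed caption, not given in this excerpt.
print(linearize_cell(
    page="Elias Carioca",
    table_caption="Career statistics",
    row_header="Santa Cruz, 2019 season (S\u00e9rie C)",
    column_header="Goals",
    value=1,
))
# -> On the page 'Elias Carioca', in the table 'Career statistics',
#    the Goals for Santa Cruz, 2019 season (Série C) is 1.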
", "html": null, "text": "This table contains both multi-row cells and multi-column cells, some of which are headers. They are shown highlighted .", "type_str": "table", "num": null }, "TABREF4": { "content": "", "html": null, "text": "Templates for encoding tabular evidence. CELL_I_0, SUBHEADER_0, SUBHEADER_J, SUBHEADERS,TABLE, TITLE and PAGE are all context elements. The content of the evidence item is highlighted . In each case ITEM_I_J denotes list item content and CELL_I_J denotes table cell content.", "type_str": "table", "num": null } } } }