{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:31.947137Z" }, "title": "Heterogeneous-Graph Reasoning and Fine-Grained Aggregation for Fact Checking", "authors": [ { "first": "Hongbin", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Software Engineering Shenzhen University Shenzhen", "location": { "country": "China" } }, "email": "" }, { "first": "Xianghua", "middle": [], "last": "Fu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shenzhen Technology University", "location": { "settlement": "Shenzhen", "country": "China" } }, "email": "fuxianghua@sztu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Fact checking is a challenging task that requires corresponding evidences to verify the property of a claim based on reasoning. Previous studies generally i) construct the graph by treating each evidence-claim pair as node which is a simple way that ignores to exploit their implicit interaction, or building a fully-connected graph among claim and evidences where the entailment relationship between claim and evidence would be considered equal to the semantic relationship among evidences; ii) aggregate evidences equally without considering their different stances towards the verification of fact. Towards the above issues, we propose a novel heterogeneous-graph reasoning and finegrained aggregation model, with two following modules: 1) a heterogeneous graph attention network module to distinguish different types of relationships within the constructed graph; 2) fine-grained aggregation module which learns the implicit stance of evidences towards the prediction result in details. Extensive experiments on the benchmark dataset demonstrate that our proposed model achieves much better performance than state-of-the-art methods.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Fact checking is a challenging task that requires corresponding evidences to verify the property of a claim based on reasoning. Previous studies generally i) construct the graph by treating each evidence-claim pair as node which is a simple way that ignores to exploit their implicit interaction, or building a fully-connected graph among claim and evidences where the entailment relationship between claim and evidence would be considered equal to the semantic relationship among evidences; ii) aggregate evidences equally without considering their different stances towards the verification of fact. Towards the above issues, we propose a novel heterogeneous-graph reasoning and finegrained aggregation model, with two following modules: 1) a heterogeneous graph attention network module to distinguish different types of relationships within the constructed graph; 2) fine-grained aggregation module which learns the implicit stance of evidences towards the prediction result in details. Extensive experiments on the benchmark dataset demonstrate that our proposed model achieves much better performance than state-of-the-art methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Today, social media is considered as the biggest platform to share news and seek information. However, misinformation is spreading at increasing rates and may cause great impact to society. The reach of fake news was best highlighted during the critical months of the 2016 U.S. 
presidential election generated millions of shares and comments on Facebook (Zafarani et al., 2019) . Therefore, automatic detection of fake news on social media has become a significant and beneficial problem. We pay more attention on fact checking task, which utilizes external knowledge to determine the claim veracity when given a claim.", "cite_spans": [ { "start": 354, "end": 377, "text": "(Zafarani et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Verifying the truthfulness of a claim with respect to evidence can be regarded as a special case of recognizing textual entailment (RTE) (Dagan et al., 2005) or natural language inference (NLI) (Bowman et al., 2015) . Typically, existing approaches contain the representation learning process and evidence aggregation process. Representation process tries to enhance the semantic expression of claim and evidence via sequence structure methods (Hanselowski et al., 2018a; Soleimani et al., 2020) or graph based neural networks where they utilize simple combination methods such as just dealing with claim-evidence pair as graph nodes. The evidence aggregation process aims to find out the most important evidence which contributes more to claim verification with different methods like mean pooling, attention-based aggregation, etc.", "cite_spans": [ { "start": 137, "end": 157, "text": "(Dagan et al., 2005)", "ref_id": "BIBREF2" }, { "start": 194, "end": 215, "text": "(Bowman et al., 2015)", "ref_id": "BIBREF0" }, { "start": 444, "end": 471, "text": "(Hanselowski et al., 2018a;", "ref_id": "BIBREF5" }, { "start": 472, "end": 495, "text": "Soleimani et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, existing approaches such as establish a semantic-based graph, which ignore the difference between relationships among nodes in reasoning graph. For example in Figure 1, given the claim \"Al Jardine is an American rhythm guitarist.\" and the retrieved evidence sentences (i.e., E1-E5), making the correct prediction requires model to reason that \"Al Jardine\" is the person mentioned in E2 and \"rhythm guitarist\" is occurred in E1 based on the entailment interaction of claim with the evidences. Furthermore, we also expect the semantical coherence of multiple evidences from E1 to E5 to automatically filter unrelated evidence such as E3-E5. We believe it's crucial for verification to mine distinct relationships within the reasoning graph.", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 174, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Besides, in previous methods , stance of evidences towards claim are aggregated equally or some irrelevant evidences are prevented from predicting the veracity of claim roughly via simple attention mechanism. However, each piece of evidence has a different impact on the claim, which needs to be exploited on fine-grained perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To alleviate above issues, we propose a novel Heterogeneous-Graph Reasoning and Fine- Figure 1 : A motivating example for fact checking and the FEVER task. The purple solid line denotes the semantical coherence between each piece of evidence. The purple dotted line denotes entailment consistence between claim and evidences. 
Verifying the fact requires exploiting these two different implicit relationships during reasoning process.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 94, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Grained Aggregation Model (HGRGA), which not only enhances the representation learning for claim and evidences by capturing different types of relationships within the constucted graph but also aggregating stances of evidences towards claim concretely. More specifically, we construct a heterogeneous evidence-evidence-claim graph based on graph attention network to enhance the representation of claim and evidences. Besides, we utilize an capsule network to further aggregate evidences with different implicit stances towards the claim, and learn the weights via dynamic routing which indicate how each of evidence attributes the veracity of claim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conduct experiments on the real-world benchmark dataset. Extensive experimental results demonstrate the effectiveness of our model. HGRGA boosts the performance for fact checking and the main contributions of this work are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 To our best knowledge, this is the first study of representing reasoning structure as a heterogeneous graph. The graph attention based heterogeneous interaction achieves significant improvements over state-of-the-art methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We incorporate the capsule network structure into our proposed model to learn implicit stances of evidences towards the claim on finegrained perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Experimental results show that our model achieves superior performance on the largescale benchmark dataset for fact verification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Background and related work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The input of our task is a claim and a collection of Wikipedia articles D. The goal is to extract a set of evidence sentences from D and assign a veracity relation label y \u2208 Y = {S, R, N} to a claim with respect to the evidence set, where S = SUPPORTED, R = REFUTED, and N = NOTENOUGHINFO(NEI).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem fomulation", "sec_num": "2.1" }, { "text": "The process of evidence-based fact checking involves the following three subtasks: document retrieval, evidence sentence selection and claim verification. In the document retrieval phrase, researchers use a hybrid approach that combines search results from the MediaWiki API 1 and the results on the basic of the term frequency-inverse document frequency (TF-IDF) model (Hanselowski et al., 2018b (Thorne et al., 2018a; Nie et al., 2019; Hanselowski et al., 2018b) , concatenated all sentence (Stammbach and Neumann, 2019) . Recently, there are some approaches related to graph-based neural networks (Kipf and Welling, 2016). For example, Zhou et al. 
(2019) build a fully-connected evidence graph where each node indicates a piece of evidence while conduct fine-grained evidence propagation in the graph. Zhong et al. (2019) use semantic role labeling (SRL) to build a graph structure, where a node can be a word or a phrase depending on the SRL's outputs.", "cite_spans": [ { "start": 370, "end": 396, "text": "(Hanselowski et al., 2018b", "ref_id": "BIBREF6" }, { "start": 397, "end": 419, "text": "(Thorne et al., 2018a;", "ref_id": "BIBREF16" }, { "start": 420, "end": 437, "text": "Nie et al., 2019;", "ref_id": "BIBREF11" }, { "start": 438, "end": 464, "text": "Hanselowski et al., 2018b)", "ref_id": "BIBREF6" }, { "start": 493, "end": 522, "text": "(Stammbach and Neumann, 2019)", "ref_id": "BIBREF15" }, { "start": 805, "end": 824, "text": "Zhong et al. (2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Fact checking", "sec_num": "2.2" }, { "text": "Pre-trained language representation models such as GPT (Radford et al., 2018) , BERT (Devlin et al., 2018) are proven to be effective on many NLP tasks. These models employ well-designed pretraining tasks to fuse context information and train on rich data. Each BERT layer transforms an input token sequence (one or two sentences) by using self-attention mechanism. Hence, we use BERT as the sentence encoder in our framework to encode better semantic representation.", "cite_spans": [ { "start": 55, "end": 77, "text": "(Radford et al., 2018)", "ref_id": "BIBREF12" }, { "start": 85, "end": 106, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained language models", "sec_num": "2.3" }, { "text": "A recent method called capsule network explored by Sabour et al. (2017) introduces an iterative routing process to learn a hierarchy of feature detectors which send low-level features to high-level capsules only when there is a strong agreement of their predictions to high-level capsules. Researchers recently apply capsule network into NLP task such as text classification (Zhao et al., 2018) , slot filling , etc.", "cite_spans": [ { "start": 51, "end": 71, "text": "Sabour et al. (2017)", "ref_id": "BIBREF13" }, { "start": 375, "end": 394, "text": "(Zhao et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Capsule network", "sec_num": "2.4" }, { "text": "In this section, we present an overview of the architecture of the proposed framework HGRGA for fact verification. As shown in Figure 2 , given a claim and the retrieved evidence, we first utilize a sentence encoder to obtain representations for the claim and the evidences. Then we build a heterogeneous evidence-evidence-claim graph to propagate information among claim and evidence. 
Finally, we use the capsule network to model the implicit stances of evidences towards claim on fine-grained perspective.", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 135, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Proposed method", "sec_num": "3" }, { "text": "Given an input sentence, we employ BERT (Devlin et al., 2018) as our sentence encoder by extracting the final hidden state of the [CLS] token as the representation, where [CLS] is the special classification embedding in BERT.", "cite_spans": [ { "start": 40, "end": 61, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Encoder", "sec_num": "3.1" }, { "text": "Specifically, given a claim c and N pieces of retrieved evidence {e 1 , e 2 , . . . , e N }, we feed each sentence into BERT to obtain the claim representation c and the evidence representation e i , where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Encoder", "sec_num": "3.1" }, { "text": "i \u2208 {1, ..., N }. That is, c = BERT(c), e i = BERT(e i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Encoder", "sec_num": "3.1" }, { "text": "( 1)We thus denote the utterance as a matrix, i.e., X = [c, e 1 , e 2 , ..., e N ] T , where c, e i \u2208 R d respectively denotes the d-dimensional embedding of the claim and each relative evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Encoder", "sec_num": "3.1" }, { "text": "This section describes how to incorporate the heterogeneous graph attention network into our model. Based on the observation as illustrated in Figure 1, we assume that given a claim, the evidence should be semantically coherent with each other while the claim should be entailment consistent with the relevant evidence. Therefore, we decompose the evidence-evidence-claim graph into claim-evidence subgraph and evidence-evidence subgraph.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 149, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "Claim-Evidence Subgraph Considering that the neighbors of each node in subgraphs have different importance to learn node embedding for fact checking task, we use graph attention network (GAT) (Veli\u010dkovi\u0107 et al., 2017) to generate the sentence representation of claim and the retrieved evidence.", "cite_spans": [ { "start": 192, "end": 217, "text": "(Veli\u010dkovi\u0107 et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "We use", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "H l ce = [h l 0 , h l 1 , h l 2 , ..., h l N ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "T to represent the hidden states of nodes at layer l and initially, H 0 ce = X. In order to encode structural contexts to improve the sentence-level representation by adaptively learning different contributions of neighbors to each node, we perform self-attention mechanism on the nodes to model the interactions between each node and its neighbors. 
The attention coefficient can be computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 l i,j = Atten(h l i , h l j ) = exp(\u03d5(a T [W l h l i ||W l h l j ])) j\u2208N i exp(\u03d5(a T [W l h l i ||W l h l j ])) ,", "eq_num": "(2)" } ], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "where \u03b1 l i,j indicates the importance of node i to j at layer l, a is a weight vector, W l is a layerspecific trainable transformation matrix, || means \"concatenate\" operation, N i contains node i's onehop neighbors and node i itself, \u03d5 denotes the activation function, such as LeakyReLU (Girshick et al., 2014) . Here, we use the adjacency matrix A ce to denotes the relationship between each node, which is defined as: ", "cite_spans": [ { "start": 289, "end": 312, "text": "(Girshick et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A ce i,j = \uf8f1 \uf8f2 \uf8f3 1 i/j \u2208 {claim}, j/i \u2208 {claim, e 1 , ..., e N } 0 otherwise ,", "eq_num": "(3)" } ], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "then the layer-wise propagation rule is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h l+1 i = ReLU ( j\u2208N i \u03b1 l i,j W l h l j ).", "eq_num": "(4)" } ], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "After that, multi-head attention (Vaswani et al., 2017) is utilized to stabilize the learning process of self-attention and extend attention mechanism. Thus Eq. 4 would be extended to the multi-head attention process of concatenating M attention heads:", "cite_spans": [ { "start": 33, "end": 55, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h l+1 i = M || m=1 ReLU ( j\u2208N i \u03b1 l,m i,j W l m h l j ),", "eq_num": "(5)" } ], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "where || represents concatenation, \u03b1 l,m i,j is a normalized attention coefficient computed by the m-th head at the l-th layer, and W l m is the corresponding input linear transformation's weight matrix. 
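To make the computation above concrete, a minimal sketch of one masked graph-attention layer over the claim/evidence node matrix is given below, covering Eqs. (2)-(5). We assume PyTorch; the class name MaskedGATLayer, the non-batched tensor shapes, and the LeakyReLU slope are illustrative choices for this sketch rather than an exact description of our implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGATLayer(nn.Module):
    # One graph-attention layer (Eqs. 2-4); attention is restricted to the neighbours given by A.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)   # layer-specific transform W^l
        self.a = nn.Linear(2 * d_out, 1, bias=False)  # attention vector a
        self.act = nn.LeakyReLU(0.2)                  # activation phi in Eq. (2)

    def forward(self, H, A):
        # H: (N+1, d_in) node states (claim + N evidence); A: (N+1, N+1) adjacency (A_ce or A_ee)
        Wh = self.W(H)                                               # (N+1, d_out)
        n = Wh.size(0)
        pairs = torch.cat([Wh.unsqueeze(1).expand(n, n, -1),         # [W h_i || W h_j] for all pairs
                           Wh.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.act(self.a(pairs)).squeeze(-1)                 # (N+1, N+1) raw coefficients
        mask = A + torch.eye(n, device=A.device)                     # N_i includes node i itself
        scores = scores.masked_fill(mask == 0, float('-inf'))
        alpha = torch.softmax(scores, dim=-1)                        # Eq. (2), normalised over N_i
        return F.relu(alpha @ Wh)                                    # Eq. (4) propagation
```

Running M such heads in parallel and concatenating their outputs gives the multi-head update of Eq. (5).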
By stacking L layers of GAT, the output embedding in the final layer is calculated using averaging, instead of the concatenation operation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h L i = ReLU ( 1 M M m=1 j\u2208N i \u03b1 L\u22121,m i,j W L\u22121 m h L\u22121 j ).", "eq_num": "(6)" } ], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "Through aforementioned operations, we get the final layer of claim-evidence subgraph result", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "H L ce = [h L 0 , h L 1 , h L 2 , ..., h L N ] T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "Evidence-Evidence Subgraph Similarly to the claim-evidence subgraph in Section 3.2, we enhance the semantical coherence of each evidence via GAT method. More concretely, we use", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "H l ee = [h l 0 ,h l 1 ,h l 2 , ...,h l N ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "T to represent the hidden states of nodes at layer l and initially, H 0 ee = X. Besides, the relationship between nodes within subgraph is different and we utilize the adjacency matrix A ee to denotes the relationship between each node, which is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A ee i,j = \uf8f1 \uf8f2 \uf8f3 1 i \u2208 {e 1 , ..., e N }, j \u2208 {e 1 , ..., e N } 0 otherwise .", "eq_num": "(7)" } ], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "Finally, the output of evidence-evidence subgraph can be updated via", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "H L ee = [h L 0 ,h L 1 ,h L 2 , ...,h L N ] T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Reasoning Network", "sec_num": "3.2" }, { "text": "To fuse the information contained in two subgraphs, we concatenate H L ce and H L ee to form implicit representation of claim and evidences, denoted as H L . Then, we propose a slice operation to extract claim and evidence feature separately from H L , denoted as s c \u2208 R 2d\u00d71 and s e \u2208 R 2d\u00d7N . Consequently, we tile s c N times and concatenate them with s e to construct a new feature matrix as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fusion of Subgraphs", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s = concat(s c , s e ), p = tanh(W s s + b s ),", "eq_num": "(8)" } ], "section": "Fusion of Subgraphs", "sec_num": null }, { "text": "where W s \u2208 R d\u00d74d and b s \u2208 R d\u00d71 are the weight and bias matrix for dimensionality reduction op-eration. p \u2208 R d\u00d7N denotes the implicit stance of evidences towards final class prediction. 
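A minimal sketch of this fusion step (Eq. (8)) is shown below, again assuming PyTorch; treating the bias b_s as a vector and indexing evidence along rows are simplifications made only for readability.

```python
import torch

def fuse_subgraphs(H_ce, H_ee, W_s, b_s):
    # H_ce, H_ee: (N+1, d) final-layer node states of the two subgraphs (row 0 is the claim).
    # W_s: (d, 4d) projection and b_s: (d,) bias follow Eq. (8).
    H = torch.cat([H_ce, H_ee], dim=-1)                # (N+1, 2d) fused representation H^L
    s_c, s_e = H[0], H[1:]                             # claim slice (2d,) and evidence slice (N, 2d)
    s = torch.cat([s_c.expand_as(s_e), s_e], dim=-1)   # tile the claim N times -> (N, 4d)
    p = torch.tanh(s @ W_s.t() + b_s)                  # (N, d): implicit stance of each evidence
    return p
```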
The reason we use the concatenation operation is that we think the evidence nodes in the following aggregation process need the information from the claim to guide the routing agreement process among them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fusion of Subgraphs", "sec_num": null }, { "text": "To model the fine-grained stances of evidences towards class prediction, we incorporate the capsule network (Sabour et al., 2017) into our model. We regard p as the primary capsule", "cite_spans": [ { "start": 108, "end": 129, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "p i | N i=1 \u2208 R d , Let v k | K", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "k=1 \u2208 R dc denote the high-level class capsules, where K denotes the number of classes and d c means the dimension of class capsules' representation. The capsule model learns a hierarchy of feature detectors via a routing-by-agreement mechanism, which define the different contributions of stances of evidences towards prediction result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "Dynamic Routing-by-agreement We denote p k|i as the resulting prediction vector of the i-th stance capsule when being recognized as the k-th class:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p k|i = \u03c3(W k p T i + b k ),", "eq_num": "(9)" } ], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "where k \u2208 {1, 2, ..., K} denotes the class type and i \u2208 {1, 2, ..., N }. \u03c3 is the activation function such as tanh. W k \u2208 R dc\u00d7d and b k \u2208 R dc\u00d71 are the weight and bias matrix for the k-th capsule. The dynamic routing-by-agreement learns an agreement value c k,i that determines how likely the i-th stance capsule agrees to be routed to the k-th class capsule. c k,i is calculated by the dynamic routing-by-agreement algorithm (Sabour et al., 2017) , which is briefly recalled in Algorithm 1.", "cite_spans": [ { "start": 428, "end": 449, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "The algorithm determines the agreement value c k,i between stance capsules and class capsules while learning the class representations v k in an unsupervised, iterative fashion. c i is a vector that consists of all c k,i where k \u2208 K. b k,i is the logit (initialized as zero) representing the log prior probability that the i-th stance capsule agrees to be routed to the k-th class capsule. 
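Before walking through the iterations in detail (the full procedure is recalled in Algorithm 1 below), we give a minimal sketch of the prediction vectors in Eq. (9) together with the routing loop; PyTorch, a tanh activation, and the standard squashing function for g are assumptions of this sketch rather than a full specification of our model.

```python
import torch

def squash(s, eps=1e-8):
    # Non-linear squashing g(.) that keeps the capsule length in [0, 1].
    norm2 = (s * s).sum(dim=-1, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def dynamic_routing(p, W, b, iters=3):
    # p: (N, d) stance capsules from Eq. (8); W: (K, d_c, d) and b: (K, d_c) per-class parameters.
    # Returns class capsules v: (K, d_c) and agreement values c: (K, N).
    p_hat = torch.tanh(torch.einsum('kcd,nd->nkc', W, p) + b)      # Eq. (9), shape (N, K, d_c)
    logits = torch.zeros(p.size(0), W.size(0))                     # b_{k,i}, initialised to zero
    for _ in range(iters):
        c = torch.softmax(logits, dim=1)                           # routing weights per stance capsule
        s = (c.unsqueeze(-1) * p_hat).sum(dim=0)                   # Eq. (10) weighted aggregation
        v = squash(s)                                              # class capsules v_k
        logits = logits + (p_hat * v.unsqueeze(0)).sum(dim=-1)     # agreement update: b_{k,i} += p_hat . v_k
    return v, c.t()
```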
During each iteration (Line 4), each class representation v k is calculated by aggregating all the prediction vectors, weighted by the agreement values c k,i obtained from b k,i (Line 6-7):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s k = N i c k,i p k|i , v k = g(s k ),", "eq_num": "(10)" } ], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "Algorithm 1 Dynamic routing-by-aggrement 1: procedure DYNAMIC ROUTING(p k|i , iter) 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "for each stance capsule i and class capsule k: b k,i \u2190 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Aggregator", "sec_num": "3.3" }, { "text": "for iter iterations do 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "for all stance capsule i: ci \u2190 sof tmax(bi) 6: for all class capsule k:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "s k \u2190 r c k,i p k|i 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "for all class capsule k: v k = squash(s k ) 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "for all stance capsule i and class capsule k:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "b k,i \u2190 b k,i + p k|i \u2022 v k 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "end for 10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "Return v k 11: end procedure In the above algorithm, g is a non-linear squashing function which limits the length of v k to [0, 1]. Once we updated the class representation v k during iteration, the logit b k,i becomes larger when the dot product p k|i \u2022 v k is large, which means representation of stance capsule p k|i is more similar to class representation v k . In our scenario, that is, stance of evidences contributes more to a certain category. Meanwhile, we can observe the fine-grained distributions towards prediction result of different stances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "Max-margin Loss for Class Detection Based on the capsule theory (Sabour et al., 2017) , the orientation of the activation vector v k represents class properties while its length indicates the activation probability. The loss function considers a max-margin loss on each labeled utterance: ", "cite_spans": [ { "start": 64, "end": 85, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "3: 4:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = K k=1 {[[y = v k ]] \u2022 max(0, m + \u2212 ||v k ||) 2 + \u03bb[[y \u0338 = v k ]] \u2022 max(0, ||v k || \u2212 m \u2212 ) 2 },", "eq_num": "(11)" } ], "section": "3: 4:", "sec_num": null }, { "text": "We conduct experiments on the dataset FEVER (Thorne et al., 2018a et al., 2018b). 
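In simplified form, the two metrics can be computed as follows; this sketch ignores details of the official FEVER scorer (for example, the cap on the number of retrieved sentences), and the label string 'NEI' is shorthand for NOTENOUGHINFO.

```python
def label_accuracy(pred_labels, gold_labels):
    # LA: 3-way label accuracy, ignoring the retrieved evidence.
    return sum(p == g for p, g in zip(pred_labels, gold_labels)) / len(gold_labels)

def fever_score(pred_labels, gold_labels, pred_evidence, gold_evidence_sets):
    # F-score: the label must be correct AND, for verifiable claims, at least one
    # complete gold evidence set must be contained in the selected sentences.
    hits = 0
    for p, g, selected, gold_sets in zip(pred_labels, gold_labels, pred_evidence, gold_evidence_sets):
        evidence_ok = (g == 'NEI') or any(set(es) <= set(selected) for es in gold_sets)
        hits += int(p == g and evidence_ok)
    return hits / len(gold_labels)
```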
Table 1 shows the statistics of the dataset. We evaluated performance by using the label accuracy (LA) and FEVER score (F-score). LA measures the 3-way classification accuracy of class prediction without considering the retrieved evidence. The F-score reflects the performance of both evidence sentence selection and veracity relation prediction, where a complete set of true evidence sentences is present in the selected sentences, and the claim is correctly labeled.", "cite_spans": [ { "start": 44, "end": 65, "text": "(Thorne et al., 2018a", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 82, "end": 89, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Dataset and Evaluation Metrics", "sec_num": "4.1" }, { "text": "The baselines include sota models on FEVER1.0 task, BERT based models and graph-based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "4.2" }, { "text": "Three top models (Athene (Hanselowski et al., 2018b) , UNC NLP (Nie et al., 2019) , UCL MRG (Yoneda et al., 2018) ) in FEVER1.0 shared task are compared in our experiment.", "cite_spans": [ { "start": 25, "end": 52, "text": "(Hanselowski et al., 2018b)", "ref_id": "BIBREF6" }, { "start": 63, "end": 81, "text": "(Nie et al., 2019)", "ref_id": "BIBREF11" }, { "start": 92, "end": 113, "text": "(Yoneda et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "4.2" }, { "text": "As BERT (Devlin et al., 2018) has achieved promising performance on several NLP tasks, we use BERT-pair, BERT-concat from previous work as our baselines.", "cite_spans": [ { "start": 8, "end": 29, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "4.2" }, { "text": "Other baselines are following like GEAR , KGAT and DREAM (Zhong et al., 2019) .", "cite_spans": [ { "start": 57, "end": 77, "text": "(Zhong et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "4.2" }, { "text": "We employ a three-step pipeline with components for document retrieval, sentence selection and claim verification to solve the task. More details can be found in Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "We utilize BERT BASE (Devlin et al., 2018 ) in our proposed model. Besides, some experiments of hyper-parameters such as the size of pre-trained model, the number of graph attention layer, can be found in Appendix B.", "cite_spans": [ { "start": 21, "end": 41, "text": "(Devlin et al., 2018", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "In this section, we first present the overall performance of our model HGRGA compared with other approaches. Then we conduct an ablation study to explore the effectiveness of the heterogeneous graph structure and the fine-grained capsule net- work. Finally, we present a case study to demonstrate the effectiveness of our framework. 
Table 2 shows the performance of our proposed method versus all the compared methods on FEVER dataset, where the best result of each column is bolded to indicate the significant improvement over all baselines.", "cite_spans": [], "ref_spans": [ { "start": 333, "end": 340, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "As shown in Table 2 , in terms of LA, our model significantly outperforms BERT-based models with 80.67% and 74.26% on both development and test sets respectively. It is worth noting that, our approach, which exploits distinct types of relationships between nodes within reasoning graph, outperforms GEAR and KGAT, both of which regard claim-evidence pair as node and ignore different implicit interactions among them. However, in terms of LA, DREAM outperforms our approach with 76.85% on the test set. One possible reason is that DREAM incorporates graph-level semantic structure of evidence obtained by Semantic Role Labeling (SRL) which may contain more external information. Despite this, in terms of FEVER score, which is a kind of more comprehensive metrics, our method outperforms it.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Overall Performance", "sec_num": "5.1" }, { "text": "Effect of Heterogeneous Graph We observe how the model performs when some critical components are removed. The specific results are shown in and H ee denotes the node' representation learned via evidence-evidence subgraph. Besides, Homo denotes the reasoning graph is regarded as the homogenous graph which ignores different types of relationships between claim and evidence, evidence and evidence. As expected, with the removal of important components, the performance of model gradually decrease, especially when the reasoning graph is trained as the homogeneous structure, the LA score drops by nearly 2%, which also shows the strong effectiveness of heterogeneous graph. We will attempts to explore the effective result of heterogeneous structure in Section 5.2. Besides, it's worth noting that, when H ce is removed, model still has a proper result, where it's investigated in previous study (Hansen et al., 2021) and an important problem is highlighted that whether models for automatic fact verification have the ability of reasoning.", "cite_spans": [ { "start": 897, "end": 918, "text": "(Hansen et al., 2021)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "5.2" }, { "text": "We explore the effectiveness of the capsule network aggregation by comparing it with other different aggregation methods, such as mean-aggregator, max-aggregator and attention-aggregator. The mean aggregator performs the element-wise Mean operation among stances' representation while the max aggregator performs the element-wise Max operation. The attention aggregator is followed from , where the dot-product attention operation is used among evidence representation. As shown in Table 3 , we can find that our approach using capsule network performs better than other aggregation methods. Furthermore, when capsule network is trained, we can easily observe the distribution of stance of evidences towards predicted class during iterations. 
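For reference, the three baseline aggregators can be sketched as follows; these are our own minimal formulations of mean, max, and claim-guided dot-product attention pooling over the evidence representations, and may differ in detail from the cited implementations.

```python
import torch

def mean_aggregator(E):                  # E: (N, d) evidence representations
    return E.mean(dim=0)

def max_aggregator(E):
    return E.max(dim=0).values

def attention_aggregator(E, c):          # c: (d,) claim representation
    alpha = torch.softmax(E @ c, dim=0)          # dot-product attention guided by the claim
    return (alpha.unsqueeze(-1) * E).sum(dim=0)  # (d,) aggregated evidence vector
```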
We will show an example in Section 5.2.", "cite_spans": [], "ref_spans": [ { "start": 482, "end": 489, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Effect of Capsule Layer", "sec_num": null }, { "text": "Claim: One host of Weekly Idol is a comedian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Capsule Layer", "sec_num": null }, { "text": "Evidence: E1: The show is hosted by comedian Jeong Hyeong-don and rapper Defconn. E2: Defconn, one host of Weekly Idol, is a rapper used to perform several songs on the show. E3: Weekly Idol is a South Korean variety show, which airs Wednesdays, 6PM KST, on MBC Every1, MBC's cable and satellite network for comedy and variety shows. E4: Many comics achieve a cult following while touring famous comedy hubs such as the Just for Laughs festival in Montreal, the Edinburgh Fringe, and Melbourne Comedy Festival in Australia. E5: However, a comic's stand-up success does not guarantee a film's critical or box office success.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Capsule Layer", "sec_num": null }, { "text": "Label: SUPPORTED Table 4: A case where the claim requires integrating multiple pieces of evidence to verify. Facts shared across the claim and the evidence are highlighted with different colors. Table 4 shows an example from our experiments that needs multiple pieces of evidence to make the right inference. There are noisy evidence sentences such as E4-E5, which are not semantically coherent with E1-E3, and a confusing evidence sentence E2 that may introduce spurious information and mislead the model into predicting the label incorrectly. To observe the difference between the homogeneous and heterogeneous graph structures, we plot the claim-evidence attention maps of the models learned under these two settings.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 4", "ref_id": null }, { "start": 189, "end": 196, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Effect of Capsule Layer", "sec_num": null }, { "text": "As shown in Figure 3a, when the reasoning graph is constructed as a homogeneous structure, the model treats the entailment relationship between claim and evidence as equivalent to the other relationship, the semantic coherence among the evidence. Because the claim and E2 are highly similar at the semantic level, the model tends to attend to E2, which leads to a prediction error. In contrast, when the inference relationship between claim and evidence is explicitly exploited, the reasoning ability is further enhanced. Making the correct prediction requires the model to reason that \"comedian\" occurs in E1 and that \"Weekly Idol\" is the show mentioned in E3. As illustrated in Figure 3b, our approach pays more attention to E1 and E3, which provide the most useful information in this case, and the label is correctly detected as SUPPORTED. Table 4. Left: after the first iteration. 
Right: after the second iteration.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 21, "text": "Figure 3a", "ref_id": "FIGREF3" }, { "start": 737, "end": 746, "text": "Figure 3b", "ref_id": "FIGREF3" }, { "start": 772, "end": 780, "text": "Table 4.", "ref_id": null }, { "start": 911, "end": 918, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Case Study", "sec_num": null }, { "text": "The dynamically learned agreement values within capsule aggregation layer naturally reflect how stance of evidences are collectively aggregated into class capsules for each input utterance. We visualize the agreement values between each stance capsule and each class capsule. The left part of Figure 4 shows that after the first iteration, since the model improperly recognize E2 as a whole, the REFUTED capsule contribute significantly to the final result. From the right part of Figure 4 , we found that with the entailment relationship between claim and evidence being captured in claim-evidence subgraph, evidence E1 and E3 contribute more to the correct class capsule SUPPORTED, which leads to a reasonable result.", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 301, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 481, "end": 489, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Case Study", "sec_num": null }, { "text": "We randomly select 200 incorrectly predicted instances and summarize the primary types of errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "The first type of errors is caused by failing to match the semantic meaning of some phrases on some complex cases. For example, the claim \"Philomena is a film nominated for seven awards.\" is supported by the evidence \"It was also nominated for four BAFTA Awards and three Golden Globe Awards.\" The model needs to understand that four plus three equals seven in this case. Another case is that the claim states \"Winter's Tale is a book\", while the evidence states \"Winter's Tale is a 1983 novel by Mark Helprin\". The model fails to understand the relationship between novel and book. Solving this type of problem requires the incorporation of additional knowledge, such as math logic and common sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "The second type of errors is due to the failure of retrieving relevant evidences. For example, the claim states \"Lyon is a city in Southwest France.\", and the ground-truth evidence states \"Lyon had a population of 506,615 in 2014 and is France's third-largest city after Paris and Marseille.\", which gives not enough information to help model make a true judgement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "https://www.mediawiki.org/wiki/API", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "In this work, we present a novel heterogeneousgraph reasoning and fine-grained aggregation framework on the claim verification subtask of FEVER. We propose heterogeneous graph attention network to better exploit different types of relationships between nodes within reasoning graph. Furthermore, the capsule network is used to observe fine-grained distributions of stances towards claim from multiple pieces of evidence. 
The framework is shown to be effective and achieves significant and explainable performance. In the future, we would like to explore a fine-grained reasoning mechanism within the graph and to jointly learn evidence selection and claim verification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In the document retrieval and sentence selection stages, we simply follow the method from Hanselowski et al. (2018b), since their method achieved the highest evidence recall in the earlier FEVER shared task and we focus on the claim verification task. We describe our implementation details in this section.", "cite_spans": [ { "start": 90, "end": 116, "text": "Hanselowski et al. (2018b)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "A Implementation Details", "sec_num": null }, { "text": "We adopt the entity linking approach from Hanselowski et al. (2018b), which uses entities as search queries and finds relevant Wikipedia pages through the online MediaWiki API 2 . Related sentences are then selected from the retrieved documents. We follow the previous method of Zhao et al. (2020) and use BERT as the sentence retrieval model. We use the [CLS] hidden state to represent each claim-evidence sentence pair. A ranking layer is then trained to produce relevance scores via a pairwise loss. Sentences with the top-5 relevance scores are selected to form the final evidence set in our experiments. Claim Verification In our HGRGA, we set the batch size to 256, the number of evidence sentences N to 5, and the dimension of features d to 768. The number of class capsules K is 3, and the dimension of class capsules d c is 10. We set the number of graph attention layers L to 2 and the number of attention heads M to 4. The model is trained to minimize the capsule loss (Sabour et al., 2017) using the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 3e-5. In the loss function, the downweighting coefficient \u03bb is 0.5, and the margins m + and m \u2212 are set to 0.8 and 0.2. We use an early stopping strategy on the label accuracy of the validation set, with a patience of 10 epochs.", "cite_spans": [ { "start": 42, "end": 68, "text": "Hanselowski et al. (2018b)", "ref_id": "BIBREF6" }, { "start": 276, "end": 294, "text": "Zhao et al. (2020)", "ref_id": "BIBREF23" }, { "start": 924, "end": 945, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval and Sentence Selection", "sec_num": null }, { "text": "Effect of Pre-trained Models Table 5 shows the results of different pre-trained models on the test set in detail. As the size of the pre-trained model increases, the performance of the proposed method improves. We can also discover from the 2 https://www.mediawiki.org/wiki/API ", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "B Additional results on different hyper-parameters", "sec_num": null }, { "text": "We conduct additional experiments to check the effect of the number of GAT layers and attention heads, since our proposed method could be sensitive to them. 
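Before turning to this sweep, the fixed settings listed in Appendix A can be collected into a single configuration for reference; the dictionary below is a hypothetical summary object for illustration, not part of a code release.

```python
# Hypothetical configuration summarising the Appendix A settings (illustrative only).
HGRGA_CONFIG = {
    'batch_size': 256,
    'num_evidence': 5,            # N retrieved sentences per claim
    'hidden_dim': 768,            # d, BERT-base [CLS] dimension
    'num_classes': 3,             # K class capsules: SUPPORTED / REFUTED / NEI
    'capsule_dim': 10,            # d_c
    'gat_layers': 2,              # L
    'attention_heads': 4,         # M
    'optimizer': 'Adam',
    'learning_rate': 3e-5,
    'loss_downweight_lambda': 0.5,
    'margin_pos': 0.8,            # m+
    'margin_neg': 0.2,            # m-
    'early_stopping_patience': 10,
}
```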
Table 6 shows the result of parameter-tuning experiment and we choose L = 2 and M = 4 as hyper-parameters settings.", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 172, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Effect of GAT Layers and Attention Head", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "Gabor", "middle": [], "last": "Samuel R Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.05326" ] }, "num": null, "urls": [], "raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enhanced lstm for natural language inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhenhua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.06038" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2016. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The pascal recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2005, "venue": "Machine Learning Challenges Workshop", "volume": "", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment chal- lenge. In Machine Learning Challenges Workshop, pages 177-190. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "authors": [ { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Donahue", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "580--587", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ross Girshick, Jeff Donahue, Trevor Darrell, and Ji- tendra Malik. 2014. Rich feature hierarchies for ac- curate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580-587.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A retrospective analysis of the fake news challenge stance detection task", "authors": [ { "first": "Andreas", "middle": [], "last": "Hanselowski", "suffix": "" }, { "first": "Pvs", "middle": [], "last": "Avinesh", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Caspelherr", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Chaudhuri", "suffix": "" }, { "first": "M", "middle": [], "last": "Christian", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.05180" ] }, "num": null, "urls": [], "raw_text": "Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M Meyer, and Iryna Gurevych. 2018a. A retrospective analysis of the fake news challenge stance detection task. arXiv preprint arXiv:1806.05180.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Ukp-athene: Multi-sentence textual entailment for claim verification", "authors": [ { "first": "Andreas", "middle": [], "last": "Hanselowski", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zile", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniil", "middle": [], "last": "Sorokin", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.01479" ] }, "num": null, "urls": [], "raw_text": "Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018b. Ukp-athene: Multi-sentence textual entailment for claim verification. 
arXiv preprint arXiv:1809.01479.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic fake news detection: Are models learning to reason?", "authors": [ { "first": "Casper", "middle": [], "last": "Hansen", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Hansen", "suffix": "" }, { "first": "Lucas Chaves", "middle": [], "last": "Lima", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2105.07698" ] }, "num": null, "urls": [], "raw_text": "Casper Hansen, Christian Hansen, and Lucas Chaves Lima. 2021. Automatic fake news detection: Are models learning to reason? arXiv preprint arXiv:2105.07698.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.02907" ] }, "num": null, "urls": [], "raw_text": "Thomas N Kipf and Max Welling. 2016. Semi- supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Fine-grained fact verification with kernel graph attention network", "authors": [ { "first": "Zhenghao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.09796" ] }, "num": null, "urls": [], "raw_text": "Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2019. Fine-grained fact verification with kernel graph attention network. arXiv preprint arXiv:1910.09796.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Combining fact extraction and verification with neural semantic matching networks", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Haonan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6859--6866", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neu- ral semantic matching networks. 
In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 6859-6866.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dynamic routing between capsules", "authors": [ { "first": "Sara", "middle": [], "last": "Sabour", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Frosst", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.09829" ] }, "num": null, "urls": [], "raw_text": "Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. arXiv preprint arXiv:1710.09829.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bert for evidence retrieval and claim verification", "authors": [ { "first": "Amir", "middle": [], "last": "Soleimani", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Marcel", "middle": [], "last": "Worring", "suffix": "" } ], "year": 2020, "venue": "Advances in Information Retrieval", "volume": "12036", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amir Soleimani, Christof Monz, and Marcel Worring. 2020. Bert for evidence retrieval and claim verifica- tion. Advances in Information Retrieval, 12036:359.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Team domlin: Exploiting evidence enhancement for the fever shared task", "authors": [ { "first": "Dominik", "middle": [], "last": "Stammbach", "suffix": "" }, { "first": "Guenter", "middle": [], "last": "Neumann", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)", "volume": "", "issue": "", "pages": "105--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominik Stammbach and Guenter Neumann. 2019. Team domlin: Exploiting evidence enhancement for the fever shared task. In Proceedings of the Sec- ond Workshop on Fact Extraction and VERification (FEVER), pages 105-109.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Christos Christodoulopoulos, and Arpit Mittal", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" } ], "year": 2018, "venue": "Fever: a large-scale dataset for fact extraction and verification", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.05355" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018a. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Christos Christodoulopoulos, and Arpit Mittal. 2018b. 
The fact extraction and verification (fever) shared task", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Oana", "middle": [], "last": "Cocarascu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.10971" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018b. The fact extraction and verification (fever) shared task. arXiv preprint arXiv:1811.10971.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Veli\u010dkovi\u0107", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Lio", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.10903" ] }, "num": null, "urls": [], "raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Ucl machine reading group: Four factor framework for fact finding (hexaf)", "authors": [ { "first": "Takuma", "middle": [], "last": "Yoneda", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)", "volume": "", "issue": "", "pages": "97--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Ucl machine reading group: Four factor framework for fact finding (hexaf). 
In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 97-102.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Fake news research: Theories, detection strategies, and open problems", "authors": [ { "first": "Reza", "middle": [], "last": "Zafarani", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "3207--3208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reza Zafarani, Xinyi Zhou, Kai Shu, and Huan Liu. 2019. Fake news research: Theories, detection strate- gies, and open problems. In Proceedings of the 25th ACM SIGKDD International Conference on Knowl- edge Discovery & Data Mining, pages 3207-3208.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Joint slot filling and intent detection via capsule neural networks", "authors": [ { "first": "Chenwei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Du", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Philip S", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.09471" ] }, "num": null, "urls": [], "raw_text": "Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip S Yu. 2018. Joint slot filling and intent de- tection via capsule neural networks. arXiv preprint arXiv:1812.09471.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Transformer-xh: Multi-evidence reasoning with extra hop attention", "authors": [ { "first": "Chen", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Corby", "middle": [], "last": "Rosset", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Tiwary", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. 2020. Transformer-xh: Multi-evidence reasoning with extra hop attention.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Investigating capsule networks with dynamic routing for text classification", "authors": [ { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jianbo", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Min", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zeyang", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Suofei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.00538" ] }, "num": null, "urls": [], "raw_text": "Wei Zhao, Jianbo Ye, Min Yang, Zeyang Lei, Suofei Zhang, and Zhou Zhao. 2018. Investigating capsule networks with dynamic routing for text classification. 
arXiv preprint arXiv:1804.00538.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Reasoning over semantic-level graph for fact checking", "authors": [ { "first": "Wanjun", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Zenan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jiahai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Yin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.03745" ] }, "num": null, "urls": [], "raw_text": "Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2019. Reasoning over semantic-level graph for fact checking. arXiv preprint arXiv:1909.03745.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Gear: Graph-based evidence aggregating and reasoning for fact verification", "authors": [ { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Changcheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.01843" ] }, "num": null, "urls": [], "raw_text": "Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. Gear: Graph-based evidence aggregating and reasoning for fact verification. arXiv preprint arXiv:1908.01843.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "The pipeline of our method. The HGRGA framework is illustrated in the proposed method section." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "where ||v k || is the norm of v k and [[]] is an indicator function, y is the ground truth label. \u03bb is the weighting coefficient, and m + and m \u2212 are margins.The prediction of the utterance can be easily determined by choosing the activation vector with the largest norm\u0177 = arg max k\u2208{1,2,...,K} ||v k ||." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "(a) Homogenous graph structure. Predicted label: REFUTED. (b) Heterogeneous graph structure. Predicted label: SUP-PORTED." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Attention map of claim-evidence subgraph with different kinds of graph structure for the case in" }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "The learned agreement values between class capsules (y-axis) and stance capsules (x-axis) for the case in" }, "TABREF2": { "type_str": "table", "text": "Statistics of FEVER dataset.", "num": null, "content": "", "html": null }, "TABREF4": { "type_str": "table", "text": "", "num": null, "content": "
Overall performance on the FEVER dataset (%).
", "html": null }, "TABREF5": { "type_str": "table", "text": ", where H ce represents the node' representation updated via claim-evidence subgraph", "num": null, "content": "
Models | LA | F-score
Our Model | 80.67 | 77.54
-w/o H ce | 75.64 | 70.32
-w/o H ee | 77.68 | 73.52
Homo | 78.89 | 75.93
Aggregation (max) | 77.33 | 75.23
Aggregation (mean) | 77.92 | 74.97
Aggregation (attention) | 77.54 | 75.10
", "html": null }, "TABREF6": { "type_str": "table", "text": "Ablation analysis in the development set of FEVER.", "num": null, "content": "", "html": null } } } }