{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:35.784446Z" }, "title": "Team Papelo at FEVEROUS: Multi-hop Evidence Pursuit", "authors": [ { "first": "Christopher", "middle": [], "last": "Malon", "suffix": "", "affiliation": { "laboratory": "NEC Laboratories America Princeton", "institution": "", "location": { "postCode": "08540", "region": "NJ" } }, "email": "malon@nec-labs.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We develop a system for the FEVEROUS fact extraction and verification task that ranks an initial set of potential evidence and then pursues missing evidence in subsequent hops by trying to generate it, with a \"next hop prediction module\" whose output is matched against page elements in a predicted article. Seeking evidence with the next hop prediction module continues to improve FEVEROUS score for up to seven hops. Label classification is trained on possibly incomplete extracted evidence chains, utilizing hints that facilitate numerical comparison. The system achieves .281 FEVEROUS score and .658 label accuracy on the development set, and finishes in second place with .259 FEVEROUS score and .576 label accuracy on the test set.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We develop a system for the FEVEROUS fact extraction and verification task that ranks an initial set of potential evidence and then pursues missing evidence in subsequent hops by trying to generate it, with a \"next hop prediction module\" whose output is matched against page elements in a predicted article. Seeking evidence with the next hop prediction module continues to improve FEVEROUS score for up to seven hops. Label classification is trained on possibly incomplete extracted evidence chains, utilizing hints that facilitate numerical comparison. The system achieves .281 FEVEROUS score and .658 label accuracy on the development set, and finishes in second place with .259 FEVEROUS score and .576 label accuracy on the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The 2021 FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured Information) task (Aly et al., 2021) introduces several challenges not seen in the 2018 FEVER task (Thorne et al., 2018) . Tabular information, lists, and captions now appear as evidence, in addition to natural text sentences. Most claims now require multiple pieces of supporting evidence to support or refute them. Even claims that cannot be fully verified now require the submission of supporting evidence for aspects of the claim that can be verified. Counting and numerical reasoning skills are needed to verify many claims.", "cite_spans": [ { "start": 103, "end": 121, "text": "(Aly et al., 2021)", "ref_id": null }, { "start": 184, "end": 205, "text": "(Thorne et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Annotators for FEVEROUS differed in their interpretation of what constituted necessary evidence, and often added duplicate evidence that should be in an alternative reasoning chain to a main reasoning chain. 
For this reason it is dangerous to target a precise, minimal set of evidence as in FEVER for high evidence F1 (Malon, 2018) , and we instead fill the full set of five sentences and 25 table cells permitted for submission.", "cite_spans": [ { "start": 318, "end": 331, "text": "(Malon, 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus we focus on solving the evidence retrieval problem and first assemble a set of preliminary set of relevant facts. Several of these facts may be combined to determine the veracity of the claim. Yang et al. (2018) define multi-hop reasoning as reasoning with information taken from more than one document to arrive at an answer, so using the preliminary evidence set could already be multihop reasoning, but from the perspective of retrieval we consider retrieving the initial evidence set to be a first \"hop.\" Where multi-hop reasoning is required, it may be necessary to retrieve additional documents after reading the preliminary evidence, which could not be searched for using the claim alone. We support this functionality by predicting whether evidence chains are complete and generating additional search queries based on the preliminary evidence. This next hop prediction module can be applied as many as seven times to update the evidence chains, each time improving the FEVER-OUS score.", "cite_spans": [ { "start": 198, "end": 216, "text": "Yang et al. (2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the final evidence chains, the label (\"supports\", \"refutes\", or \"not enough information\") is predicted by a module trained on extracted evidence chains. Because \"not enough information\" (NEI) labels are scarce, we alternatively can decide whether to give an NEI label based on whether the next hop prediction module is still seeking more evidence for the claim. Inputs are carefully represented to facilitate numerical comparisons for the final label decision and to allow the use of other contextual information by every module. The described system attains a FEVEROUS score of .281 on the development set with label accuracy of .658.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Downstream classifiers usually classify page elements in isolation, but the meaning of these elements sometimes is not clear without contextual information. In the FEVER task, attaching a prefix to each sentence consisting of the page title in brackets improved performance (Malon, 2018) , for example by providing hints about what pronouns might refer to. We continue this practice for FEVEROUS.", "cite_spans": [ { "start": 274, "end": 287, "text": "(Malon, 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Context and structured information", "sec_num": "2" }, { "text": "For list elements, we take the page element immediately preceding the list as context. This often is a sentence indicating what is in the list. Then the list element is represented by \"[ title ] CON-TEXT context VALUE list item\", so that the list element and what the list is about may be seen simultaneously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context and structured information", "sec_num": "2" }, { "text": "For table cells, we represent the entire row containing the cell. 
If a cell in a row above has an is_header attribute, the cells are prefixed with \"KEY header\". This is followed by the actual value from the current row, in the form \"VALUE header\". Thus each cell in a row looks like a combination of key/value pairs (or simply values if there is no header). This representation is similar to the one used by Schlichtkrull et al. (2020) . All the cells in a row would look alike if we simply followed this procedure, so we distinguish the key/value pair corresponding to the current cell by enclosing it in double braces. Finally, the title is prepended, and if there is a caption, it is prepended as \"CAPTION caption\". Examples of the table cell, list element, and sentence formats are shown in Table 1 .", "cite_spans": [ { "start": 408, "end": 435, "text": "Schlichtkrull et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 795, "end": 802, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Context and structured information", "sec_num": "2" }, { "text": "We follow the baseline system (Aly et al., 2021) to select an initial set of documents for downstream analysis. This module retrieves documents whose titles match named entities that appear in the claim, plus documents with a high TF-IDF score against the claim, up to five total documents.", "cite_spans": [ { "start": 30, "end": 48, "text": "(Aly et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary evidence retrieval", "sec_num": "3" }, { "text": "Following Thorne and Vlachos (2021) , we also considered the use of GENRE (Cao et al., 2021) to identify more Wikipedia page titles from entities that were not quite exact matches. (We preferred an exact match if present.) The use of these entities actually drove FEVEROUS score down, perhaps by crowding out the TF-IDF documents, so we reverted to the baseline approach.", "cite_spans": [ { "start": 10, "end": 35, "text": "Thorne and Vlachos (2021)", "ref_id": "BIBREF11" }, { "start": 68, "end": 92, "text": "GENRE (Cao et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary evidence retrieval", "sec_num": "3" }, { "text": "Given a set of documents, we rank page elements using models trained to predict the set of evidence elements. One model is trained on sentences, list elements, and table captions, and the other is trained on table cells. We use a RoBERTa base model (Liu et al., 2019) and follow a training approach similar to the Dense Passage Retriever (Karpukhin et al., 2020). Given a positive training pair consisting of a claim c and a piece of evidence e, we collect six negative pairs (c, x i ). For four of the negatives we take x i to be the highest TF-IDF matches returned by the baseline system that are not part of the gold evidence. For the other two negatives we take x i to be part of the gold evidence for a different claim, randomly chosen. The multiple choice classification head of RoBERTa outputs a scalar f (c, x) for each pair, and the batch of seven pairs is trained as one example with the cross-entropy loss -log [ exp(f(c,e)) / ( exp(f(c,e)) + sum_{i=1}^{6} exp(f(c,x_i)) ) ]. Table 2 compares the recall of our system (top 25 cells and five non-cell page elements) to the baseline system's extraction modules. This is computed by taking the union of all page elements (cells or non-cells) in all evidence chains in all claims, and considering the fraction that belong to one of our predicted evidence sets for the corresponding claims. We recall more relevant table cells, but surprisingly, fewer relevant sentences.
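For reference, this recall computation amounts to the following minimal sketch (our own illustration, run separately for cells and non-cell elements; gold_chains and predicted are assumed to map claim ids to gold evidence chains and to our predicted evidence sets, respectively):

def evidence_recall(gold_chains, predicted):
    # gold_chains: claim_id -> list of gold evidence chains, each a set of page-element ids
    # predicted:   claim_id -> set of predicted element ids (cells or non-cells)
    hit, total = 0, 0
    for claim_id, chains in gold_chains.items():
        gold_union = set().union(*chains)   # union over all chains for this claim
        hit += len(gold_union & predicted.get(claim_id, set()))
        total += len(gold_union)
    return hit / total                      # fraction of gold elements retrieved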
In development, we mistakenly benchmarked the ranking models on a set which made gold evidence available for ranking even if it was not in a retrieved document, and on this basis, it appeared that ranking the sentences was advantageous. Therefore we used not only the table cell ranking module but also the sentence ranking module in our submitted system.", "cite_spans": [ { "start": 249, "end": 267, "text": "(Liu et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary evidence retrieval", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x i )", "eq_num": "(1)" } ], "section": "Preliminary evidence retrieval", "sec_num": "3" }, { "text": "The use of the evidence ranking model is not sufficient to solve problems that require more difficult kinds of multi-hop reasoning. Though evidence chains are typically rooted in entities and concepts that appear in the claim, as one progresses down the chain it may be necessary to retrieve information about an entity mentioned in a previous piece of evidence. Such information would be difficult to query based on the claim alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Next hop prediction", "sec_num": "4" }, { "text": "To support this scenario, we introduce a next hop prediction module, as shown in Figure 1 . Hop 1 consists of the evidence retrieved by the evidence ranking module. Given an evidence set produced in hop n, the next hop prediction module attempts to imagine information that is still needed but not retrieved yet. It generates a string consisting of the title of the needed article and the sentence or table cell (in the same format as before) that it wants to retrieve from that article. If available, the article with that title is retrieved; otherwise, sentences from previously retrieved articles will be searched. Then we choose one sentence and two table cells with the best word overlap against the imagined evidence. The bottom ranked elements of the evidence set for hop n are pushed out, and these chosen elements are pushed to the top of the evidence set for hop n + 1. The evidence ranking module was found not to be helpful in ranking newly retrieved evidence, often because it strayed too far from the original claim. 1 The next hop prediction module is implemented by a T5 base sized model (Raffel et al., 2020) . T5 consists of a text-to-text encoder-decoder transformer architecture, and its pre-training mixes mul-tiple unsupervised objectives on the Colossal Clean Crawled Corpus with supervised NLU tasks including abstractive summarization, question answering, GLUE text classification, and translation, cast into a text to text format. We train the model for three epochs on maximum sequence length 512, using Huggingface default parameters (Wolf et al., 2020) . In our task, each input begins with the task identifier \"missing: \" and a list of the pages retrieved already, followed by the string [HYP] and then the claim being classified. 
Then the elements of the current evidence set (each beginning with a page title in brackets) are concatenated.", "cite_spans": [ { "start": 1031, "end": 1032, "text": "1", "ref_id": null }, { "start": 1104, "end": 1125, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF8" }, { "start": 1562, "end": 1581, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 81, "end": 89, "text": "Figure 1", "ref_id": null }, { "start": 1718, "end": 1723, "text": "[HYP]", "ref_id": null } ], "eq_spans": [], "section": "Next hop prediction", "sec_num": "4" }, { "text": "Training is based on the gold evidence chains in the training set, and the set of documents retrieved by the baseline model. Every example with evidence from a missing document is used as an example, with the current evidence set being the gold evidence in the retrieved documents and the target evidence being the first piece of evidence from a missing document. For half of the remaining examples (those with no missing documents) including all NEI examples with multiple pieces of evidence, a piece of evidence is randomly left out from the current evidence set, and that evidence is to be predicted as the target. In the other examples, the word \"none\" is to be predicted, indicating that the evidence chain is complete.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Next hop prediction", "sec_num": "4" }, { "text": "The target output strings are the word \"supports\" or \"refutes,\" followed by the target evidence in the usual format or \"none.\" For NEI examples, \"supports\" is to be predicted, indicating a partial evidence chain with no contradictions yet. Thus the log likelihood objective on the target output string amounts to a multi-task objective, combining a prediction of missing evidence with a prediction of the label based on partial information. Because missing evidence should be helpful for label prediction, we hope that co-training on the task of label prediction improves the features used to generate the missing evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Next hop prediction", "sec_num": "4" }, { "text": "The existence of distracting evidence distinguishes the training setting from the testing setting. At test time, the module is always queried with a full set of five sentences and 25 cells, some of which may be irrelevant. For comparison, we trained a model with extracted evidence instead of gold evidence, but the model trained on gold chains achieved more complete chains in fewer hops. Table 3 describes the performance of the next hop predictor on the development set. \"Improved,\" \"Same,\" and \"Worse\" count the number of examples where the number of pieces of gold evidence successfully predicted increased, stayed the same, or decreased compared to the previous hop. \"Complete\" indicates the number of examples for which a complete evidence set is predicted. \"FEVEROUS score\" is the downstream result of the label classification module (see next section) based on the evidence predicted. Each subsequent hop (up to five) improves the fraction of evidence retrieved, and the FEVEROUS score is monotonically improving up to at least seven hops. 
This implies that the module knows when to stop and output \"none,\" or else its predictions would eventually overwrite needed evidence from the initial retrieval.", "cite_spans": [], "ref_spans": [ { "start": 390, "end": 397, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Next hop prediction", "sec_num": "4" }, { "text": "An example of next hop prediction is given in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Next hop prediction", "sec_num": "4" }, { "text": "After the next hop predictor has been run for seven hops, our system uses a label classification module to predict the final label. Another T5 base model is used for this problem, but here we train on the extracted evidence sets (including irrelevant evidence, and missing some gold evidence) that are collected for the training set. Input strings are the same as for the next hop predictor module. The target strings are just \"supports,\" \"refutes,\" or \"neutral.\" As NEI instances only make up 3% of the training set, this label is never learned and the outputs are either \"supports\" or \"refutes.\" 2 The label accuracy of this approach on the development set is compared to other approaches that are trained with gold evidence or a RoBERTa model in Table 4 . We see that a RoBERTa model has trouble learning in the presence of irrelevant evidence, but is confused by the distractions if only trained on gold evidence chains. In contrast, a T5 model can train and perform successfully on real extracted evidence chains. Consistent with our observations, Jiang et al. (2021) recently established a new state of the art on FEVER using T5 trained on lists of real extracted evidence.", "cite_spans": [ { "start": 598, "end": 599, "text": "2", "ref_id": null }, { "start": 1053, "end": 1072, "text": "Jiang et al. (2021)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 749, "end": 756, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Label classification", "sec_num": "5" }, { "text": "Math hints. As numbers are represented as (possibly several) strings of digits, each with its own pre-trained embedding, it is difficult for the model to answer numerical comparison questions. Also, the model may not precisely know the relationship between a number as a word (\"fourteen\") and its numerical form (\"14\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label classification", "sec_num": "5" }, { "text": "We attach hints to the beginning of each premise (list of concatenated evidence) as follows. Numbers in the claim or premise appearing in word form (up to twenty, and multiples of ten, one hundred, and one thousand) are converted to their numerical form, and we attach strings such as \"four equals 4\" for each conversion. Then we collect all numbers (including decimals and integers with commas) with a regular expression, and sort them (along with the number words) from least to greatest, forming a string such as \"LEAST 0 less than 1 less than 30 less than 2017 GREATEST\". After these prefixes, the original premise begins. It can be clearly recognized because it begins with a title inside brackets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label classification", "sec_num": "5" }, { "text": "The NEI class. The NEI class did not have enough examples to be learned reliably in the standard training procedure, but represents 19% of examples in the final test set. 
To address this, the baseline system upsampled the NEI class by leaving out sentences or entire tables from gold evidence chains to create more NEI examples. For our system, our training data consists of extracted evidence chains rather than gold evidence chains. In addition to the natural NEI examples, we labeled any extracted chain that was still missing information as NEI, gave other extracted chains that were complete their original \"supports\" or \"refutes\" label, and trained a T5 base model with the resulting labels. In the resulting training set, 58% of examples were NEI, 20% were refutes, and 23% were supports.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label classification", "sec_num": "5" }, { "text": "As seen in the confusion matrix of Table 5 , the T5 model could not learn the NEI class well and was biased towards NEI even on supporting or refuting examples. Even if 19% of true labels were NEI, as in the test set, the decrease in accuracy on supporting and refuting classes is too great to justify trying to predict this label. Therefore our submitted system is trained to predict only \"supports\" or \"refutes\" and never NEI.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Label classification", "sec_num": "5" }, { "text": "An interesting alternative would be to use the ex-istence of an evidence prediction from the next hop predictor after the final hop to indicate whether an example should be NEI. Following this approach, only 4.4% of NEI examples would be predicted as NEI, compared to 2.8% of supporting and 2.9% or refuting examples, so again including the NEI predictions would yield a net loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label classification", "sec_num": "5" }, { "text": "Team Papelo's system for FEVEROUS achieves .281 FEVEROUS score on the development set, with .658 label accuracy and .348 evidence recall. The largest increase in performance over the baseline comes from the label classifier, which uses a different model architecture and is trained on extracted evidence chains including irrelevant evidence. We also achieve better evidence recall through our table cell ranking module, which was trained with a multiple choice cross entropy loss similar to DPR. Additional gains are achieved by our multi-hop evidence retrieval. These modules can only be effective when given good representations of the context of sentences, list items and table cells, which we have carefully constructed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "On the test set we achieve a slightly lower .259 FEVEROUS score. This is largely due to the decrease of label accuracy to .576, reflecting an introduction of an additional 13% of NEI examples compared to the development set (Aly et al., 2021 ), which our system will always misclassify. The evidence recall of .346 is comparable to the development set.", "cite_spans": [ { "start": 224, "end": 241, "text": "(Aly et al., 2021", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Already the next hop predictor establishes a beneficial enhancement to the original evidence and can be safely run for many hops. The use of word overlap to match the imagined evidence to actual page elements was a compromise for faster and easier development. 
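A minimal sketch of that word-overlap matching (our own simplified illustration; the exact tokenization and tie-breaking in our system may differ) is:

def token_overlap(a, b):
    # crude bag-of-words overlap between the imagined evidence and a candidate element
    return len(set(a.lower().split()) & set(b.lower().split()))

def match_imagined(imagined, sentences, cells):
    # keep the one sentence and the two table cells closest to the imagined evidence
    best_sentence = max(sentences, key=lambda s: token_overlap(imagined, s))
    best_cells = sorted(cells, key=lambda c: token_overlap(imagined, c), reverse=True)[:2]
    return best_sentence, best_cells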
We believe the same basic method could be made stronger if a new ranking module, with a similar architecture and training procedure to the preliminary evidence retriever, were trained to match imagined evidence to actually missing evidence. The potential for improvement here is suggested by the number of attempted changes in Table 3 , which is always several times the number of evidence sets that were improved. Additional work is needed to improve performance on particular kinds of examples. Many claims require a system to count certain pieces of retrieved evidence. This skill is taught by datasets such as DROP (Dua et al., 2019) and until recently, neural module networks have needed a stronger form of supervision to learn it (Gupta et al., 2020) . A recent alternative (Saha et al., 2021 ) learns a neural module network with weaker supervision, but instead relies on dependency parsing of the query. To address discrete reasoning examples in FEVEROUS, it may be necessary to integrate models trained on external datasets. Table 6 shows an example where a complete evidence chain is retrieved after 7 hops. The large number of hops is needed because the top-ranked supplementary evidence does not contain the missing information. The imagined needed evidence stays the same until satisfactory evidence is retrieved (after exhausting the higher-ranked evidence) in hop 6. Then the next imagined evidence addresses another part of the reasoning chain. With that, contradictory supplementary evidence is retrieved successfully (northwest versus southwest) and the label for the whole claim is fully supported. Although all five initially retrieved sentences have been replaced before this hop, they are not needed.", "cite_spans": [ { "start": 880, "end": 898, "text": "(Dua et al., 2019)", "ref_id": "BIBREF2" }, { "start": 997, "end": 1017, "text": "(Gupta et al., 2020)", "ref_id": "BIBREF3" }, { "start": 1041, "end": 1059, "text": "(Saha et al., 2021", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 588, "end": 595, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1295, "end": 1302, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Once an example has a complete reasoning chain, its retrieval usually stops long before the seventh hop, by predicting no imagined evidence. Table 7 gives an example of a claim correctly classified with math hints but not without. Although math hints improved some examples, overall label accuracy decreased slightly, perhaps because the length of the hints could push necessary evidence beyond the 512 tokens read by the label classifier.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "A Example of next hop prediction", "sec_num": null }, { "text": "Cann River, a river that descends 1,080 metres (3,540 ft) over its 102 kilometres (63 mi) course rises northwest of Granite Mountain and is traversed by the Monaro Highway (which also parallels the former Bombala railway line in several locations) in its upper reaches. 
Label REFUTES Ground Truth Evidence [ Cann River ] The Cann River rises southwest of Granite Mountain in remote country on the eastern boundary of the Errinundra National Park and flows generally east, then south, then east, then south through the western edge of the Coopracambra National Park and through the Croajingolong National Park, joined by seventeen minor tributaries before reaching its mouth with Bass Strait, at the Tamboon [ Cann River ] The Cann River rises southwest of Granite Mountain in remote country on the eastern boundary of the Errinundra National Park and flows generally east, then south, then east, then south through the western edge of the Coopracambra National Park and through the Croajingolong National Park, joined by seventeen minor tributaries before reaching its mouth with Bass Strait, at the Tamboon Inlet in the Shire of East Gippsland.", "cite_spans": [], "ref_spans": [ { "start": 699, "end": 706, "text": "Tamboon", "ref_id": null } ], "eq_spans": [], "section": "Claim", "sec_num": null }, { "text": "(also two cell retrievals) Table 6 : An example where full evidence is retrieved in seven hops.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Claim", "sec_num": null }, { "text": "We also tried running the evidence ranking model after locating a bridge sentence based on overlap and prepending it to the candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the training set we assign \"supports\" labels to NEI instances. See below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Lamba Kheda recorded a total population of less than 3,000 with 1,100 scheduled castes in the 2011 census. Label REFUTES Premise LEAST 0.4 less than 2.6 less than 2.7 less than 6 less than 6.25 less than 7.4 less than 8.5 less than 8.7 less than 19.7 less than 28.8 less than 43.1 less than 61 less than 62 less than 82.5 less than 89.5 less than 123 less than 235 less than 289 less than 524 less than 540 less than 560 less than 1100 less than 1850 less than 1977 less than 1981 less than 2011 less than 2058 less than 3000 less than 3166 less than 3908 less than 482365 GREATEST Table 7 : An example correctly classified using math hints that was misclassified without them.", "cite_spans": [], "ref_spans": [ { "start": 582, "end": 589, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Claim", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: fact extraction and verification over unstructured and structured information", "authors": [ { "first": "Rami", "middle": [], "last": "Aly", "suffix": "" }, { "first": "Zhijiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Michael", "middle": [ "Sejr" ], "last": "Schlichtkrull", "suffix": "" }, { "first": "James", "middle": [], "last": "Thorne", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: fact extraction and verification over unstructured and structured information. 
CoRR, abs/2106.05707.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Autoregressive entity retrieval", "authors": [ { "first": "Nicola", "middle": [], "last": "De Cao", "suffix": "" }, { "first": "Gautier", "middle": [], "last": "Izacard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In International Conference on Learning Represen- tations.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs", "authors": [ { "first": "Dheeru", "middle": [], "last": "Dua", "suffix": "" }, { "first": "Yizhong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Stanovsky", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2368--2378", "other_ids": { "DOI": [ "10.18653/v1/N19-1246" ] }, "num": null, "urls": [], "raw_text": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368-2378, Min- neapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural module networks for reasoning over text", "authors": [ { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, and Matt Gardner. 2020. Neural module networks for reasoning over text. 
In International Conference on Learning Representations.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Exploring listwise evidence reasoning with t5 for fact verification", "authors": [ { "first": "Kelvin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ronak", "middle": [], "last": "Pradeep", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "402--410", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-short.51" ] }, "num": null, "urls": [], "raw_text": "Kelvin Jiang, Ronak Pradeep, and Jimmy Lin. 2021. Exploring listwise evidence reasoning with t5 for fact verification. In Proceedings of the 59th An- nual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 402-410, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dense passage retrieval for open-domain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6769--6781", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.550" ] }, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769- 6781, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Roberta: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. 
CoRR, abs/1907.11692.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Team papelo: Transformer networks at FEVER", "authors": [ { "first": "Christopher", "middle": [], "last": "Malon", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)", "volume": "", "issue": "", "pages": "109--113", "other_ids": { "DOI": [ "10.18653/v1/W18-5517" ] }, "num": null, "urls": [], "raw_text": "Christopher Malon. 2018. Team papelo: Trans- former networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109-113, Brussels, Belgium. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1-67.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Weakly supervised neuro-symbolic module networks for numerical reasoning", "authors": [ { "first": "Amrita", "middle": [], "last": "Saha", "suffix": "" }, { "first": "R", "middle": [], "last": "Shafiq", "suffix": "" }, { "first": "Steven", "middle": [ "C H" ], "last": "Joty", "suffix": "" }, { "first": "", "middle": [], "last": "Hoi", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amrita Saha, Shafiq R. Joty, and Steven C. H. Hoi. 2021. Weakly supervised neuro-symbolic mod- ule networks for numerical reasoning. CoRR, abs/2101.11802.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Joint verification and reranking for open fact checking over tables", "authors": [ { "first": "Vladimir", "middle": [], "last": "Michael Sejr Schlichtkrull", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Yih", "suffix": "" }, { "first": "", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Sejr Schlichtkrull, Vladimir Karpukhin, Bar- las Oguz, Mike Lewis, Wen-tau Yih, and Sebas- tian Riedel. 2020. Joint verification and rerank- ing for open fact checking over tables. 
CoRR, abs/2012.15115.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Evidencebased factual error correction", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "3298--3309", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.256" ] }, "num": null, "urls": [], "raw_text": "James Thorne and Andreas Vlachos. 2021. Evidence- based factual error correction. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 3298-3309, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "FEVER: a large-scale dataset for fact extraction and VERification", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Christodoulopoulos", "suffix": "" }, { "first": "Arpit", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "809--819", "other_ids": { "DOI": [ "10.18653/v1/N18-1074" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "", "middle": [], "last": "Drame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2369--2380", "other_ids": { "DOI": [ "10.18653/v1/D18-1259" ] }, "num": null, "urls": [], "raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answer- ing. 
In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "content": "
1930 }}
Table cell [ L-arabinose operon ] CAPTION Catabolism of arabinose in E. coli {{ KEY
Substrate VALUE L-arabinose }} KEY Enzyme(s) VALUE AraA KEY Function
VALUE Isomerase KEY Reversible VALUE Yes KEY Product VALUE L-ribulose
", "html": null, "text": "Type Example Sentence [ Mississippi River ] When measured from its traditional source at Lake Itasca, the Mississippi has a length of 2,320 miles (3,730 km). List item [ Temple Tower ] LIST CONTEXT Cast VALUE Marceline Day as Patricia Verney Table cell [ Temple Tower ] VALUE Release date {{ KEY Temple Tower VALUE April 13,", "type_str": "table" }, "TABREF1": { "num": null, "content": "", "html": null, "text": "Example representations of various page elements.", "type_str": "table" }, "TABREF2": { "num": null, "content": "
[Figure 1 diagram: the hop-n evidence set (Evidence 1 ... Evidence k) is fed to the Next Hop Predictor, which outputs a predicted label, an imagined title, and imagined evidence; the document with the imagined title is retrieved and its sentences matched against the imagined evidence, and the new evidence replaces the lowest-ranked element to form the hop n+1 evidence set (Evidence 1 ... Evidence k-1, New Evidence).]
Figure 1: Applying the next hop prediction module to update evidence.
SystemRecall
Baseline sentences .5265
Ranking sentences .3875
Baseline cells .2741
Random cells .2808
Ranking cells .5028
", "html": null, "text": "These outputs are ranked across all potential evidence to collect five sentences and 25 table cells. Every sentence in the retrieved documents is ranked, but only the top three tables retrieved by the baseline TF-IDF ranker are considered for extracting table cells.The baseline system extracts sentences and other non-cell elements by TF-IDF similarity to the claim, and table cells with a RoBERTa base sized model that performs sequence tagging on linearized tables.Table 2 compares the recall of our system", "type_str": "table" }, "TABREF3": { "num": null, "content": "", "html": null, "text": "", "type_str": "table" }, "TABREF5": { "num": null, "content": "
ModelTrain/DevLabel accuracy
RoBERTa Gold on Gold.829
RoBERTa Gold on Extracted.550
RoBERTa Extracted on Extracted.495
T5Gold on Gold.848
T5Gold on Extracted.572
T5Extracted on Extracted.661
T5Extracted+Math on Extracted+Math .658
", "html": null, "text": "Performance of the next hop prediction module. FEVEROUS score is based on applying the downstream label classification module after the given hop.", "type_str": "table" }, "TABREF6": { "num": null, "content": "
TruthSupports NEIRefutes
Supports .3403 .5179 .1418
NEI .0918 .7146 .1936
Refutes .0822 .4559 .4619
Supports .6471 .0000 .3529
NEI .4431 .0000 .5569
Refutes .2341 .0000 .7659
", "html": null, "text": "Label classification models.", "type_str": "table" }, "TABREF7": { "num": null, "content": "", "html": null, "text": "Confusion (development set) when training with (top) and without (bottom) extracted NEI labels.", "type_str": "table" }, "TABREF8": { "num": null, "content": "
Inlet in the Shire of
East Gippsland.
Hop 2 Imagined[ Monaro Highway ] The Monaro Highway parallels the former Bombala
railway line in several locations.
Hop 2 Retrieved[ Monaro Highway ] (also two cell retrievals)
Hop 3 Imagined[ Monaro Highway ] The Monaro Highway parallels the former Bombala
railway line in several locations.
Hop 4 Imagined[ Monaro Highway ] The Monaro Highway parallels the former Bombala
railway line in several locations.
Hop 5 Imagined[ Monaro Highway ] The Monaro Highway parallels the former Bombala
railway line in several locations.
Hop 6 Imagined[ Monaro Highway ] The Monaro Highway parallels the former Bombala
railway line in several locations.
Hop 6 Retrieved[ Monaro Highway ] The road also parallels the former Bombala railway
line in several locations.
(also two cell retrievals)
Hop 7 Imagined[ Cann River ]
", "html": null, "text": "In 1958, it was named the Monaro Highway in both NSW and the ACT, though the same name had been in use by the Snowy Mountains Highway until 1955. The Cann River rises northwest of Granite Mountain and is traversed by the Monaro Highway in its upper reaches.Hop 7 Retrieved", "type_str": "table" } } } }