{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:33:32.463262Z" }, "title": "Table Retrieval May Not Necessitate Table-specific Model Design", "authors": [ { "first": "Zhiruo", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "zhiruow@cs.cmu.edu" }, { "first": "Zhengbao", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "zhengbaj@cs.cmu.edu" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "gneubig@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Tables are an important form of structured data for both human and machine readers alike, providing answers to questions that cannot, or cannot easily, be found in texts. Recent work has designed special models and training paradigms for table-related tasks such as tablebased question answering and table retrieval. Though effective, they add complexity in both modeling and data acquisition compared to generic text solutions and obscure which elements are truly beneficial. In this work, we focus on the task of table retrieval, and ask: \"is table-specific model design necessary for table retrieval, or can a simpler text-based model be effectively used to achieve a similar result?\" First, we perform an analysis on a table-based portion of the Natural Questions dataset (NQtable), and find that structure plays a negligible role in more than 70% of the cases. Based on this, we experiment with a general Dense Passage Retriever (DPR) based on text and a specialized Dense Table Retriever (DTR) that uses table-specific model designs. We find that DPR performs well without any table-specific design and training, and even achieves superior results compared to DTR when fine-tuned on properly linearized tables. We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases. However, none of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval. 1", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Tables are an important form of structured data for both human and machine readers alike, providing answers to questions that cannot, or cannot easily, be found in texts. Recent work has designed special models and training paradigms for table-related tasks such as tablebased question answering and table retrieval. Though effective, they add complexity in both modeling and data acquisition compared to generic text solutions and obscure which elements are truly beneficial. In this work, we focus on the task of table retrieval, and ask: \"is table-specific model design necessary for table retrieval, or can a simpler text-based model be effectively used to achieve a similar result?\" First, we perform an analysis on a table-based portion of the Natural Questions dataset (NQtable), and find that structure plays a negligible role in more than 70% of the cases. 
Based on this, we experiment with a general Dense Passage Retriever (DPR) based on text and a specialized Dense Table Retriever (DTR) that uses table-specific model designs. We find that DPR performs well without any table-specific design and training, and even achieves superior results compared to DTR when fine-tuned on properly linearized tables. We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases. However, none of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Tables are a valuable form of data that organize information in a structured way for easy storage, browsing, and retrieval (Cafarella et al., 2008; Jauhar et al., 2016; Zhang and Balog, 2020) . They often contain data that is organized in a more accessible manner than in unstructured texts, or even not 1 The code and data are available at https://github.com/ zorazrw/nqt-retrieval", "cite_spans": [ { "start": 123, "end": 147, "text": "(Cafarella et al., 2008;", "ref_id": "BIBREF4" }, { "start": 148, "end": 168, "text": "Jauhar et al., 2016;", "ref_id": "BIBREF18" }, { "start": 169, "end": 191, "text": "Zhang and Balog, 2020)", "ref_id": "BIBREF48" }, { "start": 304, "end": 305, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Who is the highest paid baseball player in the major leagues? Table: Figure 1: A correct table can be identified by matching key phrases in question to those in the table title and header cells.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 68, "text": "Table:", "ref_id": null } ], "eq_spans": [], "section": "Question:", "sec_num": null }, { "text": "available in text at all (Chen et al., 2020a) . Therefore, tables are widely used in question answering (QA) (Pasupat and Liang, 2015; Zhong et al., 2017; Yu et al., 2018) . For open-domain QA, the ability to retrieve relevant tables with target answers is crucial to the performance of end-to-end QA systems (Herzig et al., 2021) . 
For example, in the Natural Questions (Kwiatkowski et al., 2019) dataset, 13.2% of the answerable questions can be addressed by tables and 74.4% by texts.", "cite_spans": [ { "start": 25, "end": 45, "text": "(Chen et al., 2020a)", "ref_id": "BIBREF7" }, { "start": 109, "end": 134, "text": "(Pasupat and Liang, 2015;", "ref_id": "BIBREF29" }, { "start": 135, "end": 154, "text": "Zhong et al., 2017;", "ref_id": "BIBREF49" }, { "start": 155, "end": 171, "text": "Yu et al., 2018)", "ref_id": "BIBREF43" }, { "start": 309, "end": 330, "text": "(Herzig et al., 2021)", "ref_id": "BIBREF15" }, { "start": 371, "end": 397, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Question:", "sec_num": null }, { "text": "Because tables are intuitively different from unstructured text, most previous works consider text-based methods to be functionally incapable of processing tables effectively and create specialpurpose models with table-specific architectures and training methods, adding auxiliary structureindicative parameters (Herzig et al., 2020; Wang et al., 2021b; Deng et al., 2020; Yang et al., 2022) , enforcing structure-aware attention (Yin et al., 2020; Wang et al., 2021b; Zayats et al., 2021) , and table-oriented pre-training objectives (Deng et al., 2020; Yin et al., 2020; Wang et al., 2021b; . Though effective in many tasks, these special-purpose models are more complex than generic solutions for textual encod-ing, and must be intentionally built for and trained on tabular data. In addition, because these methods modify both the model design and the training data, it is difficult to measure the respective contributions of each of these elements.", "cite_spans": [ { "start": 312, "end": 333, "text": "(Herzig et al., 2020;", "ref_id": "BIBREF16" }, { "start": 334, "end": 353, "text": "Wang et al., 2021b;", "ref_id": "BIBREF37" }, { "start": 354, "end": 372, "text": "Deng et al., 2020;", "ref_id": "BIBREF10" }, { "start": 373, "end": 391, "text": "Yang et al., 2022)", "ref_id": "BIBREF40" }, { "start": 430, "end": 448, "text": "(Yin et al., 2020;", "ref_id": "BIBREF41" }, { "start": 449, "end": 468, "text": "Wang et al., 2021b;", "ref_id": "BIBREF37" }, { "start": 469, "end": 489, "text": "Zayats et al., 2021)", "ref_id": "BIBREF44" }, { "start": 535, "end": 554, "text": "(Deng et al., 2020;", "ref_id": "BIBREF10" }, { "start": 555, "end": 572, "text": "Yin et al., 2020;", "ref_id": "BIBREF41" }, { "start": 573, "end": 592, "text": "Wang et al., 2021b;", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Question:", "sec_num": null }, { "text": "Particularly for question-based table retrieval, we hypothesize that content matching is paramount, and little, if any, structural understanding may be required. For example, given a question \"Who is the highest paid baseball player in the major leagues?\" in Figure 1 , a correct table can be retrieved by simply identifying the phrase \"highest-paid\", \"major league\", and \"baseball player\" in the table title, and matching the semantic type of \"Who\" to the \"Name\" header. 
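This intuition can be made concrete with a deliberately naive sketch: a lexical-overlap scorer over the table title and header cells (a toy illustration with placeholder values, not one of the retrieval models studied in this paper).

```python
# Toy illustration of the content-matching intuition above: score a table by
# lexical overlap between the question and the table title plus header cells.
# This is NOT the retrieval model studied in this paper.
def keyword_overlap_score(question: str, title: str, headers: list) -> float:
    q_tokens = set(question.lower().split())
    t_tokens = set(title.lower().split()) | {h.lower() for h in headers}
    return len(q_tokens & t_tokens) / max(len(q_tokens), 1)

question = "Who is the highest paid baseball player in the major leagues?"
# Table metadata loosely following Figure 1; the title and cells are placeholders.
title = "Highest-paid baseball players in the major leagues"
headers = ["Name", "Team", "Salary"]
print(keyword_overlap_score(question, title, headers))  # high overlap -> likely relevant
```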
Hence, any benefit demonstrated by table-based models may well come from good training data while table-specific model design has a limited influence.", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 267, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Question:", "sec_num": null }, { "text": "In this paper, we specifically ask: \"Does table retrieval require table-specific model design, or can properly trained generic text retrievers be exploited to achieve similar performance with less added complexity?\" Our work centers around the tablebased open-domain QA dataset, NQ-table (Herzig et al., 2021) , a subset of the Natural Questions (NQ) dataset (Kwiatkowski et al., 2019) where each question can be answered by part(s) of a Wikipedia table. We start with manual analysis of 100 random samples from NQ-table and observe that consideration of table structure seems largely unnecessary in over 70% of the cases, while the remaining 30% of cases only require simple structure understanding such as row/column alignment without structure-dependent complex reasoning chains ( \u00a7 2). With this observation, we experiment with two strong retrieval models: a generalpurpose text-based retriever (DPR; Karpukhin et al. (2020) ) and a special-purpose table-based retriever (DTR; Herzig et al. (2021) ). We find that DPR, without any table-specific model design or training, achieves similar accuracy as the state-of- theart table retriever DTR, and further fine-tuning on NQ-table yields significantly superior performance, casting doubt on the necessity of table-specific model design in table retrieval ( \u00a7 3) . Using DPR as the base model, we then thoroughly examine the effectiveness of both encoding structure implicitly with structure-preserving table linearization ( \u00a7 4) and encoding structure explicitly with tablespecific model design, such as auxiliary embeddings and specialized attention mechanisms ( \u00a7 5).", "cite_spans": [ { "start": 288, "end": 309, "text": "(Herzig et al., 2021)", "ref_id": "BIBREF15" }, { "start": 359, "end": 385, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF22" }, { "start": 905, "end": 928, "text": "Karpukhin et al. (2020)", "ref_id": "BIBREF19" }, { "start": 981, "end": 1001, "text": "Herzig et al. (2021)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 1119, "end": 1316, "text": "theart table retriever DTR, and further fine-tuning on NQ-table yields significantly superior performance, casting doubt on the necessity of table-specific model design in table retrieval ( \u00a7 3)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Question:", "sec_num": null }, { "text": "We find that models can already achieve a degree of structure awareness using properly linearized tables as inputs, and additionally adding explicit structure encoding model designs does not yield a further improvement. In sum, the results reveal that a strong text-based model is competitive for table retrieval, and table-specific model designs may have limited additional benefit. 
This indicates the potential to directly apply future improved text retrieval systems for table retrieval, a task where they were previously considered less applicable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question:", "sec_num": null }, { "text": "Structure Does Table Retrieval Require?", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 30, "text": "Table Retrieval", "ref_id": "TABREF15" } ], "eq_spans": [], "section": "NQ-table Analysis: How Much", "sec_num": "2" }, { "text": "The NQ-table dataset (Herzig et al., 2021) is a subset of the Natural Questions (NQ) dataset (Kwiatkowski et al., 2019) which contains questions from real users that can be answered by Wikipedia articles. Previous works on textbased QA extract the text portion from source Wikipedia articles that can answer around 71k questions, while NQ-table extract tables that contain answers for 12k questions. Unless otherwise specified, we use NQ-text to denote the commonly referred NQ dataset that can be answered by texts.", "cite_spans": [ { "start": 21, "end": 42, "text": "(Herzig et al., 2021)", "ref_id": "BIBREF15" }, { "start": 93, "end": 119, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "NQ-table Analysis: How Much", "sec_num": "2" }, { "text": "To better understand to what extent (if any) is structure understanding required by table retrieval, we perform a manual analysis on the NQ-table dataset. Specifically, we randomly sample 100 questions and their relevant tables then categorize their matching patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NQ-table Analysis: How Much", "sec_num": "2" }, { "text": "Keyword Matching Without Structural Concern Aligning with the insight that retrieval often emphasizes content matching rather than complex reasoning (Rogers et al., 2021) , we find that 71 out of the 100 samples only require simple keyword matching, where 18 questions fully match with table titles (Figure 2 (a) ) and the other 53 questions further match with table headers (Figure 2 (b) ).", "cite_spans": [ { "start": 149, "end": 170, "text": "(Rogers et al., 2021)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 299, "end": 312, "text": "(Figure 2 (a)", "ref_id": "FIGREF0" }, { "start": 375, "end": 388, "text": "(Figure 2 (b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "NQ-table Analysis: How Much", "sec_num": "2" }, { "text": "Retrieval that Requires Row/Column Alignment For the other 29 samples, understanding table structure is helpful but only simple row/column alignment is needed. 21 of them require locating content cells in a specific column and combining the information from headers. For example in Figure 2 (c), under the general header \"Population\", one should locate the \"Total\" field by their structural relation to confirm that the 'total number' measure of 'population' exists. 
In addition, 7 of the samples are some- what ambiguous and may require external knowledge or question clarification .", "cite_spans": [], "ref_spans": [ { "start": 282, "end": 290, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "NQ-table Analysis: How Much", "sec_num": "2" }, { "text": "In summary, our analysis reveals that understanding table structure is not necessary in the majority of cases, and even for cases where structural information is useful, they merely require aligning the rows/columns instead of building complex chains of reasoning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NQ-table Analysis: How Much", "sec_num": "2" }, { "text": "Given the previous analysis, we hypothesize that general-purpose text-based retrievers without tablespecific designs might not be necessarily worse than special-purpose table-based retrievers, contradictory to what most previous work has assumed (Herzig et al., 2021 (Herzig et al., , 2020 Yin et al., 2020; Wang et al., 2021b) . Properly trained text-based retrievers might even outperform table-based retrievers because the strong content matching ability learned on text retrieval datasets can transfer to the table retrieval task.", "cite_spans": [ { "start": 246, "end": 266, "text": "(Herzig et al., 2021", "ref_id": "BIBREF15" }, { "start": 267, "end": 289, "text": "(Herzig et al., , 2020", "ref_id": "BIBREF16" }, { "start": 290, "end": 307, "text": "Yin et al., 2020;", "ref_id": "BIBREF41" }, { "start": 308, "end": 327, "text": "Wang et al., 2021b)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Text Retrieval vs Table Retrieval", "sec_num": "3" }, { "text": "To validate these assumptions, we examine two representative retrieval systems: the text-based Dense Passage Retriever (DPR) and the table-based Dense Table Retriever (DTR). We first briefly introduce their input formats and model architectures ( \u00a7 3.1, \u00a7 3.2), then conduct experiments in both zeroshot and fine-tuning settings and compare their table retrieval performance ( \u00a7 3.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Retrieval vs Table Retrieval", "sec_num": "3" }, { "text": "We choose DPR (Karpukhin et al., 2020) as a representative text retrieval model, mainly because of (1) its impressive performance across many textrelated retrieval tasks, and (2) its similarity with DTR from both training and modeling perspectives, which make it easy to make fair comparisons.", "cite_spans": [ { "start": 14, "end": 38, "text": "(Karpukhin et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Text Retriever: DPR", "sec_num": "3.1" }, { "text": "DPR comprises a question-context bi-encoder built on BERT (Devlin et al., 2018) , which includes three types of input embeddings as summarized in Table 1 . The question encoder BERT q encodes each question q and outputs its dense representation using the representation of [CLS] token, denoted as h q = BERT q (q) [CLS] . The context encoder works similarly. To enable tables for sequential context inputs, we linearize each table into a token sequence T , which is then fed into the context encoder BERT c to obtain its dense representation h T = BERT c (T ) [CLS] . 
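Concretely, retrieval reduces to encoding the question and the linearized table into single vectors and ranking tables by their dot product. A minimal sketch using the publicly released Hugging Face DPR checkpoints (which may differ from the exact checkpoint used in our experiments; the table string is a toy example of the linearization described below) is:

```python
# Minimal sketch of bi-encoder scoring with the public Hugging Face DPR
# checkpoints (possibly different from the exact checkpoint used in our
# experiments); the table string is a toy example of the linearization
# described below.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

question = "who is the highest paid baseball player in the major leagues"
linearized_table = "Highest-paid players . Name | Team | Salary . Player A | Team B | Salary C"  # toy example

with torch.no_grad():
    h_q = q_enc(**q_tok(question, return_tensors="pt")).pooler_output          # h_q = BERT_q(q)[CLS]
    h_T = c_enc(**c_tok(linearized_table, return_tensors="pt")).pooler_output  # h_T = BERT_c(T)[CLS]
score = (h_q * h_T).sum(dim=-1)  # sim(q, T) = h_q · h_T
```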
The similarity score between a question q and a table T is computed as the dot product of two vectors sim(q,", "cite_spans": [ { "start": 58, "end": 79, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF11" }, { "start": 314, "end": 319, "text": "[CLS]", "ref_id": null }, { "start": 560, "end": 565, "text": "[CLS]", "ref_id": null } ], "ref_spans": [ { "start": 146, "end": 153, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Text Retriever: DPR", "sec_num": "3.1" }, { "text": "T ) = h q \u2022 h T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Retriever: DPR", "sec_num": "3.1" }, { "text": "DPR has been trained only on sequential text contexts. For each question in the NQ-text training set, the model is trained to select the correct context that contains the answer from a curated batch of contexts including both the annotated correct contexts and mined hard negative contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Retriever: DPR", "sec_num": "3.1" }, { "text": "To convert tables into the DPR input format, we linearize tables into token sequences. We concatenate the title, the header row, and subsequent content rows using a period '.' (row delimiter). Within each header or content row, we concatenate adjacent cell strings using a vertical bar '|' (cell delimiter). A template table linearization reads as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Retriever: DPR", "sec_num": "3.1" }, { "text": "[title].[header].[content 1 ]. \u2022 \u2022 \u2022 .[content n ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Retriever: DPR", "sec_num": "3.1" }, { "text": "Although the BERT encoder has the capacity for a maximum of 512 tokens, DPR is only exposed to contexts no longer than 100 words during training and testing. To avoid potential discrepancies between its original training and our inference procedure, we shorten long tables by selecting the first few rows that fit into the 100-word window.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Retriever: DPR", "sec_num": "3.1" }, { "text": "Dense Table Retriever (DTR) (Herzig et al., 2021) is the current state-of-the-art table retrieval model on the NQ-table dataset.", "cite_spans": [ { "start": 28, "end": 49, "text": "(Herzig et al., 2021)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Table Retriever: DTR", "sec_num": "3.2" }, { "text": "Model Architecture DTR largely follows the biencoder structure of DPR, but differs from it in the embedding layer. As shown in Table 1 , DTR utilizes the existing embeddings in alternative ways and introduces new types of embeddings specifically designed to encode tables.", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 134, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Table Retriever: DTR", "sec_num": "3.2" }, { "text": "Both models use the BERT vocabulary index for token embedding. For the segment index, DPR assigns all tokens in a sequence to index 0, while DTR distinguishes the title from table content by assigning 0 and 1, respectively. 
For positions, DPR inherits from BERT the sequence-wise order index [0, 1, 2, ..., sequence length\u22121]; DTR adopts a cellwise reset strategy that records the index of a token within its located cell [0, 1, ..., cell length \u2212 1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Retriever: DTR", "sec_num": "3.2" }, { "text": "Most importantly, DTR introduces row and column embeddings to encode the structural position of each token in the cell that it appears. This explicit join of three positional embeddings is potentially more powerful than the BERT-style flat index. Besides, concerning the high frequency of numerical values in tables, DTR adds a ranking index for each token if it is part of a number. First, model parameters, except for those extra table-specific embeddings, are initialized with BERT weights. The model is then pre-trained on all Wikipedia tables using the Masked LM (MLM) (Devlin et al., 2018) task, yielding the TAPAS (Herzig et al., 2020) model. Second, to leverage TAPAS to the retrieval task, it is further pre-trained using the Inverse Cloze task (ICT) introduced by ORQA , again, on all Wikipedia tables. Third, the model is trained on the specific NQ-table dataset, similar to the way that DPR is trained on text retrieval datasets: for each question in the NQ-table training set, DTR uses the annotated table as the positive context and self-mined tables without answers as hard negative (HN) contexts.", "cite_spans": [ { "start": 574, "end": 595, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF11" }, { "start": 621, "end": 642, "text": "(Herzig et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Table Retriever: DTR", "sec_num": "3.2" }, { "text": "To evaluate the benefit on table retrieval from training on in-domain text retrieval datasets, we compare the performance of DPR and BERT (Devlin et al., 2018) after fine-tuning on NQ-table.", "cite_spans": [ { "start": 138, "end": 159, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Text Retrieval Benefits Table Retrieval", "sec_num": "3.3" }, { "text": "As shown in Table 3 , BERT-table significantly underperforms DPR-table, indicating that training on in-domain text retrieval datasets benefits the table retrieval task. We conjecture that the large gap is essentially because (1) NQ-text and NQ-table questions share similar characteristics hence are agnostic to the format of answer source (Wolfson et al., 2020) , and (2) NQ-text has a larger size than NQ-table (71k versus 12k).", "cite_spans": [ { "start": 340, "end": 362, "text": "(Wolfson et al., 2020)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Text Retrieval Benefits Table Retrieval", "sec_num": "3.3" }, { "text": "To verify if table-specific model designs in DTR are necessary, we start with comparing the original DPR with DTR to evaluate their off-the-shelf performance, then proceed to fine-tune DPR on NQtable to examine the how much improvement can be brought by training data. 
We evaluate both models on NQ-table test set and measure the retrieval accuracy by computing the portion of questions where the top-k retrieved tables contain the answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DPR vs DTR", "sec_num": "3.4" }, { "text": "For DPR experiments, we use the latest published checkpoint 2 where the hard-negative text passages are mined using the DPR checkpoint saved in the previous round. To reproduce the DTR performance, we use the published checkpoints 3 and run the retrieval inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DPR vs DTR", "sec_num": "3.4" }, { "text": "To curate training samples for questions in the NQ-table training set, we take the same positive table used in DTR training. For negative contexts, we use the original DPR checkpoint to retrieve the top-100 table candidates for each question, from which we take the highest-ranked tables without answers as the hard negatives. We train with a batch size of 16 and a learning rate of 2e\u22125. Experiments are finished on four NVIDIA Tesla V100 GPUs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DPR vs DTR", "sec_num": "3.4" }, { "text": "Note that the published DPR and DTR checkpoints are not strictly comparable, since the size of DPR base falls between the DTR medium and DTR large with respect to the number of parameters. We report the performance of DTR in both medium and large size to approximate the lower and upper bounds for the DTR base model. Table 2 shows the configurations of BERTvariants in different sizes. As can be seen from the hyper-parameter values, models of medium size have the smallest capacity, base is an intermediate configuration, and large size is the biggest.", "cite_spans": [], "ref_spans": [ { "start": 318, "end": 325, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "DPR vs DTR", "sec_num": "3.4" }, { "text": "As reported in Table 3 , DPR is able to achieve a zero-shot retrieval accuracy (DPR) on NQ-table that is fairly close to the state-of-the-art DTR model, even without any table-specific model design and training. Further, simply fine-tuning DPR on NQ-table (DPR-table) using the same annotated positive and mined hard-negative tables as DTR increases the performance by a large margin, achieving superior performance than DTR, especially at top ranking positions (i.e., small k).", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "DPR vs DTR", "sec_num": "3.4" }, { "text": "Retrieval Accuracy @1 @5 @10 @20 @50 These observations question the necessity of both table-specific model designs listed in Table 1 and table-specific pre-training listed in Figure 3 . Given the task analysis in \u00a7 2 that table retrieval only requires simple structure understanding, we hypothesize that DPR, trained with table inputs linearized from top-to-bottom and left-to-right, is functionally capable of implicitly encoding simple table structure such as row/column alignment, and the benefit of extra table-specific model designs is minimal. 
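For reference, the top-k retrieval accuracy reported throughout can be computed as in the following sketch (hypothetical data structures; the actual evaluation follows the NQ-table answer-matching protocol):

```python
# Sketch of top-k retrieval accuracy: the fraction of questions whose top-k
# retrieved tables contain an answer string (hypothetical data structures; the
# actual evaluation follows the NQ-table answer-matching protocol).
def retrieval_accuracy_at_k(results, k):
    # results: list of (answers, ranked_tables) pairs, where ranked_tables are
    # linearized table strings sorted by retrieval score.
    hits = sum(
        1 for answers, ranked_tables in results
        if any(ans in table for table in ranked_tables[:k] for ans in answers)
    )
    return hits / len(results)
```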
To thoroughly and rigorously verify our hypothesis, we first examine the effect of different ordering in table linearization in \u00a7 4, then experiment with three widely-used structure injection model designs by adding them on DPR in \u00a7 5.", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 133, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 176, "end": 184, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "The simplest way to encode table structure is to linearize the table following the top-to-bottom leftto-right order and insert delimiters between cells and rows, from which the sequence-oriented transformer models should also be able to recover the two-dimensional table structure. We hypothesize that this type of implicit structure encoding is sufficient for table retrieval, which only requires simple structure understanding. To verify this, we manipulate linearized tables by randomly shuffling their rows/columns ( \u00a7 4.1) or removing the delimiters ( \u00a7 4.2), and examine how these perturbation affect the final performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Structure Encoding from Linearized Tables", "sec_num": "4" }, { "text": "Our first experiment focuses on the order of table linearization: if DPR relies on a proper linearization to capture table structure, randomly shuffling the table contents should corrupt the structure information and hurt the representation quality, leading to lower retrieval accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shuffling Rows and Columns", "sec_num": "4.1" }, { "text": "To verify this, we shuffle table cells within each row, each column, or both. Cells in the same row often describe the same entity from multiple properties according to their column headers, therefore shuffling the order of multiple cells in the same row corrupts their alignment with header cells. Meanwhile, cells in the same column are often of the same semantic type but are attributes to different entities in different rows, shuffling the order of cells in the same column breaks their alignment with entities. We also examine shuffling on both dimensions, which completely removes the order information from the table linearizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shuffling Rows and Columns", "sec_num": "4.1" }, { "text": "Since models trained on properly linearized tables might be prone to the train-test discrepancy when tested on shuffled tables, we conjecture that the gap between testing on proper tables and shuffled tables cannot be fully attributed to the loss of order information. 
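For reference, the linearization and the perturbations used in this section can be sketched as follows (a minimal version assuming each table is given as a title, a header row, and content rows of cell strings; the delimiter-free variant corresponds to the ablation in § 4.2):

```python
import random

# Minimal sketch of the table linearization (Section 3.1) and the perturbations
# used in this section; each table is assumed to be a title, a header row, and
# content rows of cell strings.
def linearize(title, header, rows, cell_delim=" | ", row_delim=" . "):
    lines = [title, cell_delim.join(header)] + [cell_delim.join(r) for r in rows]
    return row_delim.join(lines)

def linearize_without_delimiters(title, header, rows):
    # Variant for the ablation in Section 4.2: cells and rows joined by spaces only.
    return " ".join([title, " ".join(header)] + [" ".join(r) for r in rows])

def shuffle_table(header, rows, in_row=False, in_column=False, seed=0):
    rng = random.Random(seed)
    rows = [list(r) for r in rows]
    if in_row:       # permute cells within every content row, breaking cell-header alignment
        for r in rows:
            rng.shuffle(r)
    if in_column:    # permute cells within every column, breaking cell-entity alignment
        for c in range(len(header)):
            column = [r[c] for r in rows]
            rng.shuffle(column)
            for r, value in zip(rows, column):
                r[c] = value
    return header, rows
```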
We therefore conduct a more rigorous experiment by fine-tuning DPR on shuffled tables in both dimensions (DPR-table w/ shuffle) and test it on both proper and shuffled tables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shuffling Rows and Columns", "sec_num": "4.1" }, { "text": "Retrieval Accuracy Model Shuffle @1 @5 @10 @20 @50 As shown in Table 4 , on the original DPR model, all table shuffling strategies result in minor variations in retrieval accuracy, which is intuitive because DPR has never been trained on linearized tables and it is not sensitive to cell orders.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "The performance of fine-tuned DPR (DPR-table) drops significantly when tested on shuffled tables, similar to the previous finding that T5 model is also sensitive to the ordering of structured knowledge (Xie et al., 2022) . Besides the potential discrepancy in table layout between training and test inputs, this may indicate that DPR, although without explicit structure encoding modules, also learns to implicitly capture structures by training on linearized table inputs.", "cite_spans": [ { "start": 202, "end": 220, "text": "(Xie et al., 2022)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "To ablate out the influence of train-test discrepancy, we also fine-tune DPR on shuffled positive and negative tables. As expected, DPR-table w/ shuffle does not suffer from train-test discrepancy. While DPR fine-tuned on shuffled tables still outperforms the original DPR (57.04\u219262.94@1), the improvement is not as significant as the im-provement obtained by fine-tuning on proper tables (57.04\u219267.91@1), indicating that DPR is able to utilize structure-preserving table linearizations to encode structures during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Comparing different shuffling dimensions, we notice that in-row shuffling hurts the performance more than in-column shuffling, indicating that preserving semantic type alignment within each column is more important than preserving entity alignment within each row for table retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In this section, we study the impact of delimiters in helping models to encode table structures. If delimiters are not included, it is theoretically impossible to recover the table structure even from properly linearized tables, because the boundaries between different cells and rows are unknown. 
To verify if delimiters can serve as effective indicators of table structure, we study the usefulness of both inserting delimiter ('|') between cells and inserting delimiter ('.') between rows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Removing Delimiters Between Rows/Cells", "sec_num": "4.2" }, { "text": "Similarly to the previous experiment, we evaluate (1) the original DPR model (DPR), (2) the DPR fine-tuned on tables with delimiters (DPR-table), and (3) the one fine-tuned on linearized tables without delimiters (DPR-table w/o delimiter).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Removing Delimiters Between Rows/Cells", "sec_num": "4.2" }, { "text": "Retrieval Accuracy Model Delimiter @1 @5 @10 @20 @50 As shown in Table 5 , for DPR, although the overall performance drop is small without delimiters, separating cells is more important than separating rows, which is intuitive because the number of cells is larger than the number of rows. On DPR-table that learns from properly delimited tables, the influence is more significant, and the extent of dropping is similar to that of table structure shuffling in Table 4 . Also similar to the previous findings, training on non-delimited tables (DPR-table w/o delimiter) improves over the original DPR, but the improvement is not as significant as the improvement obtained by fine-tuning on delimited tables, suggesting that cell and row delimiters help models encode table structure. In this section, we examine three widely used table-specific modules to explicitly encode table structure information by adding these modules on top of the DPR architecture. As summarized in Table 6 and illustrated in Figure 4 , we categorize existing methods for table-specific structure encoding into three representative types: (1) auxiliary table-specific embeddings, (2) restricted hard attention mask to enforce structure-aware attention, and (3) soft attention bias based on the structural relations of cell pairs. 
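As a rough illustration (not the exact implementations of the cited papers), the three mechanisms can be sketched on top of standard transformer self-attention as follows, assuming per-token row and column indices derived from the table layout:

```python
import torch.nn as nn

# Rough sketch of the three mechanisms (not the exact implementations of the
# cited papers). row_ids / col_ids hold a per-token row and column index
# (0 for tokens outside the table, e.g. the title), each shaped [seq_len].
HIDDEN, MAX_ROWS, MAX_COLS, NUM_RELATIONS = 768, 256, 256, 13

# (1) Auxiliary embeddings: learned row/column embeddings added to the token
#     embeddings; zero initialization leaves the pre-trained encoder unchanged
#     at the start of fine-tuning.
row_emb = nn.Embedding(MAX_ROWS, HIDDEN)
col_emb = nn.Embedding(MAX_COLS, HIDDEN)
nn.init.zeros_(row_emb.weight)
nn.init.zeros_(col_emb.weight)

def add_structure_embeddings(token_emb, row_ids, col_ids):
    return token_emb + row_emb(row_ids) + col_emb(col_ids)

# (2) Hard attention mask: a token may only attend to tokens in its own row or
#     its own column.
def hard_attention_mask(row_ids, col_ids):
    same_row = row_ids.unsqueeze(-1) == row_ids.unsqueeze(-2)
    same_col = col_ids.unsqueeze(-1) == col_ids.unsqueeze(-2)
    return same_row | same_col  # boolean [seq_len, seq_len]

# (3) Soft relation-based attention bias: a learned scalar per token-pair
#     relation type, added to the raw attention scores before the softmax.
relation_bias = nn.Embedding(NUM_RELATIONS, 1)

def biased_attention_scores(attn_scores, relation_ids):
    # attn_scores, relation_ids: [seq_len, seq_len]; relation_ids are integer relation types
    return attn_scores + relation_bias(relation_ids).squeeze(-1)
```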
For each component, we add it onto the DPR architecture and fine-tune under the same setting as for DPR-table.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 460, "end": 467, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 973, "end": 980, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 1000, "end": 1008, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Method Papers auxiliary embeddings TAPAS (Herzig et al., 2020) MATE TUTA (Wang et al., 2021b) TABBIE (Iida et al., 2021) hard attention mask TURL (Deng et al., 2020) SAT ETC (Ainslie et al., 2020) DoT MATE TUTA (Wang et al., 2021b) soft attention bias RAT-SQL TableFormer (Yang et al., 2022) ", "cite_spans": [ { "start": 41, "end": 62, "text": "(Herzig et al., 2020)", "ref_id": "BIBREF16" }, { "start": 73, "end": 93, "text": "(Wang et al., 2021b)", "ref_id": "BIBREF37" }, { "start": 101, "end": 120, "text": "(Iida et al., 2021)", "ref_id": "BIBREF17" }, { "start": 146, "end": 165, "text": "(Deng et al., 2020)", "ref_id": "BIBREF10" }, { "start": 174, "end": 196, "text": "(Ainslie et al., 2020)", "ref_id": "BIBREF0" }, { "start": 211, "end": 231, "text": "(Wang et al., 2021b)", "ref_id": "BIBREF37" }, { "start": 272, "end": 291, "text": "(Yang et al., 2022)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Explicit Structure Encoding with", "sec_num": "5" }, { "text": "We first examine if adding table-specific embedding parameters would bring additional improvement. Specifically, we add row and column embeddings into the DPR to encode the row and column indices of tokens, which is denoted as DPR-table w/ emb. Both row and column indices are 1-indexed, and 0 is used for tokens that are not part of the table (e.g., title). We initialize row/column embeddings with zero to allow smooth continual learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Row and Columns Embeddings", "sec_num": "5.1" }, { "text": "Another approach is to enforce structure-aware attention using hard attention mask that only allows attention between elements within their mutual structural proximity, with the assumption that elements are only semantically relevant to elements in their structural proximity. Specifically, ; ; Deng et al. (2020) sparsify the attention mask such that each token is only visible to other tokens that are either within the same row or the same column. We apply this masking strategy when fine-tuning DPR and denote this setting as DPR-table w/ mask.", "cite_spans": [ { "start": 295, "end": 313, "text": "Deng et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Hard Attention Mask", "sec_num": "5.2" }, { "text": "The third method is to bias the attention weight between two tokens based on their structural relation, which is a more fine-grained way to enforce structure-aware attention than hard mask. Specifically, different bias scalars are added to the attention scores based on the relation between two cells. categorize relations by columns, while Yang et al. (2022) defines 13 relations based on which component the token belongs to: sentence, header, and cell. A more concrete example is illustrated in Figure 4 . Relational bias is invariant to the numerical indices of rows and columns, which is more robust to answer-invariant structure perturbation. We follow Yang et al. 
(2022) to add soft attention bias on DPR with 13 relations.", "cite_spans": [ { "start": 341, "end": 359, "text": "Yang et al. (2022)", "ref_id": "BIBREF40" }, { "start": 659, "end": 677, "text": "Yang et al. (2022)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 498, "end": 506, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Soft Relation-based Attention Bias", "sec_num": "5.3" }, { "text": "As shown in from linearized tables, the benefit of using specialpurpose structure encoding modules is minimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.4" }, { "text": "Retrieval Accuracy @1 @5 @10 @20 @50 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "To encode the relational structure of tables, CNNs (Chen et al., 2019a) , RNNs (Gol et al., 2019) , LSTMs (Fetahu et al., 2019) , and their combinations (Chen et al., 2019b) are explored. In addition, graph neural networks (GNN) are used, especially for tables with complex structures (Koci et al., 2018; Zayats et al., 2021; Vu et al., 2021; Bhagavatula et al., 2015) . With the recent advances in pre-trained language models, table encoders adapt pre-trained language models with additional table-specific modules encoding structure (Herzig et al., 2020; Yin et al., 2020; Wang et al., 2021b) and numeracy (Wang et al., 2021b; Herzig et al., 2020) . These methods are intentionally built for tables, but their necessity in each task remains unknown. Our work exploits a generic model to show that content-emphasized tasks like retrieval do not require such specific designs. et al., 2008 , 2009 Balakrishnan et al., 2015; Pimplikar and Sarawagi, 2012) or a seed table (Sarmad et al., 2012) . Many of them use the 60 keywords and relevant web tables collected by Zhang and Balog (2018) . Tables are modeled by aggregating mul-tiple fields (Zhang et al., 2019) , contexts (Trabelsi et al., 2019) , and synthesized schema labels (Chen et al., 2020b) . More recently, Chen et al. (2020c) ; Wang et al. (2021a) use structure-augmented BERT for retrieval. 
These works largely treat the retrieval task on its own account and target similarity under the traditional Information Retrieval (IR).", "cite_spans": [ { "start": 51, "end": 71, "text": "(Chen et al., 2019a)", "ref_id": "BIBREF5" }, { "start": 74, "end": 97, "text": "RNNs (Gol et al., 2019)", "ref_id": null }, { "start": 106, "end": 127, "text": "(Fetahu et al., 2019)", "ref_id": "BIBREF13" }, { "start": 153, "end": 173, "text": "(Chen et al., 2019b)", "ref_id": "BIBREF6" }, { "start": 285, "end": 304, "text": "(Koci et al., 2018;", "ref_id": "BIBREF20" }, { "start": 305, "end": 325, "text": "Zayats et al., 2021;", "ref_id": "BIBREF44" }, { "start": 326, "end": 342, "text": "Vu et al., 2021;", "ref_id": "BIBREF34" }, { "start": 343, "end": 368, "text": "Bhagavatula et al., 2015)", "ref_id": "BIBREF2" }, { "start": 535, "end": 556, "text": "(Herzig et al., 2020;", "ref_id": "BIBREF16" }, { "start": 557, "end": 574, "text": "Yin et al., 2020;", "ref_id": "BIBREF41" }, { "start": 575, "end": 594, "text": "Wang et al., 2021b)", "ref_id": "BIBREF37" }, { "start": 608, "end": 628, "text": "(Wang et al., 2021b;", "ref_id": "BIBREF37" }, { "start": 629, "end": 649, "text": "Herzig et al., 2020)", "ref_id": "BIBREF16" }, { "start": 877, "end": 889, "text": "et al., 2008", "ref_id": "BIBREF4" }, { "start": 890, "end": 896, "text": ", 2009", "ref_id": "BIBREF3" }, { "start": 897, "end": 923, "text": "Balakrishnan et al., 2015;", "ref_id": null }, { "start": 924, "end": 953, "text": "Pimplikar and Sarawagi, 2012)", "ref_id": "BIBREF30" }, { "start": 970, "end": 991, "text": "(Sarmad et al., 2012)", "ref_id": null }, { "start": 1064, "end": 1086, "text": "Zhang and Balog (2018)", "ref_id": "BIBREF47" }, { "start": 1140, "end": 1160, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF46" }, { "start": 1172, "end": 1195, "text": "(Trabelsi et al., 2019)", "ref_id": "BIBREF33" }, { "start": 1228, "end": 1248, "text": "(Chen et al., 2020b)", "ref_id": "BIBREF8" }, { "start": 1266, "end": 1285, "text": "Chen et al. (2020c)", "ref_id": "BIBREF9" }, { "start": 1288, "end": 1307, "text": "Wang et al. (2021a)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Table Understanding", "sec_num": null }, { "text": "Given the importance of finding relevant tables when answering questions in the NQ-table dataset, we study the task of table retrieval and find that ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/facebookresearch/DPR 3 https://github.com/google-research/tapas/blob/master/ DENSE_TABLE_RETRIEVER.md", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Frank F. 
Xu and Kaixin Ma for the helpful discussions and anonymous reviewers for their valuable suggestions on this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Etc: Encoding long and structured inputs in transformers", "authors": [ { "first": "Joshua", "middle": [], "last": "Ainslie", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Ontanon", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Vaclav", "middle": [], "last": "Cvicek", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Fisher", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Ravula", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Sanghai", "suffix": "" }, { "first": "Qifan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "268--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshua Ainslie, Santiago Ontanon, Chris Alberti, Va- clav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. Etc: Encoding long and structured inputs in transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 268-284.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Tabel: Entity linking in web tables", "authors": [ { "first": "Chandra", "middle": [], "last": "Sekhar Bhagavatula", "suffix": "" }, { "first": "Thanapon", "middle": [], "last": "Noraset", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Downey", "suffix": "" } ], "year": 2015, "venue": "International Semantic Web Conference", "volume": "", "issue": "", "pages": "425--441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chandra Sekhar Bhagavatula, Thanapon Noraset, and Doug Downey. 2015. Tabel: Entity linking in web tables. In International Semantic Web Conference, pages 425-441. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Data integration for the relational web", "authors": [ { "first": "J", "middle": [], "last": "Michael", "suffix": "" }, { "first": "", "middle": [], "last": "Cafarella", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the VLDB Endowment", "volume": "2", "issue": "", "pages": "1090--1101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J Cafarella, Alon Halevy, and Nodira Khous- sainova. 2009. Data integration for the relational web. 
Proceedings of the VLDB Endowment, 2(1):1090- 1101.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Webtables: exploring the power of tables on the web", "authors": [ { "first": "J", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Daisy", "middle": [ "Zhe" ], "last": "Halevy", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the VLDB Endowment", "volume": "1", "issue": "", "pages": "538--549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J Cafarella, Alon Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. Webtables: ex- ploring the power of tables on the web. Proceedings of the VLDB Endowment, 1(1):538-549.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Colnet: Embedding the semantics of web tables for column type prediction", "authors": [ { "first": "Jiaoyan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ernesto", "middle": [], "last": "Jim\u00e9nez-Ruiz", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Horrocks", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "29--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaoyan Chen, Ernesto Jim\u00e9nez-Ruiz, Ian Horrocks, and Charles Sutton. 2019a. Colnet: Embedding the se- mantics of web tables for column type prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 29-36.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning semantic annotations for tabular data", "authors": [ { "first": "Jiaoyan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ernesto", "middle": [], "last": "Jim\u00e9nez-Ruiz", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Horrocks", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.00781" ] }, "num": null, "urls": [], "raw_text": "Jiaoyan Chen, Ernesto Jim\u00e9nez-Ruiz, Ian Horrocks, and Charles Sutton. 2019b. Learning seman- tic annotations for tabular data. arXiv preprint arXiv:1906.00781.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hybridqa: A dataset of multi-hop question answering over tabular and textual data", "authors": [ { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hanwen", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wenhan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "William", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.07347" ] }, "num": null, "urls": [], "raw_text": "Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Wang. 2020a. Hybridqa: A dataset of multi-hop question answering over tabular and textual data. 
arXiv preprint arXiv:2004.07347.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Leveraging schema labels to enhance dataset search", "authors": [ { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Haiyan", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Heflin", "suffix": "" }, { "first": "Brian", "middle": [ "D" ], "last": "Davison", "suffix": "" } ], "year": 2020, "venue": "Advances in Information Retrieval", "volume": "12035", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiyu Chen, Haiyan Jia, Jeff Heflin, and Brian D Davi- son. 2020b. Leveraging schema labels to enhance dataset search. Advances in Information Retrieval, 12035:267.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Table search using a deep contextualized language model", "authors": [ { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Trabelsi", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Heflin", "suffix": "" }, { "first": "Yinan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Brian", "middle": [ "D" ], "last": "Davison", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "589--598", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiyu Chen, Mohamed Trabelsi, Jeff Heflin, Yinan Xu, and Brian D Davison. 2020c. Table search using a deep contextualized language model. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 589-598.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Turl: Table understanding through representation learning", "authors": [ { "first": "Xiang", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Lees", "suffix": "" }, { "first": "You", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.14806" ] }, "num": null, "urls": [], "raw_text": "Xiang Deng, Huan Sun, Alyssa Lees, You Wu, and Cong Yu. 2020. Turl: Table understanding through repre- sentation learning. arXiv preprint arXiv:2006.14806.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mate: Multi-view attention for table transformer efficiency", "authors": [ { "first": "Julian", "middle": [], "last": "Eisenschlos", "suffix": "" }, { "first": "Maharshi", "middle": [], "last": "Gor", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "7606--7619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian Eisenschlos, Maharshi Gor, Thomas Mueller, and William Cohen. 2021. Mate: Multi-view attention for table transformer efficiency. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7606-7619.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Tablenet: An approach for determining finegrained relations for wikipedia tables", "authors": [ { "first": "Besnik", "middle": [], "last": "Fetahu", "suffix": "" }, { "first": "Avishek", "middle": [], "last": "Anand", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Koutraki", "suffix": "" } ], "year": 2019, "venue": "The World Wide Web Conference", "volume": "", "issue": "", "pages": "2736--2742", "other_ids": {}, "num": null, "urls": [], "raw_text": "Besnik Fetahu, Avishek Anand, and Maria Koutraki. 2019. Tablenet: An approach for determining fine- grained relations for wikipedia tables. In The World Wide Web Conference, pages 2736-2742.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Tabular cell classification using pre-trained cell embeddings", "authors": [ { "first": "Jay", "middle": [], "last": "Majid Ghasemi Gol", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Pujara", "suffix": "" }, { "first": "", "middle": [], "last": "Szekely", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE International Conference on Data Mining (ICDM)", "volume": "", "issue": "", "pages": "230--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Majid Ghasemi Gol, Jay Pujara, and Pedro Szekely. 2019. Tabular cell classification using pre-trained cell embeddings. In 2019 IEEE International Confer- ence on Data Mining (ICDM), pages 230-239. IEEE.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Open domain question answering over tables via dense retrieval", "authors": [ { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Syrine", "middle": [], "last": "Krichene", "suffix": "" }, { "first": "Julian", "middle": [ "Martin" ], "last": "Eisenschlos", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.12011" ] }, "num": null, "urls": [], "raw_text": "Jonathan Herzig, Thomas M\u00fcller, Syrine Krichene, and Julian Martin Eisenschlos. 2021. Open domain ques- tion answering over tables via dense retrieval. 
arXiv preprint arXiv:2103.12011.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Tapas: Weakly supervised table parsing via pre-training", "authors": [ { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Krzysztof", "middle": [], "last": "Nowak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Piccinno", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Eisenschlos", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4320--4333", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Mueller, Francesco Piccinno, and Julian Eisensch- los. 2020. Tapas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 4320-4333.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Tabbie: Pretrained representations of tabular data", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Iida", "suffix": "" }, { "first": "Dung", "middle": [], "last": "Thai", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3446--3456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroshi Iida, Dung Thai, Varun Manjunatha, and Mohit Iyyer. 2021. Tabbie: Pretrained representations of tabular data. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3446-3456.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Tables as semi-structured knowledge for question answering", "authors": [ { "first": "Peter", "middle": [], "last": "Sujay Kumar Jauhar", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Turney", "suffix": "" }, { "first": "", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "474--483", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sujay Kumar Jauhar, Peter Turney, and Eduard Hovy. 2016. Tables as semi-structured knowledge for ques- tion answering. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 474-483.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dense passage retrieval for opendomain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6769--6781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Table recognition in spreadsheets via a graph representation", "authors": [ { "first": "Elvis", "middle": [], "last": "Koci", "suffix": "" }, { "first": "Maik", "middle": [], "last": "Thiele", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Lehner", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "Romero", "suffix": "" } ], "year": 2018, "venue": "2018 13th IAPR International Workshop on Document Analysis Systems (DAS)", "volume": "", "issue": "", "pages": "139--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elvis Koci, Maik Thiele, Wolfgang Lehner, and Oscar Romero. 2018. Table recognition in spreadsheets via a graph representation. In 2018 13th IAPR Inter- national Workshop on Document Analysis Systems (DAS), pages 139-144. IEEE.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dot: An efficient double transformer for nlp tasks with tables", "authors": [ { "first": "Syrine", "middle": [], "last": "Krichene", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Eisenschlos", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "3273--3283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Syrine Krichene, Thomas Mueller, and Julian Eisensch- los. 2021. Dot: An efficient double transformer for nlp tasks with tables. 
In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3273-3283.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Natural questions: a benchmark for question answering research", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Redfield", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Danielle", "middle": [], "last": "Epstein", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "453--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453- 466.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Latent retrieval for weakly supervised open domain question answering", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open do- main question answering. In Proceedings of the 57th", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "6086--6096", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 6086-6096.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Tapex: Table pre-training via learning a neural sql executor", "authors": [ { "first": "Qian", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiaqi", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Zeqi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jianguang", "middle": [], "last": "Lou", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2107.07653" ] }, "num": null, "urls": [], "raw_text": "Qian Liu, Bei Chen, Jiaqi Guo, Zeqi Lin, and Jian- guang Lou. 2021. Tapex: Table pre-training via learning a neural sql executor. 
arXiv preprint arXiv:2107.07653.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Open domain question answering with a unified knowledge interface", "authors": [ { "first": "Kaixin", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2110.08417" ] }, "num": null, "urls": [], "raw_text": "Kaixin Ma, Hao Cheng, Xiaodong Liu, Eric Nyberg, and Jianfeng Gao. 2021. Open domain question an- swering with a unified knowledge interface. arXiv preprint arXiv:2110.08417.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Ambigqa: Answering ambiguous open-domain questions", "authors": [ { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5783--5797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. Ambigqa: Answering am- biguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 5783- 5797.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Unik-qa: Unified representations of structured and unstructured knowledge for open-domain question answering", "authors": [ { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Peshterliev", "suffix": "" }, { "first": "Dmytro", "middle": [], "last": "Okhonko", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schlichtkrull", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2021, "venue": "", "volume": "54", "issue": "", "pages": "57--60", "other_ids": { "arXiv": [ "arXiv:2012.14610" ] }, "num": null, "urls": [], "raw_text": "Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2021. Unik-qa: Unified representations of structured and unstructured knowledge for open-domain question answering. 
arXiv preprint arXiv:2012.14610, 54:57- 60.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Compositional semantic parsing on semi-structured tables", "authors": [ { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1470--1480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Panupong Pasupat and Percy Liang. 2015. Composi- tional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470- 1480.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Answering table queries on the web using column keywords", "authors": [ { "first": "Rakesh", "middle": [], "last": "Pimplikar", "suffix": "" }, { "first": "Sunita", "middle": [], "last": "Sarawagi", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the VLDB Endowment", "volume": "5", "issue": "", "pages": "908--919", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rakesh Pimplikar and Sunita Sarawagi. 2012. Answer- ing table queries on the web using column keywords. Proceedings of the VLDB Endowment, 5(10):908- 919.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Qa dataset explosion: A taxonomy of nlp resources for question answering and reading comprehension", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2107.12708" ] }, "num": null, "urls": [], "raw_text": "Anna Rogers, Matt Gardner, and Isabelle Augenstein. 2021. Qa dataset explosion: A taxonomy of nlp resources for question answering and reading com- prehension. arXiv preprint arXiv:2107.12708.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Alon Halevyd, Hongrae Leed, Fei Wud, Reynold Xin, and Cong Yud. 2012. Finding related tables", "authors": [ { "first": "Anish", "middle": [], "last": "Das Sarmad", "suffix": "" }, { "first": "Lujun", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Nitin", "middle": [], "last": "Guptad", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anish Das Sarmad, Lujun Fang, Nitin Guptad, Alon Halevyd, Hongrae Leed, Fei Wud, Reynold Xin, and Cong Yud. 2012. Finding related tables.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Improved table retrieval using multiple context embeddings for attributes", "authors": [ { "first": "Mohamed", "middle": [], "last": "Trabelsi", "suffix": "" }, { "first": "D", "middle": [], "last": "Brian", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Davison", "suffix": "" }, { "first": "", "middle": [], "last": "Heflin", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE International Conference on Big Data (Big Data)", "volume": "", "issue": "", "pages": "1238--1244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohamed Trabelsi, Brian D Davison, and Jeff Heflin. 2019. 
Improved table retrieval using multiple con- text embeddings for attributes. In 2019 IEEE Inter- national Conference on Big Data (Big Data), pages 1238-1244. IEEE.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A graph-based approach for inferring semantic descriptions of wikipedia tables", "authors": [ { "first": "Binh", "middle": [], "last": "Vu", "suffix": "" }, { "first": "A", "middle": [], "last": "Craig", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Knoblock", "suffix": "" }, { "first": "Minh", "middle": [], "last": "Szekely", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Pham", "suffix": "" }, { "first": "", "middle": [], "last": "Pujara", "suffix": "" } ], "year": 2021, "venue": "International Semantic Web Conference", "volume": "", "issue": "", "pages": "304--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Binh Vu, Craig A Knoblock, Pedro Szekely, Minh Pham, and Jay Pujara. 2021. A graph-based approach for inferring semantic descriptions of wikipedia tables. In International Semantic Web Conference, pages 304-320. Springer.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Rat-sql: Relation-aware schema encoding and linking for textto-sql parsers", "authors": [ { "first": "Bailin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Oleksandr", "middle": [], "last": "Polozov", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7567--7578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. Rat-sql: Relation-aware schema encoding and linking for text- to-sql parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 7567-7578.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Retrieving complex tables with multi-granular graph representation learning", "authors": [ { "first": "Fei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kexuan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Muhao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Pujara", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Szekely", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2105.01736" ] }, "num": null, "urls": [], "raw_text": "Fei Wang, Kexuan Sun, Muhao Chen, Jay Pujara, and Pedro Szekely. 2021a. Retrieving complex tables with multi-granular graph representation learning. 
arXiv preprint arXiv:2105.01736.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Tuta: Treebased transformers for generally structured table pretraining", "authors": [ { "first": "Zhiruo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haoyu", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Han", "suffix": "" }, { "first": "Dongmei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "1780--1790", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiruo Wang, Haoyu Dong, Ran Jia, Jia Li, Zhiyi Fu, Shi Han, and Dongmei Zhang. 2021b. Tuta: Tree- based transformers for generally structured table pre- training. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1780-1790.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Break it down: A question understanding benchmark", "authors": [ { "first": "Tomer", "middle": [], "last": "Wolfson", "suffix": "" }, { "first": "Mor", "middle": [], "last": "Geva", "suffix": "" }, { "first": "Ankit", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Deutch", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "183--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gard- ner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question understand- ing benchmark. Transactions of the Association for Computational Linguistics, 8:183-198.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models", "authors": [ { "first": "Tianbao", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Chen", "middle": [ "Henry" ], "last": "Wu", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Ruiqi", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Scholak", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "I", "middle": [], "last": "Sida", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2201.05966" ] }, "num": null, "urls": [], "raw_text": "Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text lan- guage models. 
arXiv preprint arXiv:2201.05966.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Tableformer: Robust transformer modeling for tabletext encoding", "authors": [ { "first": "Jingfeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Shachi", "middle": [], "last": "Paul", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2203.00274" ] }, "num": null, "urls": [], "raw_text": "Jingfeng Yang, Aditya Gupta, Shyam Upadhyay, Luheng He, Rahul Goel, and Shachi Paul. 2022. Tableformer: Robust transformer modeling for table- text encoding. arXiv preprint arXiv:2203.00274.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Tabert: Pretraining for joint understanding of textual and tabular data", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8413--8426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Se- bastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413-8426.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Grappa: Grammar-augmented pre-training for table semantic parsing", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xi", "middle": [ "Victoria" ], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Yi Chern Tan, Xinyi Yang, Dragomir Radev, Caiming Xiong, et al. 2020. Grappa: Grammar-augmented pre-training for table semantic parsing. 
In International Conference on Learning Representations.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "James", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qingning", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Shanelle", "middle": [], "last": "Roman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3911--3921", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingn- ing Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 3911-3921.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Representations for question answering from documents with tables and text", "authors": [ { "first": "Vicky", "middle": [], "last": "Zayats", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2101.10573" ] }, "num": null, "urls": [], "raw_text": "Vicky Zayats, Kristina Toutanova, and Mari Osten- dorf. 2021. Representations for question answering from documents with tables and text. arXiv preprint arXiv:2101.10573.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Table fact verification with structure-aware transformer", "authors": [ { "first": "Hongzhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yingyao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sirui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xuezhi", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Fuzheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongyuan", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1624--1629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongzhi Zhang, Yingyao Wang, Sirui Wang, Xuezhi Cao, Fuzheng Zhang, and Zhongyuan Wang. 2020. Table fact verification with structure-aware trans- former. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1624-1629.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Ta-ble2vec: Neural word and entity embeddings for table population and retrieval", "authors": [ { "first": "Li", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Krisztian", "middle": [], "last": "Balog", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "1029--1032", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Zhang, Shuo Zhang, and Krisztian Balog. 2019. Ta- ble2vec: Neural word and entity embeddings for ta- ble population and retrieval. In Proceedings of the 42nd International ACM SIGIR Conference on Re- search and Development in Information Retrieval, pages 1029-1032.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Ad hoc table retrieval using semantic similarity", "authors": [ { "first": "Shuo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Krisztian", "middle": [], "last": "Balog", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 world wide web conference", "volume": "", "issue": "", "pages": "1553--1562", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuo Zhang and Krisztian Balog. 2018. Ad hoc table retrieval using semantic similarity. In Proceedings of the 2018 world wide web conference, pages 1553- 1562.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Web table extraction, retrieval, and augmentation: A survey", "authors": [ { "first": "Shuo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Krisztian", "middle": [], "last": "Balog", "suffix": "" } ], "year": 2020, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "11", "issue": "2", "pages": "1--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuo Zhang and Krisztian Balog. 2020. Web table ex- traction, retrieval, and augmentation: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(2):1-35.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "authors": [ { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.00103" ] }, "num": null, "urls": [], "raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Table (a) matches the question by its title, (b) matches topic in title and answer type in header, and in (c) knowing the column alignment helps." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "-header (same column) header-to-header (diff. column) header-to-cell (same column) header-to-cell (diff. column) cell-to-header (same column) cell-to-header (diff. column) cell-to-cell (same column) cell-to-cell (diff. column) Illustration of three explicit structure encoding methods." 
}, "TABREF1": { "html": null, "num": null, "text": "Comparison of DPR and DTR embeddings.Training Process DTR also has a more complex training process than DPR. As summarized inFigure3, DTR has a three-stage training using tables.", "type_str": "table", "content": "
[Figure 3: training pipelines of DPR vs. DTR, both starting from BERT. DPR is fine-tuned directly on NQ-table with hard negatives; DTR first pre-trains on all Wikipedia tables with Masked LM and the Inverse Cloze Task, then fine-tunes on NQ-table with hard negatives.]
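The final stage for both retrievers is contrastive fine-tuning on NQ-table with hard negatives. As a minimal sketch (not the exact DPR/DTR implementation), the DPR-style objective scores each question against its gold table, the other in-batch positives, and mined hard negatives; tensor names and shapes below are illustrative:

import torch
import torch.nn.functional as F

def dpr_contrastive_loss(q_emb, pos_emb, hard_neg_emb):
    # q_emb, pos_emb: [B, d]; hard_neg_emb: [N, d] mined hard-negative tables.
    # Each question's positive sits on the diagonal of the first B columns,
    # so other in-batch positives and all hard negatives act as negatives.
    candidates = torch.cat([pos_emb, hard_neg_emb], dim=0)   # [B + N, d]
    scores = q_emb @ candidates.t()                          # [B, B + N]
    targets = torch.arange(q_emb.size(0), device=q_emb.device)
    return F.cross_entropy(scores, targets)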
" }, "TABREF3": { "html": null, "num": null, "text": "", "type_str": "table", "content": "" }, "TABREF4": { "html": null, "num": null, "text": "DTR (medium) 62.32 82.51 86.75 91.51 94.26 DTR (large) 63.98 84.27 89.65 93.48 95.65", "type_str": "table", "content": "
BERT-table  60.97  79.81  85.51  88.20  91.62
DPR         57.04  80.54  86.13  89.54  92.34
DPR-table   67.91  84.89  88.72  90.58  92.86
" }, "TABREF5": { "html": null, "num": null, "text": "", "type_str": "table", "content": "" }, "TABREF7": { "html": null, "num": null, "text": "Top-k table retrieval accuracy on shuffled NQ tables, using the original DPR, the fine-tuned (DPRtable), and the fine-tuned on shuffled tables (DPR-table w/ shuffle).", "type_str": "table", "content": "
" }, "TABREF9": { "html": null, "num": null, "text": "NQ-table retrieval accuracy with linearized table w/ and w/o cell and row delimiters. cell linearizes table by only inserting delimiters between cells, row only inserts delimiters between rows, and none inserts neither.", "type_str": "table", "content": "
" }, "TABREF10": { "html": null, "num": null, "text": "Model Design", "type_str": "table", "content": "
From the previous section, we conclude that DPR
can already encode simple table structures based on
structure-preserving linearized tables with correct
cell order and delimiters. The next question is \"can
explicit table-specific model designs encode more
complex structure that is useful beyond the capacity
of implicit encoding?\"
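To make the linearization concrete, here is a minimal sketch; the delimiter strings and the toy table below are our own placeholders, since the paper specifies only that delimiters separate cells and rows, not their exact surface forms:

def linearize_table(title, header, rows, cell_sep=" | ", row_sep=" [ROW] "):
    # Flatten a table into one string: title, then the header row, then data
    # rows, inserting delimiters between cells and between rows.
    row_strings = [cell_sep.join(header)]
    row_strings += [cell_sep.join(str(c) for c in row) for row in rows]
    return title + " " + row_sep.join(row_strings)

# linearize_table("Club champions", ["Club", "Titles"], [["A", "3"], ["B", "1"]])
# -> "Club champions Club | Titles [ROW] A | 3 [ROW] B | 1"

The resulting string can be fed directly to an off-the-shelf text retriever such as DPR.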
" }, "TABREF11": { "html": null, "num": null, "text": "Structure encoding methods used in previous works for table-related tasks.", "type_str": "table", "content": "" }, "TABREF12": { "html": null, "num": null, "text": "", "type_str": "table", "content": "
As shown in the results table below, methods that explicitly
encode table structures, either with additional
row/column embeddings (w/ emb), a hard attention
mask (w/ mask), or a soft relation-based attention
bias (w/ bias), do not bring improvements over
the DPR-table baseline, indicating that, given the
capacity of DPR to implicitly encode structure,
these explicit designs may be unnecessary.
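For illustration, a minimal sketch of the auxiliary row/column embedding idea (w/ emb), assuming per-token row and column indices are produced during linearization; the sizes and module names are illustrative rather than the exact DTR configuration:

import torch.nn as nn

class RowColumnEmbeddings(nn.Module):
    # Adds learned row/column index embeddings to the token embeddings
    # before they enter the transformer encoder (index 0 for non-table tokens).
    def __init__(self, hidden_size=768, max_rows=256, max_cols=64):
        super().__init__()
        self.row_emb = nn.Embedding(max_rows, hidden_size)
        self.col_emb = nn.Embedding(max_cols, hidden_size)

    def forward(self, token_embeddings, row_ids, col_ids):
        # token_embeddings: [B, L, d]; row_ids, col_ids: [B, L]
        return token_embeddings + self.row_emb(row_ids) + self.col_emb(col_ids)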
" }, "TABREF13": { "html": null, "num": null, "text": "DPR-table 67.91 84.89 88.72 90.58 92.86", "type_str": "table", "content": "
w/ emb   62.11  81.88  86.96  89.86  93.06
w/ mask  65.73  81.99  86.02  89.23  92.86
w/ bias  65.42  82.23  86.75  89.54  92.13
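Similarly, a minimal single-head sketch of the soft relation-based attention bias (w/ bias): a learned scalar bias for each token-pair relation (e.g., cell-to-cell in the same column) is added to the attention logits, while a hard attention mask (w/ mask) would instead set disallowed pairs to -inf. Shapes and names are illustrative:

import torch
import torch.nn as nn

class RelationBiasedAttention(nn.Module):
    def __init__(self, hidden_size=768, num_relations=16):
        super().__init__()
        self.q = nn.Linear(hidden_size, hidden_size)
        self.k = nn.Linear(hidden_size, hidden_size)
        self.v = nn.Linear(hidden_size, hidden_size)
        self.rel_bias = nn.Embedding(num_relations, 1)  # one scalar bias per relation type

    def forward(self, x, relation_ids):
        # x: [B, L, d]; relation_ids: [B, L, L] integer relation type per token pair.
        d = x.size(-1)
        logits = self.q(x) @ self.k(x).transpose(-2, -1) / d ** 0.5  # [B, L, L]
        logits = logits + self.rel_bias(relation_ids).squeeze(-1)    # soft structural bias
        return torch.softmax(logits, dim=-1) @ self.v(x)             # [B, L, d]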
" }, "TABREF14": { "html": null, "num": null, "text": "Top-k table retrieval accuracy on NQ-table test. DPR-table is fine-tuned on the NQ-table without any table-specific modules, while the other three methods add auxiliary row/column embeddings (w/ emb), hard attention mask (w/ mask), and soft relation-based attention bias respectively (w/ bias).Ma et al. (2021) showed that verbalizing structured knowledge into fluent text bring further gains over raw format for open-domain QA. Different from prior work, our paper analyzes different strategies for encoding tables with a focus on the task of table retrieval.", "type_str": "table", "content": "
6 Related Work
Open-domain Question Answering Open-domain QA systems often use a retriever-reader
pipeline, where the retriever retrieves relevant
contexts and the reader extracts or generates
answers from them. Because the candidate context
corpus is usually large with millions of documents,
good retrieval accuracy is crucial for open-domain
QA systems (Karpukhin et al., 2020). Beyond
texts, another common source for answering
open-domain questions is tables. Herzig et al.
(2021) recently identified a subset of the Natural
Questions (NQ) dataset (Kwiatkowski et al., 2019)
that is answerable by Wikipedia tables. Oguz
et al. (2021) found that incorporating structured
knowledge is beneficial for open-domain QA tasks.
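As a minimal illustration of the retrieval step in such a pipeline, assuming the question and table encoders have already produced dense embeddings (at scale an approximate-nearest-neighbor index such as FAISS would replace this brute-force scoring):

import numpy as np

def retrieve_top_k(question_emb, table_embs, k=10):
    # question_emb: [d]; table_embs: [num_tables, d] pre-encoded linearized tables.
    scores = table_embs @ question_emb      # inner-product relevance scores
    return np.argsort(-scores)[:k]          # indices of the k best-matching tables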
" }, "TABREF15": { "html": null, "num": null, "text": "Earlier works focus on web table search in response to keyword queries (Cafarella", "type_str": "table", "content": "" }, "TABREF16": { "html": null, "num": null, "text": "table retrieval emphasizes content rather than table structure. Our experiments with the text-generic Dense Passage Retriever (DPR) and the state-ofthe-art table-specific Dense Table Retriever (DTR) demonstrate that DPR can already encode simple structures based on linearized tables and tablespecific designs such as auxiliary embeddings, hard attention mask, and soft attention bias are not necessary. Our findings suggest that future development on table retrieval can potentially be built upon successful text retrievers and table-specific model designs should be carefully examined to avoid unnecessary complexity.", "type_str": "table", "content": "
" } } } }