{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:16:14.482720Z" }, "title": "RYANSQL: Recursively Applying Sketch-based Slot Fillings for Complex Text-to-SQL in Cross-Domain Databases", "authors": [ { "first": "Donghyun", "middle": [], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Team Kakao Enterprise and School of Software Sungkyunkwan University", "institution": "", "location": {} }, "email": "" }, { "first": "Eunggyun", "middle": [], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Team Kakao Enterprise", "institution": "", "location": {} }, "email": "" }, { "first": "Dong", "middle": [ "Ryeol" ], "last": "Shin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sungkyunkwan University", "location": {} }, "email": "drshin@skku.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text-to-SQL is the problem of converting a user question into an SQL query, when the question and database are given. In this article, we present a neural network approach called RYANSQL (Recursively Yielding Annotation Network for SQL) to solve complex Text-to-SQL tasks for cross-domain databases. Statement Position Code (SPC) is defined to transform a nested SQL query into a set of non-nested SELECT statements; a sketch-based slot-filling approach is proposed to synthesize each SELECT statement for its corresponding SPC. Additionally, two input manipulation methods are presented to improve generation performance further. RYANSQL achieved competitive result of 58.2% accuracy on the challenging Spider benchmark. At the time of submission (April 2020), RYANSQL v2, a variant of original RYANSQL, is positioned at 3rd place among all systems and 1st place among the systems not using database content Submission", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Text-to-SQL is the problem of converting a user question into an SQL query, when the question and database are given. In this article, we present a neural network approach called RYANSQL (Recursively Yielding Annotation Network for SQL) to solve complex Text-to-SQL tasks for cross-domain databases. Statement Position Code (SPC) is defined to transform a nested SQL query into a set of non-nested SELECT statements; a sketch-based slot-filling approach is proposed to synthesize each SELECT statement for its corresponding SPC. Additionally, two input manipulation methods are presented to improve generation performance further. RYANSQL achieved competitive result of 58.2% accuracy on the challenging Spider benchmark. At the time of submission (April 2020), RYANSQL v2, a variant of original RYANSQL, is positioned at 3rd place among all systems and 1st place among the systems not using database content Submission", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "with 60.6% exact matching accuracy. The source code is available at https: // github. com/ kakaoenterprise/ RYANSQL .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Relational databases are widely used to maintain and query structured data sets in many fields such as healthcare (Hillestad et al. 2005) , financial markets (Beck, Demirguc-Kunt, and Levine 2000) , or customer relation management (Ngai, Xiu, and Chau 2009) . 
Most relational databases support Structured Query Language (SQL) to access the stored data. Although SQL is expressive and powerful, it is quite difficult to master, especially for non-technical users.", "cite_spans": [ { "start": 114, "end": 137, "text": "(Hillestad et al. 2005)", "ref_id": "BIBREF13" }, { "start": 158, "end": 196, "text": "(Beck, Demirguc-Kunt, and Levine 2000)", "ref_id": "BIBREF2" }, { "start": 231, "end": 257, "text": "(Ngai, Xiu, and Chau 2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Text-to-SQL is the task of generating an SQL query when a user question and a target database are given. The examples are shown in Figure 1 . Recently proposed neural network architectures achieved more than 80% exact matching accuracy on the well-known Text-to-SQL benchmarks such as ATIS (Air Travel Information Service), GeoQuery, and WikiSQL (Xu, Liu, and Song 2017; Yu et al. 2018a; Shi et al. 2018; Dong and Lapata 2018; Hwang et al. 2019; He et al. 2018) . However, those benchmarks have shortcomings that restrict their applications. The ATIS (Price 1990 ) and GeoQuery (Zelle and Mooney 1996) benchmarks assume the same database across the training and test data set, thus the trained systems cannot process a newly encountered database at inference time. The WikiSQL (Zhong, Xiong, and Socher 2017) benchmark assumes crossdomain databases. Cross-domain means that the databases for training and test data sets are different; the system should predict with an unseen database as its input during testing. Meanwhile, the complexity of SQL queries and databases in the WikiSQL benchmark is somewhat limited. WikiSQL assumes that an input database always has only one table. It also assumes that the resultant SQL is non-nested, and contains SELECT and WHERE clauses only. Figure 1(a) shows an example from the WikiSQL data set.", "cite_spans": [ { "start": 346, "end": 370, "text": "(Xu, Liu, and Song 2017;", "ref_id": "BIBREF34" }, { "start": 371, "end": 387, "text": "Yu et al. 2018a;", "ref_id": "BIBREF37" }, { "start": 388, "end": 404, "text": "Shi et al. 2018;", "ref_id": "BIBREF27" }, { "start": 405, "end": 426, "text": "Dong and Lapata 2018;", "ref_id": "BIBREF9" }, { "start": 427, "end": 445, "text": "Hwang et al. 2019;", "ref_id": "BIBREF15" }, { "start": 446, "end": 461, "text": "He et al. 2018)", "ref_id": "BIBREF12" }, { "start": 551, "end": 562, "text": "(Price 1990", "ref_id": "BIBREF26" }, { "start": 578, "end": 601, "text": "(Zelle and Mooney 1996)", "ref_id": "BIBREF40" }, { "start": 777, "end": 808, "text": "(Zhong, Xiong, and Socher 2017)", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 131, "end": 139, "text": "Figure 1", "ref_id": null }, { "start": 1279, "end": 1290, "text": "Figure 1(a)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Different from those benchmarks, the Spider benchmark proposed by Yu et al. (2018c) contains complex SQL queries with cross-domain databases. The SQL queries in Spider benchmark could contain nested queries with multiple table JOINs, and clauses like ORDERBY, GROUPBY, and HAVING. Figure 1(b) shows an example from the Spider benchmark; Yu et al. (2018c) showed that the state-of-the-art systems for the previous benchmarks do not perform well on the Spider data set.", "cite_spans": [ { "start": 66, "end": 83, "text": "Yu et al. (2018c)", "ref_id": "BIBREF39" }, { "start": 337, "end": 354, "text": "Yu et al. 
(2018c)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 281, "end": 292, "text": "Figure 1(b)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this article, we propose a novel network architecture called RYANSQL (Recursively Yielding Annotation Network for SQL) to handle such complex, cross-domain Text-to-SQL problems. The proposed approach generates nested queries by recursively yielding their component SELECT statements. A sketch-based slot-filling approach is proposed to predict each SELECT statement. In addition, two simple but effective input manipulation methods are proposed to improve the overall system performance. Among the systems not using database content, the proposed RYANSQL and its variant RYANSQL v2, with the aid of BERT (Devlin et al. 2019) , improve the previous state-ofthe-art system SLSQL (Lei et al. 2020 ) by 2.5% and 4.9%, respectively, in terms of the test set exact matching accuracy. RYANSQL v2 is ranked at 3rd place among all systems including those using database content. Our contributions are summarized as follows:", "cite_spans": [ { "start": 607, "end": 627, "text": "(Devlin et al. 2019)", "ref_id": "BIBREF7" }, { "start": 680, "end": 696, "text": "(Lei et al. 2020", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We propose a detailed sketch for the complex SELECT statements, along with a network architecture to fill the slots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Statement Position Code (SPC) is introduced to recursively predict nested queries with sketch-based slot-filling algorithm. (b) A Text-to-SQL example from the Spider data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Text-to-SQL examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "We suggest two simple input manipulation methods to improve performance further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "The Text-to-SQL task considered in this article is defined as follows: Given a question with n tokens Q = {w Q 1 , . . . , w Q n } and a DB schema with t tables and f foreign key", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2." }, { "text": "relations D = {T 1 , . . . , T t , F 1 , . . . , F f }, find S, the SQL translation of Q. Each table T i consists of a table name with t i words {w T i 1 , . . . , w T i t i }, and a set of columns {C j , . . . , C k }. Each column C j consists of a column name {w C j 1 , . . . , w C j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2." }, { "text": "c j }, and a marker to check if the column is a primary key.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2." }, { "text": "For an SQL query S we define a non-nested form of S, N(S) = {(P 1 , S 1 ), . . . , (P l , S l )}. In the definition, P i is the i-th SPC, and S i is its corresponding SELECT statement. 
Table 1 shows examples of a natural language query Q, corresponding SQL translation S, and non-nested form N(S).", "cite_spans": [], "ref_spans": [ { "start": 185, "end": 192, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Task Definition", "sec_num": "2." }, { "text": "Table 1: Examples of a user question Q, SQL translation S, and non-nested form N(S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Case 1. Q: Find the names of scientists who are not working on the project with the highest hours. S: SELECT name FROM scientists EXCEPT (SELECT T3.name FROM assignedto AS T1 JOIN projects AS T2 ON T1.project = T2.code JOIN scientists AS T3 ON T1.scientist = T3.SSN WHERE T2.hours = ( SELECT max(hours) FROM projects ) ). N(S): P 1 = [ NONE ], S 1 = SELECT name FROM scientists EXCEPT S 2 ; P 2 = [ EXCEPT ], S 2 = SELECT T3.name FROM assignedto AS T1 JOIN projects AS T2 ON T1.project = T2.code JOIN scientists AS T3 ON T1.scientist = T3.SSN WHERE T2.hours = S 3 ; P 3 = [ EXCEPT, WHERE ], S 3 = SELECT max(hours) FROM projects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Case 2. Q: Find the names of accounts whose checking balance is above the average checking balance, but savings balance is below the average savings balance. S: SELECT T1.name FROM accounts AS T1 JOIN checking AS T2 ON T1.custid = T2.custid WHERE T2.balance > (SELECT avg(balance) FROM checking) INTERSECT SELECT T1.name FROM accounts AS T1 JOIN savings AS T2 ON T1.custid = T2.custid WHERE T2.balance < (SELECT avg(balance) FROM savings). N(S): P 1 = [ NONE ], S 1 = SELECT T1.name FROM accounts AS T1 JOIN checking AS T2 ON T1.custid = T2.custid WHERE T2.balance > S 2 INTERSECT S 3 ; P 2 = [ WHERE ], S 2 = SELECT avg(balance) FROM checking ; P 3 = [ INTERSECT ], S 3 = SELECT T1.name FROM accounts AS T1 JOIN savings AS T2 ON T1.custid = T2.custid WHERE T2.balance < S 4 ; P 4 = [ INTERSECT, WHERE ], S 4 = SELECT avg(balance) FROM savings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Each SPC P could be considered as a sequence of p position code elements, P = [c P 1 , . . . , c P p ]. The possible set of position code elements is {NONE, UNION, INTERSECT, EXCEPT, WHERE, HAVING, PARALLEL}. 
NONE represents the outermost statement, and PARALLEL means a parallel element inside a single clause, for example, the second element of the WHERE clause. Other position code elements represent corresponding SQL clauses. Because it is straightforward to construct S from N(S), the goal of the proposed system is to construct N(S) for the given Q and D. To achieve the goal, the proposed system first sets the initial SPC P 1 = [NONE] , and predicts its corresponding SELECT statement and nested SPCs. The system recursively finds the corresponding SELECT statements for the remaining SPCs, until every SPC has its own corresponding SELECT statement.", "cite_spans": [ { "start": 312, "end": 318, "text": "[NONE]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2." }, { "text": "Figure 2: Network architecture of the proposed input encoder. (Diagram not reproduced; a marker in the original figure denotes self-attention.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null },
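The recursion described in the preceding paragraph can be written as a short driver loop. This is an illustrative sketch only: predict_select stands in for the sketch-based slot-filling decoder of Section 3 and is assumed to return a flat SELECT statement together with the SPCs of any nested statements it discovered.

```python
def generate_sql(question, db, predict_select):
    """Build the non-nested form N(S), starting from the initial SPC [NONE].

    predict_select(question, db, spc) is assumed to return a pair
    (statement, nested_spcs): the flat SELECT statement for `spc`, plus the
    SPCs of nested statements found while filling the sketch slots (e.g., a
    WHERE condition whose value is a subquery contributes spc + ("WHERE",)).
    """
    pending = [("NONE",)]        # SPCs that still need a SELECT statement
    non_nested_form = []         # accumulates (SPC, SELECT statement) pairs
    while pending:
        spc = pending.pop(0)
        statement, nested_spcs = predict_select(question, db, spc)
        non_nested_form.append((spc, statement))
        pending.extend(nested_spcs)
    return non_nested_form       # constructing S from N(S) is straightforward
```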
{ "text": "In this section, the method to create the SELECT statement for the given question Q, database D, and SPC P is described. Section 3.1 describes the input encoder; the sketch-based slot-filling decoder is described in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating a SELECT Statement", "sec_num": "3." }, { "text": "Figure 2 shows the overall network architecture of the input encoder. The input encoder consists of five layers: Embedding layer, Embedding Encoder layer, Question-Column Alignment layer, Table Encoder layer, and Question-Table Alignment layer.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "Embedding Layer. To get the embedding vector for a word w in the question, table names, or column names, its word embedding and character embedding are concatenated. The word embedding is initialized with d 1 = 300 dimensional pretrained GloVe (Pennington, Socher, and Manning 2014) word vectors, and is fixed during training. For the character embedding, each character is represented as a trainable vector of dimension d 2 = 50, and we take the maximum value of each dimension over the component characters to get a fixed-length vector. The two vectors are then concatenated to obtain the embedding vector e w \u2208 R d 1 + d 2 . A one-layer highway network (Srivastava, Greff, and Schmidhuber 2015) is applied on top of this representation. For the SPC P, each code element c is represented as a trainable vector of dimension d p = 100.", "cite_spans": [ { "start": 244, "end": 282, "text": "(Pennington, Socher, and Manning 2014)", "ref_id": "BIBREF24" }, { "start": 657, "end": 698, "text": "(Srivastava, Greff, and Schmidhuber 2015)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "Embedding Encoder Layer. A one-dimensional convolution layer with kernel size 3 is applied on top of the SPC element embedding vectors {e P 1 , . . . , e P p }. Max-pooling is applied on the output to get the SPC vector v P \u2208 R d p . Question words and column words are encoded with a CNN with dense connection (Yoon, Lee, and Lee 2018); layer outputs are the hidden question vectors V Q = {v Q 1 , v Q 2 , . . . , v Q n } \u2208 R n\u00d7d , and the hidden column vectors H C = {h C 1 , . . . , h C m } \u2208 R m\u00d7d .", "cite_spans": [ { "start": 311, "end": 336, "text": "(Yoon, Lee, and Lee 2018)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" },
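A minimal sketch of the two layers just described, written in PyTorch style for brevity (the paper's implementation uses TensorFlow); the one-layer highway network is omitted, and the padding choice for the kernel-3 convolution is ours:

```python
import torch
import torch.nn as nn

class EmbeddingLayer(nn.Module):
    """Fixed GloVe word embedding (d1=300) + character embedding max-pooled to d2=50."""
    def __init__(self, glove_weights, n_chars, d2=50):
        super().__init__()
        self.word_emb = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.char_emb = nn.Embedding(n_chars, d2)

    def forward(self, word_ids, char_ids):
        # char_ids: (n_words, max_chars); take the max of each dimension
        # over the component characters to get a fixed-length vector.
        chars = self.char_emb(char_ids).max(dim=1).values        # (n_words, d2)
        return torch.cat([self.word_emb(word_ids), chars], -1)   # (n_words, d1+d2)

class SPCEncoder(nn.Module):
    """Kernel-3 1-D convolution over SPC element embeddings, then max-pooling."""
    def __init__(self, n_codes=7, dp=100):
        super().__init__()
        self.code_emb = nn.Embedding(n_codes, dp)
        # padding=1 is our choice so that SPCs shorter than the kernel still work
        self.conv = nn.Conv1d(dp, dp, kernel_size=3, padding=1)

    def forward(self, code_ids):                        # (p,) element indices
        e = self.code_emb(code_ids).t().unsqueeze(0)    # (1, dp, p)
        return self.conv(e).max(dim=2).values.squeeze(0)  # v_P: (dp,)
```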
{ "text": "Question-Column Alignment Layer. In this layer, the model updates the column vectors with the input question. More precisely, the model first aligns question tokens with column vectors to obtain an attended question vector for each column. The attended question vectors are then fused with the corresponding column vectors to get question context-integrated column vectors. Scaled dot-product attention (Vaswani et al. 2017) is used to align question tokens with column vectors:", "cite_spans": [ { "start": 403, "end": 424, "text": "(Vaswani et al. 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "A QtoC = softmax( H C (V Q ) T / \u221a d ) V Q (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "where the i-th row of A QtoC \u2208 R m\u00d7d is the attended question vector of the i-th column. The heuristic fusion function fusion(x, y), proposed in Hu et al. (2018) , is applied to merge A QtoC with H C :", "cite_spans": [ { "start": 145, "end": 161, "text": "Hu et al. (2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x\u0303 = relu(W r [x; y; x \u2022 y; x \u2212 y]) g = \u03c3(W g [x; y; x \u2022 y; x \u2212 y]) fusion(x, y) = g \u2022 x\u0303 + (1 \u2212 g) \u2022 x F C = fusion(A QtoC , H C )", "eq_num": "(2)" } ], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "where W r and W g are trainable variables, \u03c3 denotes the sigmoid function, \u2022 denotes element-wise multiplication, and F C \u2208 R m\u00d7d is the fused column matrix. Once the column vectors are updated with the question context, a transformer layer (Vaswani et al. 2017 ) is applied on top of F C to capture contextual column information. Layer outputs are the encoded column vectors V C = {v C 1 , . . . , v C m } \u2208 R m\u00d7d .", "cite_spans": [ { "start": 241, "end": 261, "text": "(Vaswani et al. 2017", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "Table Encoder Layer. In this layer, each table is encoded using the vectors of its component columns. A self-attention function f s is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f s (M) = softmax(W 2 tanh(W 1 M T ))M", "eq_num": "(3)" } ], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "where W 1 \u2208 R d\u00d7d and W 2 \u2208 R 1\u00d7d are trainable parameters. For a table T with columns C j , . . . , C k , the hidden table vector is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h T t = f s ([v C j ; ...; v C k ])", "eq_num": "(4)" } ], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "Outputs of the layer are the hidden table vectors H T = {h T 1 , h T 2 , . . . , h T t } \u2208 R t\u00d7d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "Question-Table Alignment Layer. In this layer, the same network architecture as the Question-Column Alignment layer is used to model the table vectors with contextual information of the question. Layer outputs are the encoded table vectors V T = {v T 1 , v T 2 , . . . 
, v T t } \u2208 R t\u00d7d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "Encoder Output. Final outputs of the input encoder are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "(1) Encoded question word vectors V Q = {v Q 1 , . . . , v Q n } \u2208 R n\u00d7d , (2) Encoded column vectors V C = {v C 1 , . . . , v C m } \u2208 R m\u00d7d , (3) Encoded table vectors V T = {v T 1 , . . . , v T t } \u2208 R t\u00d7d , and (4) Encoded SPC v P \u2208 R d p . Additionally, (5) Encoded question vector v Q = f s (V Q ) and (6) Encoded DB schema vector v D = f s (V C ) \u2208 R d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "are calculated for later use in the decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "3.1.1 BERT-based Input Encoder. Inspired by the work of Hwang et al. (2019) and , BERT (Devlin et al. 2019 ) is considered as another version of the input encoder. The input to BERT is constructed by concatenating question words, SPC elements, and column words as follows:", "cite_spans": [ { "start": 56, "end": 75, "text": "Hwang et al. (2019)", "ref_id": "BIBREF15" }, { "start": 87, "end": 106, "text": "(Devlin et al. 2019", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "[CLS], w Q 1 , . . . , w Q n , [SEP], c P 1 , . . . , c P p , [SEP], w C 1 1 , . . . , w C 1 c 1 , [SEP], . . . , [SEP], w C m 1 , . . . , w C m c m , [SEP]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": ". Hidden states of the last layer are retrieved to form V Q and V C ; for V C , the state of each column's last word is taken to represent an encoded column vector. Each table vector v T j is calculated as a self-attended vector of its containing columns; v Q , v D , and v P are calculated as the same. Table 2 shows the proposed sketch for a SELECT statement. The sketch-based slot-filling decoder predicts values for slots of the proposed sketch, as well as the number of slots.", "cite_spans": [], "ref_spans": [ { "start": 304, "end": 311, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Input Encoder", "sec_num": "3.1" }, { "text": "Classifying Base Structure. By the term base structure of a SELECT statement, we refer to the existence of its component clauses and the number of conditions for each clause. We first combine the encoded vectors v Q , v D , and v P to obtain the statement encoding vector v S , as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sketch-based Slot-Filling Decoder", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "hc(x, y) = concat(x, y, |x \u2212 y|, x \u2022 y) (5) v S = W concat(hc(v Q , v D ), v P )", "eq_num": "(6)" } ], "section": "Sketch-based Slot-Filling Decoder", "sec_num": "3.2" }, { "text": "where W \u2208 R d\u00d7(4d+d p ) is a trainable parameter, and function hc(x, y) is the concatenation function for the heuristic matching method proposed in Mou et al. (2016) .", "cite_spans": [ { "start": 148, "end": 165, "text": "Mou et al. 
(2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Sketch-based Slot-Filling Decoder", "sec_num": "3.2" }, { "text": "Eleven values b g , b o , b l , b w , b h , n g , n o , n s , n w , n h", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sketch-based Slot-Filling Decoder", "sec_num": "3.2" }, { "text": ", and c IUEN are classified by applying two fully connected layers on v S . Binary values b g , b o , b l , b w , b h represent the existence of GROUPBY, ORDERBY, LIMIT, WHERE, and HAVING, respectively. Note that FROM and SELECT clauses must exist to form a valid SELECT statement. n g , n o , n s , n w , n h represent Table 2 Proposed sketch for a SELECT statement. $TBL and $COL represent a table and a column, respectively. $AGG is one of {none, max, min, count, sum, avg}, $ARI is one of the arithmetic operators {none, -, +, *, / }, and $COND is one of the conditional operators {between, =, >, <, >=, <=, !=, in, like, is, exists}. $DIST and $NOT are Boolean variables representing the existence of keywords DISTINCT and NOT, respectively. $ORD is a binary value for keywords ASC/DESC, and $CONJ is one of conjunctions {AND, OR}. $VAL is the value for WHERE/HAVING condition; $SEL represents the slot for another SELECT statement.", "cite_spans": [], "ref_spans": [ { "start": 320, "end": 327, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Sketch-based Slot-Filling Decoder", "sec_num": "3.2" }, { "text": "FROM ($TBL) + SELECT $DIST ( $AGG ( $DIST 1 $AGG 1 $COL 1 $ARI $DIST 2 $AGG 2 $COL 2 ) ) + ORDERBY ( ( $DIST 1 $AGG 1 $COL 1 $ARI $DIST 2 $AGG 2 $COL 2 ) $ORD ) * GROUPBY ( $COL ) * LIMIT $NUM WHERE ( $CONJ ( $DIST 1 $AGG 1 $COL 1 $ARI $DIST 2 $AGG 2 $COL 2 ) HAVING $NOT $COND $VAL 1 |$SEL 1 $VAL 2 |$SEL 2 ) * INTERSECT UNION $SEL EXCEPT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "the number of conditions in GROUPBY, ORDERBY, SELECT, WHERE, and HAVING clauses, respectively. The maximal numbers of conditions N g = 3, N o = 3, N s = 6, N w = 4, and N h = 2 are defined for GROUPBY, ORDERBY, SELECT, WHERE, and HAVING clauses, to solve the problem as n-way classification problem. The values of maximal condition numbers are chosen to cover all the training cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "Finally, c IUEN represents the existence of one of INTERSECT, UNION, or EXCEPT, or NONE if no such clause exists. If the value of c IUEN is one of INTERSECT, UNION, or EXCEPT, the corresponding SPC is created, and the SELECT statement for that SPC is generated recursively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "FROM Clause. A list of $TBLs should be decided to predict the FROM clause. 
For each table i, P fromtbl (i|Q, D, P), the probability that table i is included in the FROM clause, is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c i = concat(v T i , v Q , v D , v P ) s i = W 2 tanh(W 1 c i ) P fromtbl (i|Q, D, P) = \u03c3(s) i", "eq_num": "(7)" } ], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "where W 1 , W 2 are trainable variables, s = [s 1 , . . . , s t ] \u2208 R t represents the scores for tables, and \u03c3 denotes the sigmoid function. From now on, we omit the notations Q, D, and P for the sake of simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "Top n t tables with the highest P fromtbl (i) values are chosen. We set an upper bound N t = 6 on the possible number of tables. The formula to get P #tbl (n t ) for each possible n t is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "v T = softmax(s)V T P #tbl (n t ) = softmax(full 2 (v T )) (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "In the equation, full 2 means the application of two fully connected layers, and the table score vector s is from Equation (7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "During the inference, the $TBLs are classified first, and the $COLs for other clauses are chosen among the columns of the classified $TBLs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "SELECT Clause. The decoder first generates N s conditions to predict the SELECT clause. Because each condition depends on different parts of Q, we calculate an attended question vector for each condition:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "A Q sel = W 3 tanh(V Q W 1 + v P W 2 ) V Q sel = softmax(A Q sel )V Q (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "where W 1 , W 2 \u2208 R d\u00d7d and W 3 \u2208 R N s \u00d7d are trainable parameters, and V Q sel \u2208 R N s \u00d7d is the matrix of attended question vectors for the N s conditions. v P is tiled to match the rows of V Q .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "For m columns and N s conditions, P sel col1 \u2208 R N s \u00d7m , the probability matrix for each column to fill the slot $COL 1 of each condition, is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A C sel [i] = W 6 tanh(V Q sel [i]W 4 + V C W 5 ) P sel col1 [i] = softmax(A C sel [i])", "eq_num": "(10)" } ], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "where W 4 , W 5 \u2208 R d\u00d7d and W 6 \u2208 R 1\u00d7d are trainable parameters. In this and the following equations, the notation M[i] is used to represent the i-th row of matrix M. A tensor-level sketch of Equations (9) and (10) is given below. 
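Equations (9) and (10) amount to a per-condition attention over question tokens, followed by a pointer-style distribution over columns. A minimal PyTorch-style sketch (assuming v P has already been brought to dimension d, as implied by the tiling remark; variable names are ours):

```python
import torch

def condition_attention(VQ, vP, W1, W2, W3):
    # Eq. (9): A = W3 tanh(VQ W1 + vP W2); V_sel = softmax(A) VQ.
    # VQ: (n, d) encoded question tokens; vP: (d,) SPC vector, broadcast
    # ("tiled") over the n rows; W1, W2: (d, d); W3: (Ns, d).
    A = W3 @ torch.tanh(VQ @ W1 + vP @ W2).T          # (Ns, n)
    return torch.softmax(A, dim=-1) @ VQ              # (Ns, d)

def column_pointer(V_sel, VC, W4, W5, W6):
    # Eq. (10): P[i, j] = softmax_j( W6 tanh(V_sel[i] W4 + VC[j] W5) ).
    # V_sel: (Ns, d) per-condition question vectors; VC: (m, d) columns;
    # W4, W5: (d, d); W6: (d,). Output: (Ns, m) column probabilities.
    h = torch.tanh((V_sel @ W4)[:, None, :] + (VC @ W5)[None, :, :])  # (Ns, m, d)
    return torch.softmax(h @ W6, dim=-1)                              # (Ns, m)
```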
The attended question vectors are then updated with selected column information to get the updated question vector U Q sel col1 \u2208 R N s \u00d7d :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "U C sel col1 [i] = P sel col1 [i]V C U Q sel col1 [i] = W 7 hc(V Q sel [i], U C sel col1 [i])", "eq_num": "(11)" } ], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "where W 7 is a trainable variable, and hc(x, y) is defined in Equation (5). The probabilities for $DIST 1 , $AGG 1 , $ARI, and $AGG are calculated by applying a fully connected layer on U Q sel col1 [i] . Equation (10) is reused to calculate", "cite_spans": [ { "start": 199, "end": 202, "text": "[i]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "P sel col2 , with V Q sel [i] replaced by U Q sel col1 [i]; then U Q sel col2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "is retrieved in the same way as Equation (11), and the probabilities of $DIST 2 and $AGG 2 are calculated in the same way as $DIST 1 and $AGG 1 . Finally, the $DIST slot, DISTINCT marker for overall SELECT clause, is calculated by applying a fully connected layer on v S .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "Once all the slots are filled for N s conditions, the decoder retrieves the first n s conditions to predict the SELECT clause. This is possible because the CNN with Dense Connection used for question encoding (Yoon, Lee, and Lee 2018) captures relative position information. Due to the SQL consistency protocol of the Spider benchmark (Yu et al. 2018c) , we expect that the conditions are ordered in the same way as they are presented in Q. For the data sets without such consistency protocol, the proposed slot-filling method could easily be changed to an LSTM-based model, as shown in Xu, Liu, and Song (2017) .", "cite_spans": [ { "start": 209, "end": 234, "text": "(Yoon, Lee, and Lee 2018)", "ref_id": "BIBREF36" }, { "start": 335, "end": 352, "text": "(Yu et al. 2018c)", "ref_id": "BIBREF39" }, { "start": 587, "end": 611, "text": "Xu, Liu, and Song (2017)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "ORDERBY Clause. The same network structure as a SELECT clause is applied. The only difference is the prediction of $ORD slot; this could be done by applying a fully connected layer on U Q ob col1 , which is the correspondence of U Q sel col1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "GROUPBY Clause. The same network structure as a SELECT clause is applied. For the GROUPBY case, retrieving only the values of P gb col1 is enough to fill the necessary slots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "LIMIT Clause. A question does not explicitly contain the $NUM slot value for LIMIT clause in many cases, if the question is for the top-1 result (For example: \"Show the name and the release year of the song by the youngest singer\"). Thus, the LIMIT decoder first determines if the given Q requests for the top-1 result. 
If so, the decoder sets the $NUM value to 1; otherwise, it tries to find out the specific token for $NUM among the tokens of Q using pointer network (Vinyals, Fortunato, and Jaitly 2015) . LIMIT top-1 probability P limit top1 is retrieved by applying a fully-connected layer on v S . P Q limit num [i], the probability of i-th question token for $NUM slot value, is calculated as:", "cite_spans": [ { "start": 469, "end": 506, "text": "(Vinyals, Fortunato, and Jaitly 2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "A Q limit num = W 3 tanh(V Q W 1 + v P W 2 ) P Q limit num [i] = softmax(A Q limit num ) i (12) W 1 , W 2 \u2208 R d\u00d7d , W 3 \u2208 R 1\u00d7d are trainable parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "WHERE Clause. The same network structure as a SELECT clause is applied to get the attended question vectors V Q wh \u2208 R N w \u00d7d , and probabilities for $COL 1 , $COL 2 , $DIST 1 , $DIST 2 , $AGG 1 , $AGG 2 , and $ARI. A fully connected layer is applied on U Q wh col1 to get the probabilities for $CONJ, $NOT, and $COND.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "A fully connected layer is applied on U Q wh col1 and U Q wh col2 to determine if the condition value for each column is another nested SELECT statement or not. If the value is determined as a nested SELECT statement, the corresponding SPC is generated, and the SELECT statement for the SPC is predicted recursively. If not, the pointer network is used to get the start and end position of the value span from question tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "HAVING Clause. The same network structure as a WHERE clause is applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "In this section, we introduce two input manipulation methods to improve the performance of our proposed system further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two Input Manipulation Methods", "sec_num": "4." }, { "text": "In a FROM clause, some tables (and their columns) are not mentioned explicitly in the given question, but they are still required to make a \"link\" between other tables to form a proper SQL query. One such example is given in Table 3 . The table writes is not explicitly mentioned in Q, but it is used in the JOIN clause to link between tables author and paper. Those \"link\" tables are necessary to create the proper SELECT statement, but they work as noise in aligning question tokens and tables because the link tables do not have the corresponding tokens in Q.", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 232, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "JOIN Table Filtering", "sec_num": "4.1" }, { "text": "To reduce the training noises, only the non-link tables are considered as the $TBL slot values of FROM clause during training. A table of FROM clause is considered a link table if (1) all the $AGG values of the SELECT clause are none, and (2) none of its columns appears in other clauses' slots. During the inference, the link tables could easily be recovered by using the foreign key relations of the extracted tables. 
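This recovery can be pictured as a shortest-path search over foreign-key links, as the following paragraph details. A simplified sketch, assuming foreign keys are given as undirected table pairs (helper names are illustrative):

```python
from collections import deque

def recover_link_tables(predicted_tables, foreign_key_pairs):
    """Return the predicted tables plus any intermediate 'link' tables on the
    shortest foreign-key path connecting them (a simplified sketch)."""
    # Build an undirected adjacency map over tables from foreign-key pairs.
    adj = {}
    for a, b in foreign_key_pairs:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    tables = set(predicted_tables)
    anchor = predicted_tables[0]
    for target in predicted_tables[1:]:
        # BFS from the anchor to the target; keep every table on the path found.
        queue, parent = deque([anchor]), {anchor: None}
        while queue:
            cur = queue.popleft()
            if cur == target:
                break
            for nxt in adj.get(cur, ()):
                if nxt not in parent:
                    parent[nxt] = cur
                    queue.append(nxt)
        node = target
        while node is not None and node in parent:
            tables.add(node)
            node = parent[node]
    return tables
```

For the example of Table 3, recover_link_tables(["author", "paper"], [("author", "writes"), ("writes", "paper")]) would add the link table writes back.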
More precisely, the system uses a heuristic of finding the shortest joinable foreign key relation \"path\" between the extracted tables. Once a path is found, the tables on the path are added as the $TBLs of the FROM clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "JOIN Table Filtering", "sec_num": "4.1" }, { "text": "The goal of this method is to distinguish the link tables from non-link tables during the training phase. SyntaxSQLNet (Yu et al. 2018b ) first predicts all the columns, and then chooses FROM tables based on the classified columns. As noted in Yu et al. (2018b) , the approach cannot handle count queries with additional JOINs, for example, \"SELECT T2.name, count(*) FROM singer_in_concert AS T1 JOIN singer AS T2 ON T1.singer_id = T2.singer_id GROUP BY T2.singer_id.\" Its corresponding user question is \"List singer names and number of concerts for each singer.\" GNN (Bogin, Gardner, and Berant 2019) handles the problem by turning the database schema into a graph; foreign key links between nodes help the system to distinguish between the two types of tables. IRNet and RAT-SQL (Wang et al. 2020) have a separate schema-linking module to explicitly link columns with question tokens.", "cite_spans": [ { "start": 119, "end": 135, "text": "(Yu et al. 2018b", "ref_id": "BIBREF38" }, { "start": 244, "end": 261, "text": "Yu et al. (2018b)", "ref_id": "BIBREF38" }, { "start": 568, "end": 601, "text": "(Bogin, Gardner, and Berant 2019)", "ref_id": "BIBREF3" }, { "start": 777, "end": 795, "text": "(Wang et al. 2020)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "JOIN Table Filtering", "sec_num": "4.1" }, { "text": "We supplement the column names with their table names to distinguish between columns with the same name but belonging to different tables and representing different meanings. Table 4 shows SCN examples; the three columns with the same name id are distinguished by their SCNs. We can also expect the SCNs to align better with question tokens, since an SCN contains more information about what the column actually refers to.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 182, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Supplemented Column Names", "sec_num": "4.2" }, { "text": "Table 4: Examples of supplemented column names (SCNs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 4", "sec_num": null }, { "text": "The method aims to integrate tables with their columns. To achieve this goal, IRNet and RAT-SQL (Wang et al. 2020) separately encode tables and columns and integrate the two embeddings on the network; GNN (Bogin, Gardner, and Berant 2019) represents the database schema as a graph, generating links between tables and their columns, and directly processes the graph using a graph neural network; EditSQL (Zhang et al. 2019) concatenates each table name with its column names, using a special character.", "cite_spans": [ { "start": 95, "end": 113, "text": "(Wang et al. 2020)", "ref_id": "BIBREF32" }, { "start": 259, "end": 292, "text": "(Bogin, Gardner, and Berant 2019)", "ref_id": "BIBREF3" }, { "start": 452, "end": 471, "text": "(Zhang et al. 2019)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Supplemented Column Names", "sec_num": "4.2" }, { "text": "Most recent works on the Text-to-SQL task used the encoder-decoder model. Those works could be classified into three main categories, based on their decoder outputs. Sequence-to-Sequence translation approaches generate SQL query tokens. 
Dong and Lapata (2016) introduced the hierarchical tree decoder to prevent the model from generating grammatically incorrect semantic representations of the input sentences. Zhong, Xiong, and Socher (2017) used policy-based reinforcement learning to deal with the unordered nature of WHERE conditions. Grammar-based approaches generate a sequence of grammar rules and apply the generated rules sequentially to obtain the resultant SQL query. IRNet defined a structural representation of an SQL query and a set of parse actions to handle the WikiSQL data set. IRNet defined the SemQL query, which is an abstraction of a SQL query in tree form. They also proposed a set of grammar rules to synthesize SemQL queries; synthesizing a SQL query from a SemQL tree structure is straightforward. RAT-SQL (Wang et al. 2020) improved the work of by proposing a relation-aware transformer to effectively encode relations between columns, tables, and question tokens. GNN (Bogin, Gardner, and Berant 2019) focused on the DB constraints selection problem during the grammar decoding process; they applied global reasoning between question words and database columns/tables. SLSQL (Lei et al. 2020) manually annotated link information between user questions and database columns to show the role of schema linking.", "cite_spans": [ { "start": 237, "end": 259, "text": "Dong and Lapata (2016)", "ref_id": "BIBREF8" }, { "start": 1032, "end": 1050, "text": "(Wang et al. 2020)", "ref_id": "BIBREF32" }, { "start": 1196, "end": 1229, "text": "(Bogin, Gardner, and Berant 2019)", "ref_id": "BIBREF3" }, { "start": 1403, "end": 1420, "text": "(Lei et al. 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "Sketch-based slot-filling approaches use a sketch, which aligns with the syntactic structure of a SQL query. A sketch should be defined generic enough to handle all SQL queries of interest. Once a sketch is defined, one can simply fill the slots of the sketch to obtain the resultant SQL query. SQLNet (Xu, Liu, and Song 2017) first introduced a sketch to handle the WikiSQL data set, along with attention-based slot-filling algorithms. The proposed sketch for WikiSQL is shown in Table 5 . TypeSQL (Yu et al. 2018a) added category information such as named entity to better encode the input question. SQLova (Hwang et al. 2019) introduced BERT (Devlin et al. 2019) to encode the input question and database, and the encoded vectors were used to fill the slots of the sketch.", "cite_spans": [ { "start": 302, "end": 326, "text": "(Xu, Liu, and Song 2017)", "ref_id": "BIBREF34" }, { "start": 499, "end": 516, "text": "(Yu et al. 2018a)", "ref_id": "BIBREF37" }, { "start": 609, "end": 628, "text": "(Hwang et al. 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 481, "end": 488, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "Sketch for WikiSQL data set. $COL represents a column, and $AGG is one of {none, max, min, count, sum, avg}. $COND is one of the conditional operators { =, >, < }. $VAL is the value for WHERE condition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 5", "sec_num": null }, { "text": "SELECT $AGG $COL WHERE ($COL $COND $VAL)* Table 6 The sketch for a SELECT statement proposed by RCSQL (Lee 2019) . $COL represents a column. 
$AGG is one of {none, max, min, count, sum, avg}, and $COND is one of the conditional operators {between, =, >, <, >=, <=, !=, in, like, is, exists}. $ORD is a binary value for keywords ASC/DESC, and $CONJ is one of the conjunctions {AND, OR}. $VAL is the value for a WHERE/HAVING condition; $SEL represents the slot for another SELECT statement.", "cite_spans": [ { "start": 102, "end": 112, "text": "(Lee 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 6", "ref_id": null }, { "start": 670, "end": 677, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "SELECT: ( $AGG $COL )+ | ORDERBY: ( $AGG $COL )+ $ORD | GROUPBY: ( $COL )* | LIMIT: $NUM | WHERE: ( $CONJ $COL $COND $VAL | $SEL )* | HAVING: ( $CONJ $AGG $COL $COND $VAL | $SEL )* | INTERSECT / UNION / EXCEPT: $SEL", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "X-SQL (He et al. 2018) aligned the contextual information with column tokens to better summarize each column. The sketch-based approaches for WikiSQL described here all used the sketch shown in Table 5 , which is enough for the WikiSQL queries but oversimplified for general SQL queries, for example, those contained in the Spider benchmark.", "cite_spans": [ { "start": 6, "end": 22, "text": "(He et al. 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 194, "end": 201, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "The sketch-based approach on the more complex Spider benchmark showed relatively low performance compared with the grammar-based approaches so far. There are two major reasons: (1) It is hard to define a sketch for Spider queries, since the allowed syntax of the Spider SQL queries is far more complicated than that of the WikiSQL queries. (2) Because the sketch-based approaches fill values for the predefined slots, the approaches have difficulties in predicting the nested queries. RCSQL (Lee 2019) tried to apply the sketch-based approach on the Spider data set; Table 6 shows the sketch proposed by Lee (2019) . To predict a nested SELECT statement, RCSQL takes a temporarily generated SQL query, with a special token [SUB QUERY] in the corresponding location, as its input. For example, for Case 1 of Table 1 , RCSQL gets the temporarily generated query string \"SELECT name FROM scientists EXCEPT [SUB QUERY]\" as its input to generate the nested statement S 2 , along with the user question and database schema.", "cite_spans": [ { "start": 488, "end": 498, "text": "(Lee 2019)", "ref_id": "BIBREF16" }, { "start": 601, "end": 611, "text": "Lee (2019)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 564, "end": 571, "text": "Table 6", "ref_id": null }, { "start": 799, "end": 806, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "Our proposed approach has three improvements compared to RCSQL. First, our sketch in Table 2 is more \"complete\" in terms of expressiveness. For example, because the RCSQL sketch lacks $ARI elements, RCSQL cannot generate queries with arithmetic operations between columns, for example, \"SELECT T1.name FROM accounts AS T1 JOIN checking AS T2 ON T1.custid = T2.custid JOIN savings AS T3 ON T1.custid = T3.custid ORDER BY T2.balance + T3.balance LIMIT 1.\" Second, while our proposed approach directly predicts the tables in the FROM clause, RCSQL heuristically predicts the tables using the extracted columns for other clauses. 
The RCSQL approach cannot generate count queries with additional table JOINs, for example, \"SELECT count(*) FROM institution AS T1 JOIN protein AS T2 ON T1.institution_id = T2.institution_id WHERE T1.founded > 1880 OR T1.type = 'Private'.\" Third, RCSQL fails to generate the nested SELECT statements when two or more statements are at the same depth, for example, S 2 and S 3 in Case 2 of Table 1 . Because RCSQL generates one SELECT statement for an input, it expects only one special token for a query.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 92, "text": "Table 2", "ref_id": null }, { "start": 1023, "end": 1030, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "In this article, we propose a more complete sketch, compared with the WikiSQL (Table 5 ) and RCSQL (Table 6 ) sketches, for complex SELECT statements, along with the Statement Position Code (SPC) to handle the nested queries more efficiently. Although our proposed sketch is tuned using the Spider data set, the sketch is based on the generic SQL syntax and could be applied to other SQL generation tasks.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "(Table 5", "ref_id": null }, { "start": 97, "end": 105, "text": "(Table 6", "ref_id": null } ], "eq_spans": [], "section": "CLAUSE SKETCH", "sec_num": null }, { "text": "Implementation. The proposed RYANSQL is implemented with TensorFlow (Abadi et al. 2015) . Layer normalization (Ba, Kiros, and Hinton 2016) and dropout (Srivastava et al. 2014) are applied between layers, with a dropout rate of 0.1. Exponential decay with decay rate 0.8 is applied to the learning rate every three epochs. On each epoch, the trained classifier is evaluated against the validation data set, and the training stops when the exact match score for the validation data set has not improved for 20 consecutive training epochs. The minibatch size is set to 16; the learning rate is set to 4e \u22124 . The loss is defined as the sum of all classification losses from the slot-filling decoder. The trained network has 22M parameters. For pretrained language model-based input encoding, we downloaded the publicly available pretrained BERT model, BERT-Large, Uncased (Whole Word Masking), and fine-tuned the model during training. The learning rate is set to 1e \u22125 , and the minibatch size is set to 4. The model with BERT has 445M parameters.", "cite_spans": [ { "start": 68, "end": 87, "text": "(Abadi et al. 2015)", "ref_id": "BIBREF0" }, { "start": 110, "end": 138, "text": "(Ba, Kiros, and Hinton 2016)", "ref_id": "BIBREF1" }, { "start": 151, "end": 175, "text": "(Srivastava et al. 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "Data sets. The Spider data set (Yu et al. 2018c ) is mainly used to evaluate our proposed system. We use the same data split as Yu et al. (2018c) ; 206 databases are split into 146 train, 20 dev, and 40 test. All questions for the same database are in the same split; there are 8,659 questions for train, 1,034 for dev, and 2,147 for test. The test set of Spider is not publicly available, so our models were submitted to the data owners for testing. For evaluation, we used exact matching accuracy, with the same definition as in Yu et al. (2018c) .", "cite_spans": [ { "start": 31, "end": 47, "text": "(Yu et al. 2018c", "ref_id": "BIBREF39" }, { "start": 128, "end": 145, "text": "Yu et al. (2018c)", "ref_id": "BIBREF39" }, { "start": 531, "end": 548, "text": "Yu et al. (2018c)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" },
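The training schedule from the implementation paragraph above can be restated compactly; a sketch only, with the exact TensorFlow wiring omitted:

```python
def learning_rate(epoch, base_lr=4e-4, decay=0.8, every=3):
    # Exponential decay with rate 0.8, applied to the learning rate every three epochs.
    return base_lr * (decay ** (epoch // every))

def should_stop(dev_exact_match_history, patience=20):
    # Stop when the dev exact match score has not improved for 20 consecutive epochs.
    if not dev_exact_match_history:
        return False
    best_epoch = max(range(len(dev_exact_match_history)),
                     key=dev_exact_match_history.__getitem__)
    return len(dev_exact_match_history) - 1 - best_epoch >= patience
```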
{ "text": "Table 7 shows comparisons of the proposed system with several state-of-the-art systems; evaluation scores for the dev and test data sets are retrieved from the Spider leaderboard.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "Table 7: Evaluation results of the proposed systems and other state-of-the-art systems (exact matching accuracy, Dev / Test). Without pretrained language models: GrammarSQL (Lin et al. 2019) 34.8% / 33.8%; EditSQL (Zhang et al. 2019) 36.4% / 32.9%; IRNet 53.3% / 46.7%; RATSQL v2 (Wang et al. 2020) 62.7% / 57.2%.", "cite_spans": [ { "start": 173, "end": 190, "text": "(Lin et al. 2019)", "ref_id": "BIBREF18" }, { "start": 214, "end": 233, "text": "(Zhang et al. 2019)", "ref_id": "BIBREF41" }, { "start": 280, "end": 298, "text": "(Wang et al. 2020)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Table 7", "sec_num": null }, { "text": "The proposed system is compared with the grammar-based systems GrammarSQL (Lin et al. 2019) , Global-GNN (Bogin, Gardner, and Berant 2019), IRNet , and RATSQL (Wang et al. 2020) . Also, we compared the proposed system with RCSQL (Lee 2019), which so far showed the best performance on the Spider data set using a sketch-based slot-filling approach. Evaluation results are presented in three different groups, based on the use of pretrained language models and database content. Although the use of database content (i.e., cell values) could greatly improve the performance of a Text-to-SQL system (as shown in Wang et al. 2018; Hwang et al. 2019; He et al. 2018 ), a Text-to-SQL system could rarely have access to database content in real-world applications due to various reasons such as personal privacy, business secrets, or legal issues. Because the use of database content improves the system performance but decreases the system availability, we put models using database content in a separate group.", "cite_spans": [ { "start": 74, "end": 91, "text": "(Lin et al. 2019)", "ref_id": "BIBREF18" }, { "start": 159, "end": 177, "text": "(Wang et al. 2020)", "ref_id": "BIBREF32" }, { "start": 610, "end": 627, "text": "Wang et al. 2018;", "ref_id": "BIBREF33" }, { "start": 628, "end": 646, "text": "Hwang et al. 2019;", "ref_id": "BIBREF15" }, { "start": 647, "end": 661, "text": "He et al. 2018", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "For RYANSQL v2, we trained two networks, called the table network and the slot network, with the same network architectures as the proposed RYANSQL. The table network is trained to maximize the $TBL classification accuracy on the dev set; the slot network is trained to maximize the exact match accuracy on the dev set as RYANSQL does, but the $TBL classification results are fetched from the table network (which is fixed during the training of the slot network). 
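The division of labor between the two networks at inference time can be stated as a short sketch (method names are illustrative; the sentence after the code restates the same flow in prose):

```python
def ryansql_v2_infer(question, db, spc, table_network, slot_network):
    # Stage 1: the table network predicts the FROM-clause tables ($TBLs).
    tables = table_network.predict_tables(question, db, spc)
    # Stage 2: the slot network fills the remaining sketch slots, restricted
    # to the columns of the tables chosen in stage 1.
    return slot_network.fill_slots(question, db, spc, tables=tables)
```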
During inference, the model first classifies $TBLs using the table network, and then fills the other slots using the slot network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RYANSQL (Ours", "sec_num": null }, { "text": "Exact matching accuracy of the proposed system and other state-of-the-art systems for each hardness level. Med. means medium hardness. As can be observed from the table, the proposed system RYANSQL improves over the previous sketch-based slot-filling system RCSQL by a large margin of 15% on the dev set. Note that RCSQL fine-tunes another well-known pretrained language model, ELMo (Peters et al. 2018) . With the use of BERT, among the systems without database content, the proposed systems RYANSQL + BERT and RYANSQL v2 + BERT outperform the previous state of the art by 2.5% and 4.9%, respectively, on the hidden test data set, in terms of exact matching accuracy. The proposed system still shows competitive results compared to the systems using database content; RATSQL v3 + BERT outperforms the proposed system by better aligning user questions and database schemas using database content. Table 8 compares the exact matching accuracies of the proposed systems and other state-of-the-art systems for each hardness level. The proposed RYANSQL + BERT outperforms the previous sketch-based approach RCSQL at every hardness level on the dev set. However, the proposed RYANSQL + BERT showed relatively poor performance on the test set at the Extra hardness level, compared to RATSQL v3 + BERT. This suggests that many test examples at the Extra hardness level require database content to be answered correctly, since the two systems showed comparable results on the Extra hardness dev set.", "cite_spans": [ { "start": 381, "end": 401, "text": "(Peters et al. 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 895, "end": 902, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Table 8", "sec_num": null }, { "text": "Next, ablation studies are conducted on the proposed methods to clarify the contribution of each feature. The results are presented in Table 9 . It turns out that the use of SPC greatly improves the performance for the Hard and Extra hardness levels. The result shows that the SPC plays an important role in generating nested SQL queries. The SPC also slightly increases the performance for the Easy and Medium hardness levels. This is because the SPC helps the model to distinguish between each nested SELECT statement, thus removing noise when aligning question tokens and columns. The use of SCN moderately improves the accuracies for all hardness levels. This is expected, since SCN helps a database column to better align with question tokens by supplementing the column name with its table information.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 9", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Approaches", "sec_num": null }, { "text": "The JOIN table filtering (JTF) increases performance only when the other two features, SPC and SCN, are used together. Analysis shows that in some cases, the link tables removed by JTF actually have corresponding question tokens. One example is the SQL query \"SELECT T3.amenity_name FROM dorm AS T1 JOIN has_amenity AS T2 ON T1.dormid = T2.dormid JOIN dorm_amenity AS T3 ON T2.amenid = T3.amenid WHERE T1.dorm_name = 'Smith Hall' \" for the question \"Find the name of amenities Smith Hall dorm have.\" Table has_amenity is considered a link table, but there exist corresponding clues in the question. 
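To make the two input manipulation methods concrete, a self-contained sketch is given below. The data formats, the toy stemmer, and the link-table test are simplifying assumptions made for illustration; the released code may differ in its details.

```python
def stem(word):
    # Deliberately crude stemmer, used only to keep the sketch self-contained.
    for suffix in ("ing", "ies", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def supplemented_column_name(table, column):
    # SCN: prefix the table name unless its stemmed form is already
    # wholly included in the stemmed column name.
    t = "".join(stem(w) for w in table.lower().split("_"))
    c = "".join(stem(w) for w in column.lower().split("_"))
    return column if t in c else f"{table} {column}"  # separator assumed

def filter_link_tables(tbl_list, schema, fk_pairs):
    # JTF (approximation): drop a table from the $TBL list when all of its
    # columns only participate in foreign-key links between the tables.
    fk_cols = {col for pair in fk_pairs for col in pair}
    return [t for t in tbl_list
            if not (len(tbl_list) > 2
                    and all(f"{t}.{c}" in fk_cols for c in schema[t]))]

schema = {"dorm": ["dormid", "dorm_name"],
          "has_amenity": ["dormid", "amenid"],
          "dorm_amenity": ["amenid", "amenity_name"]}
fks = {("has_amenity.dormid", "dorm.dormid"),
       ("has_amenity.amenid", "dorm_amenity.amenid")}

print(supplemented_column_name("dorm", "dorm_name"))  # -> "dorm_name"
print(supplemented_column_name("student", "Age"))     # -> "student Age"
print(filter_link_tables(["dorm", "has_amenity", "dorm_amenity"], schema, fks))
# -> ['dorm', 'dorm_amenity']: has_amenity is treated as a link table
```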
Removing the table from the $TBL list according to the JTF feature would introduce alignment noise during training. However, the evaluation results also show that, by better aligning question and database schema using the other two features, SPC and SCN, the model can recover from the alignment noise introduced by JTF, improving the overall system performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approaches", "sec_num": null }, { "text": "Proposed models are also evaluated without all three features SPC, SCN, and JTF to separately see the contribution of our newly proposed sketch. Without the three features, RYANSQL shows 35.5% accuracy on the dev set, which is a 6.7% improvement compared to another sketch-based slot-filling model, RCSQL; RYANSQL + BERT shows 52.6% dev set accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approaches", "sec_num": null }, { "text": "Effect of the pretrained language model. There exists a huge performance gap of 23.2% on the dev set between RYANSQL and RYANSQL + BERT. SQL component matching F1 scores for the two models are shown in Table 10 to investigate the reason. For the item keyword (which measures the existence of SQL predefined keywords such as SELECT or GROUP BY), the performance gap between the two models is 6.2%, which is relatively small compared to the overall performance gap of 23.2%. Meanwhile, the performance gaps on clause components such as WHERE are similar to or larger than the overall performance gap. These evaluation results suggest that the use of a pretrained language model mainly improves the column classification performance, rather than the classification accuracy of a SQL query's base structure. Next, a series of experiments is conducted to see if additional performance improvements could be gained by applying different pretrained language models. Table 11 shows the evaluation results with four different pretrained language models, namely, BERT-base, BERT-large, RoBERTa (Liu et al. 2019) , and ELECTRA (Clark et al. 2020) . Although RoBERTa and ELECTRA are generally known to perform better than BERT, the evaluation results showed no performance improvement.", "cite_spans": [ { "start": 1074, "end": 1091, "text": "(Liu et al. 2019)", "ref_id": "BIBREF19" }, { "start": 1106, "end": 1125, "text": "(Clark et al. 2020)", "ref_id": null } ], "ref_spans": [ { "start": 198, "end": 206, "text": "Table 10", "ref_id": "TABREF12" }, { "start": 949, "end": 957, "text": "Table 11", "ref_id": null } ], "eq_spans": [], "section": "Approaches", "sec_num": null }, { "text": "Generality of SCN and JTF. The two proposed input manipulation methods SCN and JTF are applied to IRNet to test their generality. We downloaded the source code from the authors' homepage, 3 and trained the model to obtain the dev set accuracy. Evaluation results are shown in Table 12 . The performance improvements due to the two input manipulation methods were negligible; because IRNet has a separate schema-linking preprocessing module, whose purpose is to link columns with question tokens, the roles of SCN and JTF are greatly reduced.", "cite_spans": [], "ref_spans": [ { "start": 267, "end": 275, "text": "Table 12", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Approaches", "sec_num": null }, { "text": "We conducted experiments on the WikiSQL (Zhong, Xiong, and Socher 2017) and CSpider (Min, Shi, and Zhang 2019) data sets to test the generalization capability of the proposed model on new data sets. 
Table 13 shows the comparison between the proposed RYANSQL + BERT and other state-of-the-art systems on the WikiSQL data set; only the systems without execution-guided decoding (EGD) (Wang et al. 2018 ) are compared, since EGD makes use of the database content. As can be observed from the table, the proposed RYANSQL + BERT showed comparable results to other WikiSQL state-of-the-art systems. Next, we evaluated the proposed models on the CSpider data set. CSpider (Min, Shi, and Zhang 2019 ) is a Chinese-translated version of the Spider benchmark. Only the questions of the Spider data set are translated; database table names and column names remain in English. Evaluation on the CSpider data set will show whether the proposed model can be applied to different languages, even when the question language and database schema language are different. To handle this case, we used multilingual BERT, which has the same network architecture as BERT-base but is trained using a multilingual corpus. Table 14 shows the comparisons between the proposed system and other state-of-the-art systems on the leaderboard. Compared to the 51.4% exact matching accuracy of RYANSQL + BERT-base on the Spider data set, the multilingual version shows 10% lower accuracy on the dev set, but still shows comparable results to other state-of-the-art systems designed for the CSpider data set. Our proposed system achieved 34.7% accuracy on the test set, ranking 2nd on the leaderboard.", "cite_spans": [ { "start": 36, "end": 67, "text": "(Zhong, Xiong, and Socher 2017)", "ref_id": "BIBREF42" }, { "start": 80, "end": 106, "text": "(Min, Shi, and Zhang 2019)", "ref_id": "BIBREF21" }, { "start": 252, "end": 269, "text": "(Wang et al. 2018", "ref_id": "BIBREF33" }, { "start": 535, "end": 560, "text": "(Min, Shi, and Zhang 2019", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 195, "end": 203, "text": "Table 13", "ref_id": "TABREF6" }, { "start": 1072, "end": 1080, "text": "Table 14", "ref_id": null } ], "eq_spans": [], "section": "Evaluation on Different Data Sets", "sec_num": "6.3" }, { "text": "Evaluation results on the CSpider data set with other state-of-the-art systems. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 14", "sec_num": null }, { "text": "SyntaxSQLNet (Yu et al. 2018b) 16.4% 13.3%
CN-SQL (Anonymous) 22.9% 18.8%
DG-SQL (Anonymous) 35.5% 26.8%
XL-SQL (Anonymous) 54.9% 47.8%", "cite_spans": [ { "start": 13, "end": 30, "text": "(Yu et al. 2018b)", "ref_id": "BIBREF38" }, { "start": 43, "end": 61, "text": "CN-SQL (Anonymous)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "System Dev Test", "sec_num": null }, { "text": "RYANSQL + Multilingual BERT (Ours) 41.3% 34.7%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Dev Test", "sec_num": null }, { "text": "Exact matching accuracy of the proposed system on the Spider dev set, with different ranges of overlap scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 15", "sec_num": null }, { "text": "0.0 ≤ O(Q, S) ≤ 0.2 32.5% 40
0.2 < O(Q, S) ≤ 0.4 47.3% 74
0.4 < O(Q, S) ≤ 0.6 46.7% 182
0.6 < O(Q, S) ≤ 0.8 67.0% 282
0.8 < O(Q, S) ≤ 1.0 80.5% 456
Total 66.6% 1,034", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overlap Score Accuracy Examples", "sec_num": null }, { "text": "We analyzed 345 failed examples of RYANSQL + BERT on the development set. We were able to categorize 195 of those examples according to failure types. The most common cause of failure is column selection failure; 68 out of 195 cases (34.9%) suffered from this error. 
In many of these cases, the correct column name is not mentioned in the question; for example, for the question \"What is the airport name for airport 'AKO'?\", the decoder chooses column AirportName instead of AirportCode as its WHERE clause condition column. As mentioned in Yavuz et al. (2018) , cell value examples for each column would be helpful for solving this problem.", "cite_spans": [ { "start": 543, "end": 562, "text": "Yavuz et al. (2018)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.4" }, { "text": "The second most frequent error is table number classification error; 49 out of 195 cases (25.2%) belong to this category. The decoder occasionally chooses too many tables for the FROM clause, resulting in unnecessary table JOINs. Similarly, 22 out of 195 cases (11.3%) were due to condition number classification error. These errors could be handled by observing and updating the extracted slot values as a whole; for example, for the user question \"List the maximum weight and type for each type of pet,\" the system generates the SQL query \"SELECT PetType, max(weight), weight FROM Pets GROUP BY PetType.\" If the system could observe the extracted slot values as a whole, it would figure out that extracting weight and max(weight) together for the SELECT clause is unlikely. Our future work will mainly focus on solving this issue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.4" }, { "text": "The remaining 150 errors were hard to classify into one category, and some of them were due to different representations of the same meaning, for example: \"SELECT max(age) FROM Dogs\" vs. \"SELECT age FROM Dogs ORDER BY age DESC LIMIT 1.\" Next, we tried to see if the proposed model could handle user questions whose words are different from database column and table names. We define an overlap score O(Q, S) = size(w(Q) ∩ w(S)) / size(w(S)) between a user question Q and its SQL translation S. In the equation, w(Q) is the set of stemmed words in Q, and w(S) is the set of stemmed words from column names and table names used in S. Intuitively, the score measures how much overlap exists between the column/table names of SQL query S and user question Q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.4" }, { "text": "Overlap scores are calculated for question-SQL query pairs in the Spider dev set. The data set is divided into five categories based on the calculated overlap scores; Table 15 shows exact matching accuracies of the proposed RYANSQL for those categories. As can be seen from the table, the proposed system shows relatively low performance on the examples with low overlap scores. This suggests one limitation of the proposed system: Even with the aid of pretrained language models, the system frequently fails to link question tokens to the database schema when their words are different. 
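For concreteness, the overlap score can be computed as sketched below. The stemmer here is a toy stand-in, since the stemming method is not specified in this section, and the example schema names are taken from the dorm question discussed earlier.

```python
def stem(word):
    # Toy stemmer; a real implementation would use, e.g., a Porter stemmer.
    if word.endswith("ies") and len(word) > 4:
        return word[:-3] + "y"
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def overlap_score(question, schema_names):
    # O(Q, S) = size(w(Q) ∩ w(S)) / size(w(S)), with w(.) the stemmed word sets.
    w_q = {stem(w) for w in question.lower().split()}
    w_s = {stem(w) for part in schema_names
           for w in part.lower().replace("_", " ").split()}
    return len(w_q & w_s) / len(w_s)

# Column/table names used in the SQL translation of the dorm question above.
names = ["dorm", "has_amenity", "dorm_amenity", "amenity_name", "dorm_name"]
q = "Find the name of amenities Smith Hall dorm have"
print(round(overlap_score(q, names), 2))  # -> 0.75
```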
Better alignment methods between question tokens and database schema should be studied in future work to further improve the system performance.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 175, "text": "Table 15", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.4" }, { "text": "Another limitation of the proposed model is that the model does not use foreign keys during encoding; foreign keys are used only for JOIN table filtering. We analyzed the correlation between the number of foreign keys and exact matching accuracies in Figure 3 , to examine the effect of this limitation. The number of foreign keys and the exact matching accuracy show a weak negative correlation, with Pearson correlation coefficient ρ = −0.22. Based on this analysis result, in future work we will try to integrate the foreign keys into the encoding process, for example, by using the relation-aware transformer proposed in Wang et al. (2020) , to improve the proposed model further.", "cite_spans": [ { "start": 622, "end": 640, "text": "Wang et al. (2020)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 251, "end": 259, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.4" }, { "text": "In this article, we proposed a sketch-based slot-filling algorithm for complex, cross-domain Text-to-SQL problems. A detailed sketch for complex SELECT statement prediction is proposed, along with the Statement Position Code to handle nested queries. Two simple but effective input manipulation methods are additionally proposed to enhance the overall system performance further. The system achieved 3rd place among all systems and 1st place among the systems not using database content on the challenging Spider benchmark data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Based on the error analysis results, as a next step of this research we will focus on globally updating the extracted slots by considering the slot prediction values as a whole. The analysis results also show the need to encode the relational structures of the database schema, for example, foreign keys, to improve the performance. We will also work on a method to effectively use the database content instead of using only the database schema, to further improve the system performance for cases where database content is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "https://yale-lily.github.io/spider, as of April 2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For IRNet + BERT, we downloaded the source code from the authors' homepage (https://github.com/microsoft/IRNet) and trained the model, but we were not able to reproduce the authors' reported dev set exact matching accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/microsoft/IRNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We re-trained the IRNet model using the authors' source code with the Spider data set. We were not able to obtain the authors' reported 53.3% accuracy on the dev set, and it turns out that the preprocessed Spider data sets on the authors' homepage and those generated by the source code script are different. 
Since we need to preprocess the data using the source code to apply the input manipulation methods, we presented the dev set accuracy of our re-trained IRNet model, not the one presented in the authors' paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://taolusi.github.io/CSpider-explorer/, as of July 2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Harp", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Manjunath", "middle": [], "last": "Kudlur", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Levenberg", "suffix": "" } ], "year": 2015, "venue": "Oriol Vinyals", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abadi, Mart\u00edn, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vigas, Oriol Vinyals, PeteWarden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Layer normalization. Computing Research Repository", "authors": [ { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" }, { "first": "Jamie", "middle": [ "Ryan" ], "last": "Lei", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Kiros", "suffix": "" }, { "first": "", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.06450" ] }, "num": null, "urls": [], "raw_text": "Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. 
Computing Research Repository, arXiv:1607.06450.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A new database on the structure and development of the financial sector", "authors": [ { "first": "Thorsten", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Demirguc-Kunt", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Levine", "suffix": "" } ], "year": 2000, "venue": "The World Bank Economic Review", "volume": "14", "issue": "3", "pages": "597--605", "other_ids": { "DOI": [ "10.1093/wber/14.3.597" ] }, "num": null, "urls": [], "raw_text": "Beck, Thorsten, Asli Demirguc-Kunt, and Ross Levine. 2000. A new database on the structure and development of the financial sector. The World Bank Economic Review, 14(3):597-605. https://doi.org/10.1093 /wber/14.3.597", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Global reasoning over database structures for text-to-SQL parsing", "authors": [ { "first": "Ben", "middle": [], "last": "Bogin", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bogin, Ben, Matt Gardner, and Jonathan Berant. 2019. Global reasoning over database structures for text-to-SQL parsing. In Proceedings of the 2019", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "3650--3655", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3650-3655. Hong Kong.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "authors": [ { "first": "Le", "middle": [], "last": "", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.10555" ] }, "num": null, "urls": [], "raw_text": "Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. Computing Research Repository, arXiv:2003.10555.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 4171-4186. Minneapolis, MN.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Language to logical form with neural attention", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "33--43", "other_ids": { "DOI": [ "10.18653/v1/P16-1004" ] }, "num": null, "urls": [], "raw_text": "Dong, Li and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 33-43. Berlin. https://doi .org/10.18653/v1/P16-1004", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Coarse-to-fine decoding for neural semantic parsing", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "731--742", "other_ids": { "DOI": [ "10.18653/v1/P18-1068" ] }, "num": null, "urls": [], "raw_text": "Dong, Li and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 731-742. Melbourne. https://doi.org/10.18653 /v1/P18-1068", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Towards complex text-to-SQL in cross-domain database with intermediate representation", "authors": [ { "first": "Jiaqi", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Zecheng", "middle": [], "last": "Zhan", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Jian-Guang", "middle": [], "last": "Lou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dongmei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.082057" ] }, "num": null, "urls": [], "raw_text": "Guo, Jiaqi, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. Computing Research Repository, arXiv:1905 .082057.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Content enhanced BERT-based text-to-SQL generation", "authors": [ { "first": "Tong", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Huilin", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.07179" ] }, "num": null, "urls": [], "raw_text": "Guo, Tong and Huilin Gao. 2019. Content enhanced BERT-based text-to-SQL generation. Computing Research Repository, arXiv:1910.07179.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Reinforce schema representation with context. 
Computing Research Repository", "authors": [ { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Kaushik", "middle": [], "last": "Chakrabarti", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.073837" ] }, "num": null, "urls": [], "raw_text": "He, Pengcheng, Yi Mao, Kaushik Chakrabarti, and Weizhu Chen. 2018. X Reinforce schema representation with context. Computing Research Repository, arXiv:1808.073837.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Can electronic medical record systems transform health care? Potential health benefits, savings, and costs", "authors": [ { "first": "Richard", "middle": [], "last": "Hillestad", "suffix": "" }, { "first": "James", "middle": [], "last": "Bigelow", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Bower", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Girosi", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Meili", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Scoville", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2005, "venue": "Health Affairs", "volume": "24", "issue": "5", "pages": "", "other_ids": { "DOI": [ "10.1377/hlthaff.24.5.1103" ] }, "num": null, "urls": [], "raw_text": "Hillestad, Richard, James Bigelow, Anthony Bower, Federico Girosi, Robin Meili, Richard Scoville, and Roger Taylor. 2005. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Affairs. 24(5):1103-1117. https://doi.org /10.1377/hlthaff.24.5.1103, PubMed: 16162551", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Reinforced mnemonic reader for machine reading comprehension", "authors": [ { "first": "Minghao", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Yuxing", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "4099--4106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hu, Minghao, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4099-4106. Stockholm.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A comprehensive exploration on WikiSQL with table-aware word contextualization. Computing Research Repository", "authors": [ { "first": "Wonseok", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Jinyeong", "middle": [], "last": "Yim", "suffix": "" }, { "first": "Seunghyun", "middle": [], "last": "Park", "suffix": "" }, { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.01069" ] }, "num": null, "urls": [], "raw_text": "Hwang, Wonseok, Jinyeong Yim, Seunghyun Park, and Minjoon Seo. 2019. 
A comprehensive exploration on WikiSQL with table-aware word contextualization. Computing Research Repository, arXiv:1902.01069.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Clause-wise and recursive decoding for complex and cross-domain text-to-SQL generation", "authors": [ { "first": "Dongjun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6047--6053", "other_ids": { "DOI": [ "10.18653/v1/D19-1624" ] }, "num": null, "urls": [], "raw_text": "Lee, Dongjun. 2019. Clause-wise and recursive decoding for complex and cross-domain text-to-SQL generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6047-6053, Hong Kong. https://doi.org/10.18653 /v1/D19-1624", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Re-examining the role of schema linking in text-to-SQL", "authors": [ { "first": "Wenqiang", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Weixin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhixin", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Tian", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "6943--6954", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei, Wenqiang, Weixin Wang, Zhixin Ma, Tian Gan, Wei Lu, Min-Yen Kan, and Tat-Seng Chua. 2020. Re-examining the role of schema linking in text-to-SQL. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 6943-6954. Online.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Grammar-based neural text-to-SQL generation", "authors": [ { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Bogin", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2019, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.13326" ] }, "num": null, "urls": [], "raw_text": "Lin, Kevin, Ben Bogin, Mark Neumann, Jonathan Berant, and Matt Gardner. 2019. Grammar-based neural text-to-SQL generation. Computing Research Repository, arXiv:1905.13326.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "RoBERTa: A robustly optimized BERT pretraining approach. 
Computing Research Repository", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. Computing Research Repository. arXiv:1907.11692.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hybrid ranking network for text-to-SQL, Microsoft Dynamics 365 AI", "authors": [ { "first": "Qin", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Kaushik", "middle": [], "last": "Chakrabarti", "suffix": "" }, { "first": "Shobhit", "middle": [], "last": "Hathi", "suffix": "" }, { "first": "Souvik", "middle": [], "last": "Kundu", "suffix": "" }, { "first": "Jianwen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lyu, Qin, Kaushik Chakrabarti, Shobhit Hathi, Souvik Kundu, Jianwen Zhang, and Zheng Chen. 2020. Hybrid ranking network for text-to-SQL, Microsoft Dynamics 365 AI.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A pilot study for Chinese SQL semantic parsing", "authors": [ { "first": "Qingkai", "middle": [], "last": "Min", "suffix": "" }, { "first": "Yuefeng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3652--3658", "other_ids": { "DOI": [ "10.18653/v1/D19-1377" ] }, "num": null, "urls": [], "raw_text": "Min, Qingkai, Yuefeng Shi, and Yue Zhang. 2019. A pilot study for Chinese SQL semantic parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3652-3658. Hong Kong. 
https://doi.org/10.18653/v1/D19-1377", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Natural language inference by tree-based convolution and heuristic matching", "authors": [ { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Men", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "130--136", "other_ids": { "DOI": [ "10.18653/v1/P16-2022" ] }, "num": null, "urls": [], "raw_text": "Mou, Lili, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 130-136. Berlin. https://doi.org /10.18653/v1/P16-2022", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Application of data mining techniques in customer relationship management: A literature review and classification", "authors": [ { "first": "Eric", "middle": [ "Wt" ], "last": "Ngai", "suffix": "" }, { "first": "Li", "middle": [], "last": "Xiu", "suffix": "" }, { "first": "Dorothy", "middle": [ "C K" ], "last": "Chau", "suffix": "" } ], "year": 2009, "venue": "Expert Systems with Applications", "volume": "36", "issue": "2", "pages": "2592--2602", "other_ids": { "DOI": [ "10.1016/j.eswa.2008.02.021" ] }, "num": null, "urls": [], "raw_text": "Ngai, Eric WT, Li Xiu, and Dorothy C. K. Chau. 2009. Application of data mining techniques in customer relationship management: A literature review and classification. Expert Systems with Applications, 36(2):2592-2602. https:// doi.org/10.1016/j.eswa.2008.02.021", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha. 
https://doi.org/10.3115/v1 /D14-1162", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Peters, Matthew, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 2227-2237. Minneapolis, MN. https://doi.org/10 .18653/v1/N18-1202", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Evaluation of spoken language systems: The ATIS domain", "authors": [ { "first": "P", "middle": [ "J" ], "last": "Price", "suffix": "" } ], "year": 1990, "venue": "HLT '90: Proceedings of the Workshop on Speech and Natural Language", "volume": "", "issue": "", "pages": "91--95", "other_ids": { "DOI": [ "10.3115/116580.116612" ] }, "num": null, "urls": [], "raw_text": "Price, P. J. 1990. Evaluation of spoken language systems: The ATIS domain. In HLT '90: Proceedings of the Workshop on Speech and Natural Language, pages 91-95. Hidden Valley, PA. https://doi.org/10 .3115/116580.116612", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "IncSQL: Training incremental text-to-SQL parsers with non-deterministic oracles", "authors": [ { "first": "Tianze", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Kedar", "middle": [], "last": "Tatwawadi", "suffix": "" }, { "first": "Kaushik", "middle": [], "last": "Chakrabarti", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Oleksandr", "middle": [], "last": "Polozov", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.05054" ] }, "num": null, "urls": [], "raw_text": "Shi, Tianze, Kedar Tatwawadi, Kaushik Chakrabarti, Yi Mao, Oleksandr Polozov, and Weizhu Chen. 2018. IncSQL: Training incremental text-to-SQL parsers with non-deterministic oracles. Computing Research Repository. 
arXiv:1809.05054.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Training very deep networks", "authors": [ { "first": "Rupesh", "middle": [ "K" ], "last": "Srivastava", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Greff", "suffix": "" }, { "first": "Jrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2377--2385", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srivastava, Rupesh K., Klaus Greff, and Jrgen Schmidhuber. 2015. Training very deep networks. Advances in Neural Information Processing Systems, pages 2377-2385.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30:5998-6008.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Pointer networks", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Meire", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2692--2700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. 
Advances in Neural Information Processing Systems, pages 2692-2700.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers", "authors": [ { "first": "Bailin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Oleksandr", "middle": [], "last": "Polozov", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7567--7578", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.677" ] }, "num": null, "urls": [], "raw_text": "Wang, Bailin, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567-7578. Online. https://doi .org/10.18653/v1/2020.acl-main.677", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Robust text-to-SQL generation with execution-guided decoding", "authors": [ { "first": "Chenglong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kedar", "middle": [], "last": "Tatwawadi", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" }, { "first": "Po-Sen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Oleksandr", "middle": [], "last": "Polozov", "suffix": "" }, { "first": "Rishabh", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2018, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.03100" ] }, "num": null, "urls": [], "raw_text": "Wang, Chenglong, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Oleksandr Polozov, and Rishabh Singh. 2018. Robust text-to-SQL generation with execution-guided decoding. Computing Research Repository, arXiv:1807.03100.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "SQLnet: Generating structured queries from natural language without reinforcement learning. Computing Research Repository", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dawn", "middle": [], "last": "Song", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.04436" ] }, "num": null, "urls": [], "raw_text": "Xu, Xiaojun, Chang Liu, and Dawn Song. 2017. SQLnet: Generating structured queries from natural language without reinforcement learning. 
Computing Research Repository, arXiv:1711.04436.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "What it takes to achieve 100% condition accuracy on WikiSQL", "authors": [ { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Izzeddin", "middle": [], "last": "Gur", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xifeng", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1702--1711", "other_ids": { "DOI": [ "10.18653/v1/D18-1197" ] }, "num": null, "urls": [], "raw_text": "Yavuz, Semih, Izzeddin Gur, Yu Su, and Xifeng Yan. 2018. What it takes to achieve 100% condition accuracy on WikiSQL. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1702-1711. Brussels. https://doi.org/10.18653/v1/D18-1197", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Dynamic self-attention: Computing attention over words dynamically for sentence embedding", "authors": [ { "first": "Deunsol", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Dongbok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sangkeun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2018, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.073837" ] }, "num": null, "urls": [], "raw_text": "Yoon, Deunsol, Dongbok Lee, and Sangkeun Lee. 2018. Dynamic self-attention: Computing attention over words dynamically for sentence embedding. Computing Research Repository, arXiv:1808.073837.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "TypeSQL: Knowledge-based type-aware neural text-to-SQL generation", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zilin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "588--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, Tao, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018a. TypeSQL: Knowledge-based type-aware neural text-to-SQL generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588-594. 
New Orleans, LA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "SyntaxSQLnet: Syntax tree networks for complex and cross-domain text-to-SQL task", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1653--1663", "other_ids": { "DOI": [ "10.18653/v1/D18-1193" ] }, "num": null, "urls": [], "raw_text": "Yu, Tao, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li, and Dragomir Radev. 2018b. SyntaxSQLnet: Syntax tree networks for complex and cross-domain text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1653-1663. Brussels. https://doi.org/10.18653/v1/D18-1193", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Spider: A large-scale human-labeled data set for complex and cross-domain semantic parsing and text-to-SQL task", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "James", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qingning", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Shanelle", "middle": [], "last": "Roman", "suffix": "" }, { "first": "Zilin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3911--3921", "other_ids": { "DOI": [ "10.18653/v1/D18-1425" ] }, "num": null, "urls": [], "raw_text": "Yu, Tao, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018c. Spider: A large-scale human-labeled data set for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921. Brussels. https://doi.org/10.18653/v1/D18-1425", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Learning to parse database queries using inductive logic programming", "authors": [ { "first": "John", "middle": [], "last": "Zelle", "suffix": "" }, { "first": "Raymond Joseph", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Thirteenth National Conference On Artificial Intelligence", "volume": "2", "issue": "", "pages": "1050--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zelle, John Marvin and Raymond Joseph Mooney. 1996. 
Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference On Artificial Intelligence, volume 2, pages 1050-1055. Portland, OR.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Editing-based SQL query generation for cross-domain context-dependent questions", "authors": [ { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Heyang", "middle": [], "last": "Er", "suffix": "" }, { "first": "Sungrok", "middle": [], "last": "Shim", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Victoria Lin", "suffix": "" }, { "first": "Tianze", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5341--5352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Rui, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019. Editing-based SQL query generation for cross-domain context-dependent questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5341-5352. Hong Kong.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Seq2SQL: Generating structured queries from natural language using reinforcement learning. Computing Research Repository", "authors": [ { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.00103" ] }, "num": null, "urls": [], "raw_text": "Zhong, Victor, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. Computing Research Repository, arXiv:1709.00103.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Age FROM Student WHERE StuID IN ( SELECT StuID FROM Sportsinfo WHERE SportName = \"Football\" INTERSECT SELECT StuID FROM Sportsinfo WHERE SportName = \"Lacrosse\")", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Figure 3 Relation between the number of foreign key relations in the database, and exact matching accuracy of the queries on dev set.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "text": "", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF1": { "text": "Text-to-SQL example from the WikiSQL data set.", "type_str": "table", "html": null, "num": null, "content": "
(a) WikiSQL example table. Recoverable column headers (all of type text): .../territory, Text/background colour, Format, Current slogan, Current series, Notes.
" }, "TABREF2": { "text": "", "type_str": "table", "html": null, "num": null, "content": "
(b) Spider example database schema:
TABLE Student
  StuID INT, primary key
  LName VARCHAR
  Fname VARCHAR
  Age INT
  Sex VARCHAR
  Major INT
  Advisor INT
  city_code VARCHAR
TABLE SportsInfo
  StuID INT, foreign key
  OnScholarship VARCHAR
  GamesPlayed INT
  HoursPerWeek INT
  SportName VARCHAR
" }, "TABREF4": { "text": "Table Encoder Layer. Column vectors belonging to each table are integrated to get the encoded table vector. For a matrix M \u2208 R n\u00d7d , self-attention function f s (M) \u2208 R 1\u00d7d is defined as follows:", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF5": { "text": "1\u00d7d are trainable parameters. Then, for table t with columns {C j , . . . , C k }, the hidden table vector h T t is calculated as follows:", "type_str": "table", "html": null, "num": null, "content": "
" }, "TABREF6": { "text": "An example SQL query with a link table. What are the papers of Liwen Xiong in 2015? SQL: SELECT DISTINCT", "type_str": "table", "html": null, "num": null, "content": "
= \"Liwen Xiong\"
AND t3.year = 2015;
" }, "TABREF7": { "text": "Table names are concatenated in front of their belonging column names to form supplemented column names (SCNs), but if the stemmed form of a table name is wholly included in the stemmed form of a column name, the table name is not concatenated.", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF11": { "text": "Ablation study results of the proposed models for each hardness level on dev set. SPC, SCN, and JTF represent the use of Statement Position Code, supplemented column names, and JOIN table filtering, respectively.", "type_str": "table", "html": null, "num": null, "content": "
Approaches      SPC SCN JTF    Easy    Med.    Hard    Extra    ALL
RYANSQL          O   O   O    69.2%   43.0%   28.2%   22.4%   43.4%
                 O   O   X    68.0%   40.2%   27.6%   19.4%   41.4%
                 O   X   O    62.0%   38.2%   24.1%   14.7%   37.7%
                 O   X   X    65.2%   37.7%   29.9%   18.8%   39.9%
                 X   O   O    63.2%   39.5%   19.0%   16.5%   38.0%
                 X   O   X    68.4%   41.6%   18.4%   14.7%   39.7%
                 X   X   O    63.2%   39.3%   14.9%   16.5%   37.2%
                 X   X   X    60.0%   38.2%   16.7%   11.8%   35.5%
RYANSQL + BERT   O   O   O    86.0%   70.5%   54.6%   40.6%   66.6%
                 O   O   X    86.8%   66.1%   46.6%   42.4%   63.9%
                 O   X   O    76.4%   58.2%   46.6%   30.6%   56.1%
                 O   X   X    78.0%   63.4%   46.0%   28.8%   58.3%
                 X   O   O    85.6%   66.6%   27.0%   22.4%   57.3%
                 X   O   X    83.6%   68.4%   25.9%   26.5%   58.0%
                 X   X   O    78.4%   60.2%   21.3%   24.7%   52.2%
                 X   X   X    77.2%   60.5%   23.6%   25.9%   52.6%
" }, "TABREF12": { "text": "Component matching F1 scores of RYANSQL and RYANSQL + BERT on dev set.", "type_str": "table", "html": null, "num": null, "content": "
ApproachesSELECT WHERE GROUP ORDER keywordsALL
RYANSQL69.4%47.4% 67.5% 73.9%82.3%43.4%
RYANSQL + BERT88.2%74.4% 78.8% 83.3%88.5%66.6%
Table 11
Evaluations with different pretrained language models on dev set.
SystemDev
RYANSQL43.4%
RYANSQL + BERT-base51.4%
RYANSQL + BERT-large 66.6%
RYANSQL + RoBERTa65.7%
RYANSQL + ELECTRA63.6%
" }, "TABREF13": { "text": "Evaluations of IRNet with two input manipulation methods on dev set.", "type_str": "table", "html": null, "num": null, "content": "
System   SCN JTF    Dev
IRNet     O   O    52.3%
          O   X    52.9%
          X   O    52.3%
          X   X    52.2%
" }, "TABREF14": { "text": "Evaluation results on WikiSQL benchmark with other state-of-the-art systems.", "type_str": "table", "html": null, "num": null, "content": "
System                       Dev LF   Dev EX   Test LF   Test EX
SQLova (Hwang et al. 2019)   81.6%    87.2%    80.7%     86.2%
X-SQL (He et al. 2018)       83.8%    89.5%    83.3%     88.7%
Guo and Gao (2019)           84.3%    90.3%    83.7%     89.2%
HydraNet (Lyu et al. 2020)   83.6%    89.1%    83.8%     89.2%
RYANSQL + BERT               81.6%    87.7%    81.3%     87.0%
LF denotes logical-form accuracy and EX denotes execution accuracy. The table compares RYANSQL + BERT with other WikiSQL state-of-the-art systems; only systems without execution-guided decoding (EGD) are compared.
" } } } }