|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:58.480005Z" |
|
}, |
|
"title": "Testing Cross-Database Semantic Parsers Using Canonical Utterances", |
|
"authors": [ |
|
{ |
|
"first": "Heather", |
|
"middle": [], |
|
"last": "Lent", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "C\u02das", |
|
"middle": [], |
|
"last": "Emih Yavuz", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Tong Niu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yingbo", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Dragomir", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
|
{ |
|
"first": "Victoria", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The benchmark performance of cross-database semantic parsing has climbed steadily in recent years, catalyzed by the wide adoption of pre-trained language models. Yet existing work have shown that state-of-the-art crossdatabase semantic parsers struggle to generalize to novel user utterances, databases and query structures. To obtain transparent details on the strengths and limitation of these models, we propose a diagnostic testing approach based on controlled synthesis of canonical natural language and SQL pairs. Inspired by the CheckList (Ribeiro et al., 2020), we characterize a set of essential capabilities for cross-database semantic parsing models, and detailed the method for synthesizing the corresponding test data. We evaluated a variety of high performing models using the proposed approach, and identified several non-obvious weaknesses across models (e.g. unable to correctly select many columns). Our dataset and code are released as a test suite at http://github.com/hclent/ BehaviorCheckingSemPar.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The benchmark performance of cross-database semantic parsing has climbed steadily in recent years, catalyzed by the wide adoption of pre-trained language models. Yet existing work have shown that state-of-the-art crossdatabase semantic parsers struggle to generalize to novel user utterances, databases and query structures. To obtain transparent details on the strengths and limitation of these models, we propose a diagnostic testing approach based on controlled synthesis of canonical natural language and SQL pairs. Inspired by the CheckList (Ribeiro et al., 2020), we characterize a set of essential capabilities for cross-database semantic parsing models, and detailed the method for synthesizing the corresponding test data. We evaluated a variety of high performing models using the proposed approach, and identified several non-obvious weaknesses across models (e.g. unable to correctly select many columns). Our dataset and code are released as a test suite at http://github.com/hclent/ BehaviorCheckingSemPar.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Cross-database semantic parsing, the task of mapping natural language utterances to SQL queries for any database, has attracted increasing attention since the introduction of benchmarks like Wik-iSQL (Zhong et al., 2017) and Spider (Yu et al., 2018) . The advent of pre-trained language models (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; Lewis et al., 2020) has further accelerated the progress in this area Yu et al., 2020; Shi et al., 2020; Wang et al., 2020; Choi et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 220, |
|
"text": "Wik-iSQL (Zhong et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 249, |
|
"text": "Spider (Yu et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 315, |
|
"text": "(Peters et al., 2018;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 336, |
|
"text": "Devlin et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 354, |
|
"text": "Liu et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 374, |
|
"text": "Lewis et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 441, |
|
"text": "Yu et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 459, |
|
"text": "Shi et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 478, |
|
"text": "Wang et al., 2020;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 497, |
|
"text": "Choi et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite impressive gains on standard benchmarks, studies on cross-database semantic parsing models show that they still suffer from outof-distribution (OOD) generalization when pre-Work done during an internship at Salesforce Research. Figure 1 : The database (top) is applied to our SCFG production rule (middle) to produce a new example for the DISTINCT category (bottom). See Appendix B for production rules of other categories. sented with novel user utterances (Suhr et al., 2020; Radhakrishnan et al., 2020; Shaw et al., 2021) , databases (Suhr et al., 2020) and SQL query structures (Finegan-Dollak et al., 2018; Suhr et al., 2020; Shaw et al., 2021) . As baseline performance climbs ever upward, at what point can we confidently deploy our models to end users, and how will we know we have reached this point?", |
|
"cite_spans": [ |
|
{ |
|
"start": 466, |
|
"end": 485, |
|
"text": "(Suhr et al., 2020;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 513, |
|
"text": "Radhakrishnan et al., 2020;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 532, |
|
"text": "Shaw et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 564, |
|
"text": "(Suhr et al., 2020)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 619, |
|
"text": "(Finegan-Dollak et al., 2018;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 638, |
|
"text": "Suhr et al., 2020;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 657, |
|
"text": "Shaw et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 244, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Inspired by Ribeiro et al. (2020) , which has shown the effectiveness of simple, systematic, and heuristic behavior checking strategies for evaluating the robustness of NLP models, we propose a controllable, non-adversarial unit testing approach to shed more light on the capabilities of crossdatabase semantic parsers. We implement a synchronous context-free grammar (SCFG) to generate natural language questions based on SQL queries (Figure 1 ). This grammar features production rules that evaluate important categories of SQL element types such as clauses (e.g. SELECT and WHERE), as well as commonly used operators including aggre-gators (MAX), conditionals (BETWEEN), and logical operators (OR). We handcraft the rules for these categories to ensure that the generated question-query pairs are simple, natural, unambiguous, and with minimal cross-category overlap.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 33, |
|
"text": "Ribeiro et al. (2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 435, |
|
"end": 444, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We apply our evaluation framework to four stateof-the-art text-to-SQL models, namely BRIDGE (Lin et al., 2020) , RATSQL-RoBERTa and RATSQL-GraPPa (Yu et al., 2020) , and RATSQL-GAP (Shi et al., 2020) , and observe that these models struggle to extend their success on the Spider dev set consistently to our evaluation data, with the exception of a few categories. Further analysis of the fine grained categories shows that they also fail on many rudimentary test cases (e.g., selecting multiple columns and properly producing conjunctions). While existing studies show that the models tend to fail on challenging cases that involve novel user expression (Suhr et al., 2020) and SQL structures (Suhr et al., 2020; Shaw et al., 2021) , our diagnosis exposes more robustness issues in their surface form understanding (even with seemingly simple inputs), and highlights the importance of addressing such issues in the modeling foundation (Bommasani et al., 2021) . Our dataset and code are released as an extensible test suite.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 110, |
|
"text": "BRIDGE (Lin et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 163, |
|
"text": "RATSQL-RoBERTa and RATSQL-GraPPa (Yu et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 199, |
|
"text": "(Shi et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 673, |
|
"text": "(Suhr et al., 2020)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 693, |
|
"end": 712, |
|
"text": "(Suhr et al., 2020;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 713, |
|
"end": 731, |
|
"text": "Shaw et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 935, |
|
"end": 959, |
|
"text": "(Bommasani et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Paraphrasing A number of augmentation methods have been made to create paraphrases of the input query, with methods such as synonym replacement (Kwiatkowski et al., 2013) , use of a paraphrase model (Berant and Liang, 2014) , and backwards utterance generation (Zhong et al., 2020) . While these approaches ensure the creation of additional examples with more variation on the natural language side, they can be vulnerable to error, when a wrong synonym or paraphrase is chosen by a model. Although such errors may amount to just noise when used as additional training data in conjunction with a benchmark dataset, they make evaluation on such generated sets impossible, unless examples with errors are manually removed from the dataset. Wang et al. (2015) demonstrated that it is possible to lessen the reliance on humans for creating a dataset by first generating logical forms and canonical utterances, and then use crowdsourcing to create more naturalsounding paraphrases of the questions. They note that this method is particularly effective when you seek to quickly create data for creating a domain specific parser. Iyer et al. (2017) also demonstrated that crowdsourced annotations from such approaches, as in turn user feedback in an online setting, can be used improve parses and detect incorrect queries. Although originally designed in the context of transfer-based machine translation to generate translation pairs (Chiang, 2005) , SCFG's have also been adapted in previous semantic parsing work (Wong and Mooney, 2006, 2007) for generating new sentence-parse pairs. More recent utilization's of SCFG's for semantic parsing induce the grammar and use the resulting data for additional training and pre-training (Jia and Liang, 2016; Yu et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 170, |
|
"text": "(Kwiatkowski et al., 2013)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 223, |
|
"text": "(Berant and Liang, 2014)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 281, |
|
"text": "(Zhong et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 756, |
|
"text": "Wang et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1123, |
|
"end": 1141, |
|
"text": "Iyer et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1428, |
|
"end": 1442, |
|
"text": "(Chiang, 2005)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1509, |
|
"end": 1518, |
|
"text": "(Wong and", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1519, |
|
"end": 1538, |
|
"text": "Mooney, 2006, 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1724, |
|
"end": 1745, |
|
"text": "(Jia and Liang, 2016;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1746, |
|
"end": 1762, |
|
"text": "Yu et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Robustness Testing Finally, Ribeiro et al. 2020has demonstrated the efficacy of handcrafting templates for generating data points to \"unit test\" the models. We design synchronous context-free grammar (SCFG) production rules to generate test data for specific cross-database semantic parsing capabilities. Other NLP evaluation frameworks that look beyond accuracy and target a more general set of NLP tasks have also been proposed (Goel et al., 2021; Liu et al., 2021; Kiela et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 430, |
|
"end": 449, |
|
"text": "(Goel et al., 2021;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 467, |
|
"text": "Liu et al., 2021;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 487, |
|
"text": "Kiela et al., 2021)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Utterances", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Language Utterances Using SCFG Motivation There are in general two ways to perform behavior testing on a model: one with automatically generated data, the other with manually curated data. In this work we focus on the former because it not only scales with almost no additional cost, but also serves as a pre-filtering mechanism before we test it further with human-in-theloop. The input to text-to-SQL models is a natural question. However, generating natural language has two challenges: (i) it is difficult to automatically produce novel human-like utterances with high-fidelity; (ii) natural language is inherently ambiguous, while input to text-to-SQL models is required to be accurate enough to have a one-toone mapping between the natural question and the SQL query. Motivated by the above requirements, we propose using the inherently non-ambiguous Synchronous context-free grammar (SCFG) for generating canonical natural language utterances in English 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating Canonical Natural", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Details of SCFG SCFG is a type of formal grammar which produce pairs of utterances that share a meaning with each other. There are two key components of a context-free grammar: symbols and production rules that connect them. In our case, the symbols correspond to the SQL elements, which are presented in the first column of Table 1 . 2 The production rules are mappings between SQL elements and natural language words. In Figure 1 we provide such an example where SCFG maps the SQL element DISTINCT to the word \"unique\", hence converting the SQL query \"SELECT DIS-TINCT Column FROM Table\" to the natural language question \"Select unique Column from Table\". The mappings between symbols and query words are intentionally designed to mimic the language in the Spider dataset (Yu et al., 2018) , which ensures that the generated examples remain close to the training distribution. 3 Intuitively, questions produced by the SCFG lie somewhere in-between natural language and SQL: they are not as natural as real human questions, but are much more human-like than the SQL queries. Accommodating such a trade-off ensures that the generated queries are both natural and accurate. More examples of SCFG rules can be found in Appendix C.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 336, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 791, |
|
"text": "(Yu et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 879, |
|
"end": 880, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 332, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 431, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 589, |
|
"text": "Table\"", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generating Canonical Natural", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To thoroughly evaluate each SQL element, we create as many valid question-query pairs as possible for each database in Spider, so that there is adequate representation for infrequent categories. Note that many databases have tables that only correspond to a subset of elements. 4 Consequently the number of collected examples in Table 1 (second column) are not evenly distributed. 5 When generating examples for a given SQL element, the example operates over only one table, and we only introduce the minimum amount of other elements to make the generation grammatical and uncompounded. For example, the operator BETWEEN necessitates SELECT and WHERE clauses to generate a coherent query, but any additional operators, even if they can make the query more compositional, are excluded, as our goal is to unit test each SQL element individually. In turn, our generated data are also intended to be as easy as possible for models to succeed on.", |
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 382, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 329, |
|
"end": 352, |
|
"text": "Table 1 (second column)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generation of evaluation data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To verify that our generated examples are indeed humanlike and accurate, we recruited volunteers 6 who are proficient in SQL to label a subset of 40 randomly chosen question-query pairs, and rate each pair on its \"readability\" and \"semantic equality\". The question-query pairs are chosen such that all cate-Target SQL Element and Example Model Predictions with Highlighted Errors SELECT BRIDGE:SELECT student.ID, student.name, student.dept name, student.tot cred FROM student NL: Select name, id, department name, total credits from student RS+RoB:SELECT student.name, student.ID, student.dept name, Sum(student.tot cred) FROM student GROUP BY student.ID SQL:SELECT name, ID, dept name, tot cred FROM student RS+GraPPa:SELECT student.name, student.ID, student.dept name, Sum(student.tot cred) FROM student RS+GAP:SELECT student.name, student.ID, student.dept name, Sum(student.tot cred) FROM student gories are represented at least twice. Each questionquery pair was annotated by three annotators and we take their majority vote. An example given to annotators can be found in Appendix B. For readability, 77.5% of generated questions were labeled by annotators to be \"easily understandable\"; 17.5% were labeled \"understandable with some effort\" and 5% were labeled \"not understandable\". We obtain this statistics by taking the majority vote of the three annotations for each question and counting a tie as \"not understandable\". In the same manner, annotators also identified 97.5% of questions were \"semantically equivalent\" to their SQL counterpart and 2.5% were \"not equivalent\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human verification of evaluation data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We computed Fleiss' Kappa to measure interannotator agreement for both readability and equivalency. The results were 0.19 and 0.04, respectively, which are generally considered insufficient to claim there is strong agreement. However, we find the low scores a result of the limitation of Fleiss' Kappa, which is more reliable when each example is annotated by more annotators (we have only 3). Reviewing the annotations for readability reveals that there were 14 examples without perfect agreement for readability. For equivalency, there were only 4 of them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human verification of evaluation data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Models We evaluate four leading models on the Spider challenge (Yu et al., 2018) on our generated question-query pairs: BRIDGE (Lin et al., 2020), RATSQL-RoBERTa and RATSQL-GraPPa (Yu et al., 2020) and RATSQL-GAP (Shi et al., 2020) . With the exception of BRIDGE, the other models were developed upon the original RATSQL model (Wang et al., 2020) , which was notable for introducing a relation-aware self-attention mechanism for schema linking. Yu et al. (2020) extended the RATSQL framework by adding pre-training into their setup, and Shi et al. (2020) also incorporates supplementary pre-training triplet data generated by another model. The BRIDGE model is fundamentally different from the others, as it consists of a sequentially-driven architecture, rather than operating over graphs. For schema-linking, BRIDGE uses a custom encoder powered by BERT (Devlin et al., 2019) with attention over the sequences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 231, |
|
"text": "(Shi et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 346, |
|
"text": "(Wang et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 856, |
|
"end": 877, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments 4.1 Experiment Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Evaluation Methodology Our experiments consist of evaluating each model on the generated set of question-query pairs with the canonical language questions as inputs. We evaluate Exact Set Match Accuracy for subsets of the data pertaining to each target SQL element, and then calculate the average score for each SQL token category weighted by number of examples. Table 1 highlights several interesting observations. 7 Most models only perform on par with their baseline (or better) on a few target SQL elements (e.g. DISTINCT, WHERE). More often they perform below the baseline on most elements, with a few extreme outliers for total or near total failure (e.g. HAVING, AND).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 363, |
|
"end": 370, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments 4.1 Experiment Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Controlled Evaluation All models perform below their own baseline accuracies for simple examples that test the SELECT clause. We present an example of such model predictions in Table 2 . One contributing factor to these low scores is the number of columns being selected. Table 3 shows that SQL models are only able to successfully produce queries with a limited number of columns, although basic column selection should not be such a difficult task for these models. While it is not surprising that models show difficulty generalizing to unseen length or structures (Lake and Baroni, 2017), this finding is concerning because there are many practical use cases where users will need to select more than four columns. 8", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 184, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 279, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We propose a simple and controllable approach for synthesizing text-to-SQL pairs for unit testing model performance on various semantic categories. Our controlled test suites allow for more extensive and fine-grained evaluation of state-of-the-art text-to-SQL models, which reveal a general lack of robustness in generalizing beyond the benchmark examples across several categories such as SELECT and WHERE. More importantly, our study highlights the importance of developing evaluation strategies beyond fixed test and dev set accuracy for understanding real progress made by the stateof-the-art text-to-SQL models and the remaining key challenges. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Example database schema given to annotators:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Example of Annotation Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Example question-query pair given to annotators: /Question: Select year from movie when movie id is greater than 1 \\Query: SELECT Year FROM movie WHERE movie id > 1 ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Example of Annotation Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Annotators are asked to choose one answer from the list below, to describe the readability and equivalency of the question-query pair, above: Base \u00d1 x Select all columns from parties in events when, SELECT * FROM Parties in Events WHERE y ConjunctionPhrase \u00d1 x ColEqualityValue and , ColEqualityValue AND y ColEqualityValue \u00d1 x event id equals 9, Event ID = 9 y ColEqualityValue \u00d1 x role code equals Organizer, Role Code = 'Organizer' y ColEqualityValue \u00d1 x party id equals 4, Party ID = 4 y Output Production: / NL: Select all columns from parties in events when event id equals 9 and role code equals Organizer and party id equals 4 \\SQL:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Example of Annotation Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "SELECT * FROM Parties in Events WHERE Event ID = 9 AND Role Code = 'Organizer' AND Party ID = 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Example of Annotation Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Figure 3: Example SCFG Production rules for other selected SQL operators", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Example of Annotation Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This method is also extendable to other languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We collected the SQL elements from https://www. w3schools.com/sql/ and https://www.techonthenet. com/sqlite/.3 Competent performance across categories inTable 1demonstrate our data overlap with the training distribution.4 For example, a table with only text-type columns can not be used to generate pairs with mathematical concepts minimum or less than.5 To have a uniform distribution, one may perform subsampling (which wastes valuable data), or design a model to automatically generate new tables -we leave the latter as future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our annotation task posed no risk or harm to annotators, and required 30 minutes of the volunteers' time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The metrics inTable 1are diagnostic instead of explanatory. There can be multiple factors affecting the model performance on an evaluation point and our tests cannot isolate them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For example, the large tables in Spider's soccer 1 database", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank our reviewers for their helpful feedback. Heather Lent received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 801199.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": " Table 5 : Example predictions on selected target SQL elements from the BRIDGE, and RATSQL (RS) based models using RoBERTa (+RoB), GraPPa, and GAP.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 8, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Semantic parsing via paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1415--1425", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-1133" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic pars- ing via paraphrasing. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415- 1425, Baltimore, Maryland. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A hierarchical phrase-based model for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "263--270", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1219840.1219873" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Pro- ceedings of the 43rd Annual Meeting of the As- sociation for Computational Linguistics (ACL'05), pages 263-270, Ann Arbor, Michigan. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Ryansql: Recursively applying sketch-based slot fillings for complex textto-sql in cross-domain databases", |
|
"authors": [ |
|
{ |
|
"first": "Donghyun", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myeong Cheol", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunggyun", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong Ryeol", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "DongHyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2020. Ryansql: Recursively applying sketch-based slot fillings for complex text- to-sql in cross-domain databases.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Improving text-to-SQL evaluation methodology", |
|
"authors": [ |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Finegan-Dollak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Kummerfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Ramanathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sesh", |
|
"middle": [], |
|
"last": "Sadasivam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "351--360", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1033" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 351-360, Melbourne, Australia. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Robustness gym: Unifying the NLP evaluation landscape", |
|
"authors": [ |
|
{ |
|
"first": "Karan", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Nazneen", |

"middle": [ |

"Fatema" |

], |

"last": "Rajani", |

"suffix": "" |

}, |

{ |

"first": "Jesse", |

"middle": [], |

"last": "Vig", |

"suffix": "" |

}, |

{ |

"first": "Samson", |

"middle": [], |

"last": "Tan", |

"suffix": "" |

}, |

{ |

"first": "Jason", |

"middle": [], |

"last": "Wu", |

"suffix": "" |

}, |

{ |

"first": "Stephan", |

"middle": [], |

"last": "Zheng", |

"suffix": "" |

}, |

{ |

"first": "Caiming", |

"middle": [], |

"last": "Xiong", |

"suffix": "" |

}, |

{ |

"first": "Mohit", |

"middle": [], |

"last": "Bansal", |

"suffix": "" |

}, |

{ |

"first": "Christopher", |

"middle": [], |

"last": "R\u00e9", |

"suffix": "" |

} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karan Goel, Nazneen Fatema Rajani, Jesse Vig, Sam- son Tan, Jason Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, and Christopher R\u00e9. 2021. Robust- ness gym: Unifying the NLP evaluation landscape. CoRR, abs/2101.04840.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning a neural semantic parser from user feedback", |
|
"authors": [ |
|
{ |
|
"first": "Srinivasan", |
|
"middle": [], |
|
"last": "Iyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alvin", |
|
"middle": [], |
|
"last": "Cheung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jayant", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "963--973", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1089" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learn- ing a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 963-973, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Data recombination for neural semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "12--22", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 12-22, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Dynabench: Rethinking benchmarking in NLP", |
|
"authors": [ |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Bartolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Divyansh", |
|
"middle": [], |
|
"last": "Kaushik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atticus", |
|
"middle": [], |
|
"last": "Geiger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengxuan", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertie", |
|
"middle": [], |
|
"last": "Vidgen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grusha", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pratik", |
|
"middle": [], |
|
"last": "Ringshia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyi", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Thrush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeerak", |
|
"middle": [], |
|
"last": "Waseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4110--4124", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.324" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vid- gen, Grusha Prasad, Amanpreet Singh, Pratik Ring- shia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mo- hit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 4110-4124. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Scaling semantic parsers with on-the-fly ontology matching", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1545--1556", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1545-1556, Seattle, Washington, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks", |
|
"authors": [ |
|
{ |

"first": "Brenden", |

"middle": [ |

"M." |

], |

"last": "Lake", |

"suffix": "" |

}, |

{ |

"first": "Marco", |

"middle": [], |

"last": "Baroni", |

"suffix": "" |

} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brenden M. Lake and Marco Baroni. 2017. Still not systematic after all these years: On the composi- tional skills of sequence-to-sequence recurrent net- works. CoRR, abs/1711.00350.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Naman", |

"middle": [], |

"last": "Goyal", |

"suffix": "" |

}, |

{ |

"first": "Marjan", |

"middle": [], |

"last": "Ghazvininejad", |

"suffix": "" |

}, |

{ |

"first": "Abdelrahman", |

"middle": [], |

"last": "Mohamed", |

"suffix": "" |

}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7871--7880", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.703" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Bridging textual and tabular data for crossdomain text-to-SQL semantic parsing", |
|
"authors": [ |
|
{ |

"first": "Xi", |

"middle": [ |

"Victoria" |

], |

"last": "Lin", |

"suffix": "" |

}, |

{ |

"first": "Richard", |

"middle": [], |

"last": "Socher", |

"suffix": "" |

}, |

{ |

"first": "Caiming", |

"middle": [], |

"last": "Xiong", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4870--4888", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.438" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2020. Bridging textual and tabular data for cross- domain text-to-SQL semantic parsing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4870-4888, Online. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "EXPLAIN-ABOARD: an explainable leaderboard for NLP", |
|
"authors": [ |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinlan", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhe", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuaichen", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junqi", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihuiwen", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaichen Chang, Junqi Dai, Yixin Liu, Zihui- wen Ye, and Graham Neubig. 2021. EXPLAIN- ABOARD: an explainable leaderboard for NLP. CoRR, abs/2104.06387.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Colloql: Robust cross-domain textto-sql over search queries", |
|
"authors": [ |
|
{ |

"first": "Karthik", |

"middle": [], |

"last": "Radhakrishnan", |

"suffix": "" |

}, |

{ |

"first": "Arvind", |

"middle": [], |

"last": "Srikantan", |

"suffix": "" |

}, |

{ |

"first": "Xi", |

"middle": [ |

"Victoria" |

], |

"last": "Lin", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karthik Radhakrishnan, Arvind Srikantan, and Xi Vic- toria Lin. 2020. Colloql: Robust cross-domain text- to-sql over search queries. CoRR, abs/2010.09927.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList", |
|
"authors": [ |
|
{ |

"first": "Marco", |

"middle": [ |

"Tulio" |

], |

"last": "Ribeiro", |

"suffix": "" |

}, |

{ |

"first": "Tongshuang", |

"middle": [], |

"last": "Wu", |

"suffix": "" |

}, |

{ |

"first": "Carlos", |

"middle": [], |

"last": "Guestrin", |

"suffix": "" |

}, |

{ |

"first": "Sameer", |

"middle": [], |

"last": "Singh", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4902--4912", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.442" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Compositional generalization and natural language variation: Can a semantic parsing approach handle both?", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Shaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "922--938", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-long.75" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional general- ization and natural language variation: Can a se- mantic parsing approach handle both? In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing, ACL/IJCNLP 2021, (Volume 1: Long Pa- pers), Virtual Event, August 1-6, 2021, pages 922- 938. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Cicero Nogueira dos Santos, and Bing Xiang. 2020. Learning contextual representations for semantic parsing with generation", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiguo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henghui", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"Hanbo" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |

{ |

"first": "Cicero", |

"middle": [ |

"Nogueira" |

], |

"last": "dos Santos", |

"suffix": "" |

}, |

{ |

"first": "Bing", |

"middle": [], |

"last": "Xiang", |

"suffix": "" |

} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. 2020. Learning con- textual representations for semantic parsing with generation-augmented pre-training.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Exploring unexplored generalization challenges for cross-database semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Alane", |
|
"middle": [], |
|
"last": "Suhr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Shaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8372--8388", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.742" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alane Suhr, Ming-Wei Chang, Peter Shaw, and Ken- ton Lee. 2020. Exploring unexplored generalization challenges for cross-database semantic parsing. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8372- 8388, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oleksandr", |
|
"middle": [], |
|
"last": "Polozov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7567--7578", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.677" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7567-7578, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Building a semantic parser overnight", |
|
"authors": [ |
|
{ |
|
"first": "Yushi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1332--1342", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1129" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 1332-1342, Beijing, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Learning for semantic parsing with statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yuk", |
|
"middle": [ |
|
"Wah" |
|
], |
|
"last": "Wong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuk Wah Wong and Raymond Mooney. 2006. Learn- ing for semantic parsing with statistical machine translation. In Proceedings of the Human Language", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td colspan=\"3\">Exact Set Match Acc.</td></tr><tr><td colspan=\"2\">Target SQL Element #</td><td>BRIDGE \u2020</td><td colspan=\"2\">RATSQL+ RoBERTa GraPPa GAP</td></tr><tr><td colspan=\"2\">Spider Dev 1034</td><td>68.2</td><td colspan=\"2\">69.6 73.4 71.8</td></tr><tr><td/><td>SELECT 1700</td><td>53.6</td><td colspan=\"2\">46.5 62.6 73.5</td></tr><tr><td>Basic Clauses</td><td>DISTINCT 850 WHERE 1003 ORDER BY 1946 GROUP BY 653 HAVING 604</td><td>86.4 73.2 51.0 35.5 0.1</td><td colspan=\"2\">86.6 94.5 88.3 70.3 84.4 82.1 54.7 71.4 76.5 51.3 45.9 5.7 0.0 0.0 0.0</td></tr><tr><td/><td>Cat. Avg.</td><td>53.4</td><td colspan=\"2\">53.7 65.7 64.4</td></tr><tr><td>Aggregate Ops</td><td>MIN 794 MAX 794 SUM 794 COUNT 850 AVG 794</td><td>74.5 75.3 66.0 34.4 56.7</td><td colspan=\"2\">59.1 93.7 83.2 17.5 85.9 47.4 71.1 52.2 52.1 56.3 70.3 66.8 58.1 81.8 79.7</td></tr><tr><td/><td>Cat. Avg.</td><td>61.0</td><td colspan=\"2\">52.5 76.7 65.9</td></tr><tr><td>Condition Ops</td><td>\u010f, \u0103, \u0105, \u011b 440 ! \" 397 BETWEEN 256 Cat. Avg.</td><td>55.2 27.2 65.9 49.4</td><td colspan=\"2\">37.9 61.3 88.6 68.3 62.4 92.4 26.7 34.9 51.0 44.3 52.9 77.3</td></tr><tr><td>Logic Ops</td><td>AND 401 OR 401 AND & OR 369 Cat. Avg.</td><td>3.2 5.1 4.1 4.1</td><td>4.5 5.0 4.3 4.6</td><td>7.2 16.2 8.2 17.1 8.6 18.1 8.0 17.1</td></tr><tr><td/><td>Overall Avg.</td><td>45.0</td><td colspan=\"2\">42.9 55.3 55.6</td></tr></table>", |
|
"text": "Table 1: Results on the models per our SCFG categories. # shows the number of test examples present. Cat. Avg. reflects the category average weighted by the number of examples per each target SQL element.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td colspan=\"3\">Exact Set Match Acc.</td></tr><tr><td colspan=\"2\">Columns #</td><td>BRIDGE</td><td colspan=\"2\">RATSQL+ RoBERTa GraPPa GAP</td></tr><tr><td>1</td><td>852</td><td>69.1</td><td colspan=\"2\">52.3 70.8 85.4</td></tr><tr><td>2</td><td>253</td><td>60.9</td><td colspan=\"2\">68.8 81.0 88.9</td></tr><tr><td>3</td><td>191</td><td>68.3</td><td colspan=\"2\">63.4 85.9 85.9</td></tr><tr><td>4</td><td>154</td><td>21.0</td><td colspan=\"2\">32.5 61.0 81.8</td></tr><tr><td>5</td><td>122</td><td>0.0</td><td>0.0</td><td>0.0 0.0</td></tr><tr><td>6</td><td>69</td><td>0.0</td><td>0.0</td><td>0.0 0.0</td></tr></table>", |
|
"text": "Model predictions on a randomly chosen SELECT example. See Appendix B for additional qualitative examples of model predictions on different categories.", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td colspan=\"6\">A Model Performance on Dev Examples</td></tr><tr><td/><td/><td colspan=\"5\">Corresponding to Categories</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"3\">Exact Set Match Acc.</td></tr><tr><td/><td/><td colspan=\"3\">Target SQL Element #train #dev</td><td>BRIDGE</td><td colspan=\"2\">RATSQL+ RoBERTa GraPPa GAP</td></tr><tr><td colspan=\"2\">Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin</td><td>SELECT</td><td>213</td><td>32</td><td>82.3</td><td colspan=\"2\">90.6 96.9 81.2</td></tr><tr><td colspan=\"2\">Wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev,</td><td>DISTINCT</td><td>113</td><td>5</td><td>86.7</td><td>100</td><td>100 60.0</td></tr><tr><td colspan=\"2\">Richard Socher, and Caiming Xiong. 2020. Grappa:</td><td>WHERE</td><td>343</td><td>61</td><td>77.2</td><td colspan=\"2\">83.6 83.6 100</td></tr><tr><td colspan=\"2\">Grammar-augmented pre-training for table semantic</td><td>ORDER BY</td><td>560</td><td>83</td><td>78.3</td><td colspan=\"2\">88.0 90.4 78.3</td></tr><tr><td>parsing.</td><td/><td>GROUP BY</td><td>16</td><td>8</td><td>83.3</td><td colspan=\"2\">50.0 50.0 75.0</td></tr><tr><td colspan=\"2\">Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga,</td><td>MIN</td><td>2</td><td>4</td><td>16.7</td><td colspan=\"2\">0.0 50.0 0.0</td></tr><tr><td colspan=\"2\">Dongxu Wang, Zifan Li, James Ma, Irene Li,</td><td>MAX</td><td>10</td><td>5</td><td>0.0</td><td>0.0</td><td>0.0 0.0</td></tr><tr><td colspan=\"2\">Qingning Yao, Shanelle Roman, Zilin Zhang,</td><td>SUM</td><td>25</td><td>2</td><td>100</td><td>100</td><td>100 100</td></tr><tr><td colspan=\"2\">and Dragomir Radev. 2018. 
Spider: A large-</td><td>COUNT</td><td>245</td><td>40</td><td>99.2</td><td colspan=\"2\">100 97.5 97.5</td></tr><tr><td colspan=\"2\">scale human-labeled dataset for complex and cross-</td><td><=, <, >, >=</td><td>70</td><td>6</td><td>77.8</td><td>66.7</td><td>100 100</td></tr><tr><td colspan=\"2\">domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical</td><td>!=</td><td>14</td><td>52</td><td>83.3</td><td colspan=\"2\">85.7 85.7 100</td></tr><tr><td colspan=\"2\">Methods in Natural Language Processing, pages</td><td>AND</td><td>50</td><td>5</td><td>66.7</td><td>60.0</td><td>100 60.0</td></tr><tr><td colspan=\"2\">3911-3921, Brussels, Belgium. Association for</td><td>OR</td><td>54</td><td>10</td><td>100</td><td>88.0</td><td>100 78.3</td></tr><tr><td colspan=\"2\">Computational Linguistics.</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Victor Zhong, Mike Lewis, Sida I. Wang, and Luke</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Zettlemoyer. 2020. Grounded adaptation for zero-</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">shot executable semantic parsing. In Proceedings of</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">the 2020 Conference on Empirical Methods in Nat-</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">ural Language Processing (EMNLP), pages 6869-</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">6882, Online. Association for Computational Lin-</td><td/><td/><td/><td/><td/></tr><tr><td>guistics.</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Victor Zhong, Caiming Xiong, and Richard Socher.</td><td/><td/><td/><td/><td/></tr><tr><td>2017.</td><td>Seq2sql: Generating structured queries</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">from natural language using reinforcement learning.</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">CoRR, abs/1709.00103.</td><td/><td/><td/><td/><td/></tr></table>", |
"text": "",
"num": null
},
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Performance of models on Spider Dev by our categories. SCFG elements that had zero corresponding examples are removed from the table. Here we include the number of examples in Spider training and Spider dev to demonstrate the underlying training and development distributions. Examples counted here are strictly relate to the chosen category. (i.e. examples with multiple SQL elements that do not pertain exactly to the categories are excluded from these counts).", |
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">\u2022 I have some problems understanding the</td><td/></tr><tr><td colspan=\"2\">question, but I can understand with some</td><td/></tr><tr><td colspan=\"2\">effort</td><td/></tr><tr><td colspan=\"2\">\u2022 I do not understand the question after</td><td/></tr><tr><td colspan=\"2\">trying my best to interpret it</td><td/></tr><tr><td colspan=\"2\">2. Equivalency:</td><td/></tr><tr><td colspan=\"2\">\u2022 The question and the SQL query match</td><td/></tr><tr><td colspan=\"2\">perfectly Output Production:</td><td/></tr><tr><td colspan=\"2\">NL: Select class a from school performance \u2022 The question and SQL query do not fully SQL: SELECT Class A FROM school performance</td><td/></tr><tr><td colspan=\"2\">match, but the answer to the question can</td><td/></tr><tr><td colspan=\"2\">be inferred from the SQL query results Rule \u00d1 x Select Columns from Table, SELECT Columns FROM Tabley</td><td/></tr><tr><td colspan=\"3\">Example: \u2022 The SQL query does not return the an-Table \u00d1 x people , people y swer to the question C Example Model Predictions and SCFG Production Rules See Table 5 for model predictions, and Figures 2 and 3 for example SCFG production rules. Columns \u00d1 x (height, name, weight, people id) , (Height, Name, Weight, People ID) y Output Production: NL: Select height, name, weight, people id from people SQL: SELECT Height, Name, Weight, People ID FROM people ORDER BY / NL: Select minimum monthly rental from student addresses \\SQL: SELECT MIN(monthly rental) FROM Student Addresses <=, <,\u0105,\u0105\" Rule \u00d1 x Output Production: Rule \u00d1 x AND</td></tr><tr><td>Rule \u00d1</td><td>x Base ConjunctionPhrase ConjunctionPhrase ColEqualityValue,</td><td>Base ConjunctionPhrase</td></tr><tr><td colspan=\"2\">ConjunctionPhrase ColEqualityValuey</td><td/></tr><tr><td>Example:</td><td/><td/></tr><tr><td/><td>1. 
Readability:</td><td/></tr><tr><td/><td colspan=\"2\">\u2022 I can easily understand the question</td></tr></table>", |
|
"text": "Select Column from Table, SELECT Column FROM Tabley Example: Table \u00d1 x school performance , school performance y Column \u00d1 x class a , Class A y Select Column1 from Table sorted by Column2, SELECT Column1 FROM Table ORDER BYColumn2y Example: Table \u00d1 x circuits , circuits y Column1 \u00d1 x longitude, lng y Column2 \u00d1 x latitude, lat y Output Production: / NL: Select longitude from circuits sorted by latitude \\SQL: SELECT lng FROM circuits ORDER BY lat Rule \u00d1 x Select Column1 from Table sorted by Column2 Order, SELECT Column1 FROM Table ORDER BY Column2 Ordery Example: Table \u00d1 x debate people , debate people y Column1 \u00d1 x debate id, debate id y Column2 \u00d1 x negative, Negativ y Order \u00d1 x in ascending order , ASC y Output Production: / NL: Select debate id from debate people sorted by negative in ascending order \\SQL: SELECT Debate ID FROM debate people ORDER BY Negative ASC HAVING Rule \u00d1 x Select Column1 from Table grouped by Column1 with Degree imum Column2 equal to ColumnValue, SELECT Column1 FROM Table GROUP BY Column1 HAVING Degree Column2 \" ColumnValue y Example: Table \u00d1 x climber , climber y Column1 \u00d1 x name, Name y Column2 \u00d1 x points, Points y Degree \u00d1 x min , MIN y ColumnValue \u00d1 x 6.0, 6.0 y Output Production: / NL: Select name from climber grouped by name with minimum points equal to 6.0 \\SQL: SELECT Name FROM climber GROUP BY Name HAVING MIN(Points) = 6.0 Figure 2: Example SCFG Production Rules for selected SQL Clauses 83 MIN Rule \u00d1 x Select minimum Column from Table, SELECT MIN( Column) FROM Tabley Example: Table \u00d1 x student addresses , Student Addresses y Column \u00d1 x monthly rental , monthly rental y Select Column1 from Table when Column2 Equality ColumnValue, SELECT Column1 FROM Table WHERE Column2 Equality ColumnValuey Example:Table \u00d1x faculty , faculty y Column1 \u00d1 x faculty, Faculty y Column2 \u00d1 x campus, Campus y Equality \u00d1 x greater 
than, > y ColumnValue \u00d1 x 20 , 20 y Output Production: / NL: Select faculty from faculty when campus is greater than 20 \\SQL: SELECT Faculty FROM faculty WHERE Campus > 20", |
"num": null
}
}
}
}