SParC 1.0

Yale & Salesforce Semantic Parsing and Text-to-SQL in Context Challenge

What is SParC?

SParC is a dataset for cross-domain Semantic Parsing in Context. It is the context-dependent/multi-turn version of the Spider task, a complex and cross-domain text-to-SQL challenge. SParC consists of 4,298 coherent question sequences (12k+ individual questions annotated with SQL queries by 14 Yale students), obtained from user interactions with 200 complex databases over 138 domains.
SParC Paper (ACL'19) SParC Post
Related challenge: Spider introduces the first complex and cross-domain text-to-SQL task. It is the context-agnostic version of the SParC task. Spider Challenge (EMNLP'18)

News

Why SParC?

SParC is built upon the Spider dataset. Compared to other existing context-dependent semantic parsing/text-to-SQL datasets such as ATIS, SParC:
  • demonstrates complex contextual dependencies (annotated by 15 Yale computer science students),
  • has greater semantic diversity, due to the broad coverage of SQL logic patterns in the Spider dataset, and
  • requires generalization to new domains, due to its cross-domain nature and the unseen databases at test time.

Getting Started

The data is split into training, development, and unreleased test sets. Download a copy of the dataset (distributed under the CC BY-SA 4.0 license):

SParC Dataset
Details of the baseline models and the evaluation script can be found on the following GitHub page: SParC GitHub Page
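If you want a quick look at the data before running the baselines, a minimal loading sketch might look like the following. The file name and field names used here ("database_id", "interaction", "utterance", "query") are assumptions based on the release layout; check the GitHub page for the authoritative schema.

    import json

    # Sketch: print the question sequence and SQL queries of the first
    # interaction. File and field names are assumptions; see the SParC
    # GitHub page for the authoritative data format.
    with open("sparc/train.json") as f:
        interactions = json.load(f)

    first = interactions[0]
    print("Database:", first["database_id"])
    for i, turn in enumerate(first["interaction"], 1):
        print(f"Turn {i} question:", turn["utterance"])
        print(f"Turn {i} SQL:", turn["query"])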

Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, we do not release the test set to the public. Instead, we ask you to submit your model so that we can run it on the test set for you. Here is a tutorial walking you through the official evaluation of your model:

Submission Tutorial

Data Examples

Some examples look like the following:

[Example illustration]

Another example: [Example illustration]

Have Questions or Want to Contribute?

Ask us questions at our GitHub issues page or contact Tao Yu, Rui Zhang, or Xi Victoria Lin.

We expect the dataset to evolve. We would greatly appreciate it if you could donate your non-private databases or SQL queries to the project.

Acknowledgement

We thank Tianze Shi and the anonymous reviewers for their valuable comments on this project, and Melvin Gruesbeck for designing the example illustrations. We also thank Pranav Rajpurkar for giving us permission to build this website based on SQuAD.

Part of our SParC team at YINS:

[Team photo]

Leaderboard - Exact Set Match without Values

For exact set match evaluation, instead of simply comparing the predicted and gold SQL queries as strings, we decompose each SQL query into several clauses and conduct a set comparison within each clause. Please refer to the paper and the GitHub page for more details.
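As an illustration only (this is not the official evaluation script, and the clause splitting below is deliberately simplified), clause-level set comparison means that two queries differing only in the order of items within a clause still count as a match:

    # Toy illustration of clause-level set comparison. The official script
    # parses SQL properly into all clauses; here only the SELECT clause is
    # treated as a set, which is enough to show the idea.
    def clause_sets(sql):
        sql = sql.lower()
        select_part, _, rest = sql.partition("from")
        columns = frozenset(c.strip() for c in
                            select_part.replace("select", "").split(","))
        return {"select": columns, "rest": rest.strip()}

    def exact_set_match(pred, gold):
        return clause_sets(pred) == clause_sets(gold)

    # Column order inside SELECT does not matter:
    print(exact_set_match("SELECT name, age FROM dogs",
                          "SELECT age, name FROM dogs"))   # True
    # A different FROM/WHERE part still fails:
    print(exact_set_match("SELECT name FROM dogs",
                          "SELECT name FROM cats"))        # False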

Rank  Model                                   Question Match  Interaction Match

1     EditSQL (Zhang et al. EMNLP '19) code   47.9            25.3
      Yale University & Salesforce Research, Sep 1, 2019

2     CD-Seq2Seq (Yu et al. ACL '19) code     23.2            7.5
      Yale University & Salesforce Research, May 17, 2019

3     SyntaxSQL-con (Yu et al. ACL '19) code  20.2            5.2
      Yale University & Salesforce Research, May 17, 2019

Leaderboard - Execution with Value Selection

Our current models do not predict values in SQL conditions, so we do not report execution accuracies. However, we encourage you to report them in future submissions. For value prediction, you can assume that a list of gold values for each question is given; your model has to fill them into the right slots in the SQL query. Will your system be the first one to appear on this leaderboard?
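For illustration only, the value-selection setting described above might be sketched as follows. The "value" placeholder convention and the helper below are hypothetical, not the official submission interface:

    # Hypothetical sketch of value selection: a predicted SQL skeleton with
    # "value" placeholders plus the given list of gold values. A real system
    # must decide which value belongs in which slot; here they are simply
    # filled left to right.
    def fill_values(sql_skeleton, gold_values):
        for v in gold_values:
            sql_skeleton = sql_skeleton.replace("value", repr(v), 1)
        return sql_skeleton

    pred = "SELECT name FROM dogs WHERE age > value AND breed = value"
    print(fill_values(pred, [3, "husky"]))
    # SELECT name FROM dogs WHERE age > 3 AND breed = 'husky'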

Rank  Model                                   Question Match  Interaction Match