---
pretty_name: J
dataset_info:
- config_name: Github_easy
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 1191732.6278950076
    num_examples: 1169
  - name: val
    num_bytes: 194714.22748327328
    num_examples: 191
  - name: test
    num_bytes: 594337.144621719
    num_examples: 583
  download_size: 552088
  dataset_size: 1980784.0
- config_name: Github_hard
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 12177171.572580645
    num_examples: 746
  - name: val
    num_bytes: 1991440.927419355
    num_examples: 122
  - name: test
    num_bytes: 6072262.5
    num_examples: 372
  download_size: 3631293
  dataset_size: 20240875.0
- config_name: Github_medium
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 4810334.171052632
    num_examples: 1189
  - name: val
    num_bytes: 784865.2894736842
    num_examples: 194
  - name: test
    num_bytes: 2399098.539473684
    num_examples: 593
  download_size: 1659118
  dataset_size: 7994298.0
- config_name: Github_trivial
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 467333.24324324325
    num_examples: 266
  - name: val
    num_bytes: 77303.24324324324
    num_examples: 44
  - name: test
    num_bytes: 235423.51351351352
    num_examples: 134
  download_size: 158044
  dataset_size: 780060.0
- config_name: Github_ultra
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 7311744.743902439
    num_examples: 98
  - name: val
    num_bytes: 1193754.243902439
    num_examples: 16
  - name: test
    num_bytes: 3730482.012195122
    num_examples: 50
  download_size: 2221455
  dataset_size: 12235981.0
- config_name: Glaiveai2K
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 865943.3989455184
    num_examples: 1026
  - name: val
    num_bytes: 141791.9015817223
    num_examples: 168
  - name: test
    num_bytes: 432971.6994727592
    num_examples: 513
  download_size: 284264
  dataset_size: 1440707.0
- config_name: JsonSchemaStore
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 13308367.977642277
    num_examples: 295
  - name: val
    num_bytes: 2210542.4776422763
    num_examples: 49
  - name: test
    num_bytes: 6676740.544715447
    num_examples: 148
  download_size: 4019966
  dataset_size: 22195651.0
- config_name: Kubernetes
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 15388503.69924812
    num_examples: 639
  - name: val
    num_bytes: 2528627.3684210526
    num_examples: 105
  - name: test
    num_bytes: 7706292.932330827
    num_examples: 320
  download_size: 6819424
  dataset_size: 25623424.0
- config_name: Snowplow
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 969083.2952853598
    num_examples: 242
  - name: val
    num_bytes: 160179.0570719603
    num_examples: 40
  - name: test
    num_bytes: 484541.6476426799
    num_examples: 121
  download_size: 298277
  dataset_size: 1613804.0
- config_name: WashingtonPost
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 1604526.016
    num_examples: 74
  - name: val
    num_bytes: 281876.192
    num_examples: 13
  - name: test
    num_bytes: 823945.792
    num_examples: 38
  download_size: 565170
  dataset_size: 2710348.0
- config_name: default
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_bytes: 58273912.61728395
    num_examples: 5753
  - name: val
    num_bytes: 9491162.197530864
    num_examples: 937
  - name: test
    num_bytes: 29050857.185185187
    num_examples: 2868
  download_size: 21016788
  dataset_size: 96815932.0
configs:
- config_name: Github_easy
  data_files:
  - split: train
    path: Github_easy/train-*
  - split: val
    path: Github_easy/val-*
  - split: test
    path: Github_easy/test-*
- config_name: Github_hard
  data_files:
  - split: train
    path: Github_hard/train-*
  - split: val
    path: Github_hard/val-*
  - split: test
    path: Github_hard/test-*
- config_name: Github_medium
  data_files:
  - split: train
    path: Github_medium/train-*
  - split: val
    path: Github_medium/val-*
  - split: test
    path: Github_medium/test-*
- config_name: Github_trivial
  data_files:
  - split: train
    path: Github_trivial/train-*
  - split: val
    path: Github_trivial/val-*
  - split: test
    path: Github_trivial/test-*
- config_name: Github_ultra
  data_files:
  - split: train
    path: Github_ultra/train-*
  - split: val
    path: Github_ultra/val-*
  - split: test
    path: Github_ultra/test-*
- config_name: Glaiveai2K
  data_files:
  - split: train
    path: Glaiveai2K/train-*
  - split: val
    path: Glaiveai2K/val-*
  - split: test
    path: Glaiveai2K/test-*
- config_name: JsonSchemaStore
  data_files:
  - split: train
    path: JsonSchemaStore/train-*
  - split: val
    path: JsonSchemaStore/val-*
  - split: test
    path: JsonSchemaStore/test-*
- config_name: Kubernetes
  data_files:
  - split: train
    path: Kubernetes/train-*
  - split: val
    path: Kubernetes/val-*
  - split: test
    path: Kubernetes/test-*
- config_name: Snowplow
  data_files:
  - split: train
    path: Snowplow/train-*
  - split: val
    path: Snowplow/val-*
  - split: test
    path: Snowplow/test-*
- config_name: WashingtonPost
  data_files:
  - split: train
    path: WashingtonPost/train-*
  - split: val
    path: WashingtonPost/val-*
  - split: test
    path: WashingtonPost/test-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
license: mit
task_categories:
- text-generation
---

# JSONSchemaBench

[![Paper](https://img.shields.io/badge/Paper-arXiv-blue)](https://arxiv.org/abs/2501.10868)
[![GitHub](https://img.shields.io/badge/Code-GitHub-blue)](https://github.com/guidance-ai/jsonschemabench)

JSONSchemaBench is a benchmark of **real-world JSON schemas** designed to evaluate **structured output generation** for Large Language Models (LLMs). It contains approximately **10,000 JSON schemas**, capturing diverse constraints and complexities.
## 📌 Dataset Overview

- **Purpose:** Evaluate the **efficiency** and **coverage** of structured output generation.
- **Sources:** GitHub, Kubernetes, API specifications, curated collections.
- **Schemas:** Categorized by complexity and domain.

### 📊 Dataset Breakdown

| Dataset         | Category            | Count |
| --------------- | ------------------- | ----- |
| GlaiveAI-2K     | Function Call       | 1707  |
| Github-Trivial  | Misc                | 444   |
| Github-Easy     | Misc                | 1943  |
| Snowplow        | Operational API     | 403   |
| Github-Medium   | Misc                | 1976  |
| Kubernetes      | Kubernetes API      | 1064  |
| Washington Post | Resource Access API | 125   |
| Github-Hard     | Misc                | 1240  |
| JSONSchemaStore | Misc                | 492   |
| Github-Ultra    | Misc                | 164   |
| **Total**       |                     | 9558  |

## 📥 Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("epfl-dlab/JSONSchemaBench")
print(dataset)
```

## 🔍 Data Structure

Each dataset split contains:

- `"json_schema"`: The schema definition.
- `"unique_id"`: A unique identifier for the schema.

🚀 **For more details, check out the [paper](https://arxiv.org/abs/2501.10868).**

## 📚 Citation

```bibtex
@misc{geng2025jsonschemabench,
      title={Generating Structured Outputs from Language Models: Benchmark and Studies},
      author={Saibo Geng et al.},
      year={2025},
      eprint={2501.10868},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.10868}
}
```

## License

This dataset is provided under the [MIT License](https://opensource.org/licenses/MIT). Please ensure that you comply with the license terms when using or distributing this dataset.

## Acknowledgements

We would like to thank the contributors and maintainers of the JSON schema projects and the open-source community for their invaluable work and support.
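## 🧩 Example Record

As a minimal sketch of how a record can be consumed: each row provides the two fields listed above, with `"json_schema"` stored as a serialized string that must be parsed before use. The identifier and schema content below are hypothetical placeholders; only the field names come from this card.

```python
import json

# Hypothetical record mirroring the two fields every split provides:
# "json_schema" (a JSON Schema serialized as a string) and "unique_id".
row = {
    "unique_id": "Github_easy/example-0",  # hypothetical identifier
    "json_schema": json.dumps({
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
    }),
}

# Parse the schema string before inspecting it or handing it to a
# validator / constrained-decoding engine.
schema = json.loads(row["json_schema"])
print(schema["type"])                # object
print(sorted(schema["properties"]))  # ['name']
```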