Dataset Viewer
| Column | Type | Range / notes |
|---|---|---|
| _id | string | lengths 24–24 |
| id | string | lengths 5–121 |
| author | string | lengths 2–42 |
| cardData | string (nullable) | lengths 2–1.09M |
| disabled | bool | 2 classes |
| gated | null | no non-null values in this preview |
| lastModified | timestamp[ns] | 2021-02-05 16:03:35 – 2025-06-06 23:31:27 |
| likes | int64 | 0 – 7.86k |
| trendingScore | float64 | 0 – 70 |
| private | bool | 1 class |
| sha | string | lengths 40–40 |
| description | string (nullable) | lengths 0–6.67k |
| downloads | int64 | 0 – 3.21M |
| downloadsAllTime | int64 | 0 – 142M |
| tags | sequence | lengths 1–7.92k |
| createdAt | timestamp[ns] | 2022-03-02 23:29:22 – 2025-06-06 23:28:23 |
| citation | string (nullable) | lengths 0–10.7k |
| paperswithcode_id | string | 667 classes |

The preview rows below show one record per dataset, with fields in the column order above, separated by the | character.
683596e3bb729b5955ef0fac | yandex/yambda | yandex | {"license": "apache-2.0", "tags": ["recsys", "retrieval", "dataset"], "pretty_name": "Yambda-5B", "size_categories": ["1B<n<10B"], "configs": [{"config_name": "flat-50m", "data_files": ["flat/50m/multi_event.parquet"]}, {"config_name": "flat-500m", "data_files": ["flat/500m/multi_event.parquet"]}, {"config_name": "flat-5b", "data_files": ["flat/5b/multi_event.parquet"]}]} | false | null | 2025-06-06T13:13:37 | 138 | 70 | false | 7ec47287e3a002eab8f9f9b64efaf4bed52ce44f |
Yambda-5B – A Large-Scale Multi-modal Dataset for Ranking And Retrieval
Industrial-scale music recommendation dataset with organic/recommendation interactions and audio embeddings
Overview • Key Features • Statistics • Format • Benchmark • Download • FAQ
Overview
The Yambda-5B dataset is a large-scale open database comprising 4.79 billion user-item interactions collected from 1 million users and spanning 9.39 million tracks. The dataset includes… See the full description on the dataset page: https://huggingface.co/datasets/yandex/yambda. | 29,965 | 29,965 | [
"license:apache-2.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.22238",
"region:us",
"recsys",
"retrieval",
"dataset"
] | 2025-05-27T10:41:39 | null | null |
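The Yambda card above lists three Parquet configs (flat-50m, flat-500m, flat-5b). Below is a minimal loading sketch with the Hugging Face datasets library; the config names come from the cardData field, and the default train split for single-file Parquet configs is an assumption, so treat this as illustrative rather than the official loading recipe.

```python
from datasets import load_dataset

# Smallest config listed in the Yambda card; "flat-500m" and "flat-5b"
# follow the same pattern but are much larger downloads.
events = load_dataset("yandex/yambda", "flat-50m", split="train")

print(events)      # column names and row count of the interaction table
print(events[0])   # one user-item interaction record
```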
6820fb77b82e61bb50999662 | open-r1/Mixture-of-Thoughts | open-r1 | {"dataset_info": [{"config_name": "all", "features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "num_tokens", "dtype": "int64"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7062819826.825458, "num_examples": 349317}], "download_size": 3077653717, "dataset_size": 7062819826.825458}, {"config_name": "code", "features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "num_tokens", "dtype": "int64"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3872656251.3167396, "num_examples": 83070}], "download_size": 1613338604, "dataset_size": 3872656251.3167396}, {"config_name": "math", "features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "num_tokens", "dtype": "int64"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1599028646, "num_examples": 93733}], "download_size": 704448153, "dataset_size": 1599028646}, {"config_name": "science", "features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "num_tokens", "dtype": "int64"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1590765326, "num_examples": 172514}], "download_size": 674333812, "dataset_size": 1590765326}], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": "all/train-*"}]}, {"config_name": "code", "data_files": [{"split": "train", "path": "code/train-*"}]}, {"config_name": "math", "data_files": [{"split": "train", "path": "math/train-*"}]}, {"config_name": "science", "data_files": [{"split": "train", "path": "science/train-*"}]}], "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "Mixture of Thoughts", "size_categories": ["100K<n<1M"]} | false | null | 2025-05-26T15:25:56 | 197 | 64 | false | e55fa28006c0d0ec60fb3547520f775dd42d02cd |
Dataset summary
Mixture-of-Thoughts is a curated dataset of 350k verified reasoning traces distilled from DeepSeek-R1. The dataset spans tasks in mathematics, coding, and science, and is designed to teach language models to reason step-by-step. It was used in the Open R1 project to train OpenR1-Distill-7B, an SFT model that replicates the reasoning capabilities of deepseek-ai/DeepSeek-R1-Distill-Qwen-7B from the same base model.
To load the dataset, run:
from datasets import… See the full description on the dataset page: https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts. (A complete loading sketch follows this record.) | 26,663 | 26,710 | [
"task_categories:text-generation",
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.21318",
"arxiv:2505.00949",
"region:us"
] | 2025-05-11T19:33:11 | null | null |
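The loading snippet in the card description above is cut off at the import statement. A hedged completion, using the configs declared in the card ("all", "code", "math", "science"), each with a single train split:

```python
from datasets import load_dataset

# Load the math subset; swap in "all", "code", or "science" for the other configs.
math_subset = load_dataset("open-r1/Mixture-of-Thoughts", "math", split="train")

example = math_subset[0]
print(example["source"], example["num_tokens"])
for message in example["messages"]:          # chat-style turns with role + content
    print(message["role"], ":", message["content"][:80])
```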
63990f21cc50af73d29ecfa3 | fka/awesome-chatgpt-prompts | fka | {"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]} | false | null | 2025-01-06T00:02:53 | 7,857 | 47 | false | 68ba7694e23014788dcc8ab5afe613824f45a05c | Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0
| 20,728 | 176,305 | [
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
] | 2022-12-13T23:47:45 | null | null |
6837854ff36dbe5068b5d602 | open-thoughts/OpenThoughts3-1.2M | open-thoughts | {"dataset_info": {"features": [{"name": "difficulty", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 59763369750, "num_examples": 1200000}], "download_size": 28188197544, "dataset_size": 59763369750}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2025-06-05T16:29:40 | 39 | 39 | false | 27cae0b2bcd671919c3e84be2f8d3e5628777383 |
paper |
dataset |
model
[!NOTE]
We have released a paper for OpenThoughts! See our paper here.
OpenThoughts3-1.2M
Open-source state-of-the-art reasoning dataset with 1.2M rows.
OpenThoughts3-1.2M is the third iteration in our line of OpenThoughts datasets, building on our previous OpenThoughts-114k and OpenThoughts2-1M.
This time around, we scale even further and generate our dataset in a much more systematic way -- OpenThoughts3-1.2M is the result of a… See the full description on the dataset page: https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M. | 852 | 852 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2506.04178",
"region:us"
] | 2025-05-28T21:51:11 | null | null |
68127daac6370caf375aadd5 | Hcompany/WebClick | Hcompany | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation", "image-to-text"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "instruction", "dtype": "string"}, {"name": "bbox", "sequence": "float64"}, {"name": "bucket", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 334903619, "num_examples": 1639}], "download_size": 334903619, "dataset_size": 334903619}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "test*"}]}]} | false | null | 2025-06-04T07:54:53 | 36 | 36 | false | ed3e50d1c14209461ae58e2f8e236f458ded23bc |
WebClick: A Multimodal Localization Benchmark for Web-Navigation Models
We introduce WebClick, a high-quality benchmark dataset for evaluating navigation and localization capabilities of multimodal models and agents in Web environments. WebClick features 1,639 English-language web screenshots from over 100 websites paired with precisely annotated natural-language instructions and pixel-level click targets, in the same format as the widely-used screenspot benchmark… See the full description on the dataset page: https://huggingface.co/datasets/Hcompany/WebClick. | 2,076 | 2,082 | [
"task_categories:text-generation",
"task_categories:image-to-text",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2401.13919",
"arxiv:2506.02865",
"arxiv:2410.23218",
"arxiv:2502.13923",
"arxiv:2501.12326",
"region:us"
] | 2025-04-30T19:44:42 | null | null |
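A small sketch of how WebClick's single test split could be inspected. The field names (image, instruction, bbox, bucket) are taken from the card's feature list; saving the screenshot to disk is purely illustrative.

```python
from datasets import load_dataset

webclick = load_dataset("Hcompany/WebClick", split="test")

sample = webclick[0]
print(sample["bucket"])         # which sub-benchmark the example belongs to
print(sample["instruction"])    # natural-language click instruction
print(sample["bbox"])           # pixel-level target box as a list of floats
sample["image"].save("webclick_example.png")  # decoded PIL image of the page
```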
68328c9f85ebf2e6b1c31d12 | MiniMaxAI/SynLogic | MiniMaxAI | {"language": ["en", "zh"], "license": "mit", "tags": ["logical reasoning"], "configs": [{"config_name": "easy", "data_files": [{"split": "train", "path": "synlogic_easy/train.parquet"}, {"split": "validation", "path": "synlogic_easy/validation.parquet"}]}, {"config_name": "hard", "data_files": [{"split": "train", "path": "synlogic_hard/train.parquet"}, {"split": "validation", "path": "synlogic_hard/validation.parquet"}]}]} | false | null | 2025-06-05T06:35:33 | 78 | 30 | false | 9ff8dddedc7df8ddfc92b568a0c387b804b6736e |
SynLogic Dataset
SynLogic is a comprehensive synthetic logical reasoning dataset designed to enhance logical reasoning capabilities in Large Language Models (LLMs) through reinforcement learning with verifiable rewards.
Dataset Description
SynLogic contains 35 diverse logical reasoning tasks with automatic verification capabilities, making it ideal for reinforcement learning training.
Key Features
35 Task Types: Including Sudoku, Game of 24, Cipher, Arrow Maze… See the full description on the dataset page: https://huggingface.co/datasets/MiniMaxAI/SynLogic. | 906 | 906 | [
"language:en",
"language:zh",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.19641",
"region:us",
"logical reasoning"
] | 2025-05-25T03:21:03 | null | null |
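The SynLogic card declares two configs, easy and hard, each with train and validation Parquet splits. A minimal loading sketch, assuming nothing beyond what that config list states:

```python
from datasets import load_dataset

synlogic_easy = load_dataset("MiniMaxAI/SynLogic", "easy")   # or "hard"

print(synlogic_easy)                    # DatasetDict with "train" and "validation"
print(synlogic_easy["validation"][0])   # one verifiable logical-reasoning sample
```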
683fa649ee7dce90f5aafa46 | a-m-team/AM-DeepSeek-R1-0528-Distilled | a-m-team | {"task_categories": ["text-generation"], "language": ["en", "zh"], "tags": ["reasoning"], "size_categories": ["1M<n<10M"]} | false | null | 2025-06-05T01:47:09 | 30 | 30 | false | 212a1160146d6e9b965707002c46a5b834d5c59d |
Dataset Summary
This dataset is a high-quality reasoning corpus distilled from DeepSeek-R1-0528, an improved version of the DeepSeek-R1 large language model. Compared to its initial release, DeepSeek-R1-0528 demonstrates significant advances in reasoning, instruction following, and multi-turn dialogue. Motivated by these improvements, we collected and distilled a diverse set of 2.6 million queries across multiple domains, using DeepSeek-R1-0528 as the teacher.
A notable… See the full description on the dataset page: https://huggingface.co/datasets/a-m-team/AM-DeepSeek-R1-0528-Distilled. | 1,192 | 1,192 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"size_categories:1M<n<10M",
"region:us",
"reasoning"
] | 2025-06-04T01:50:01 | null | null |
67c92e867c6308c49ce2e98c | openbmb/Ultra-FineWeb | openbmb | {"configs": [{"config_name": "default", "data_files": [{"split": "en", "path": "data/ultrafineweb_en/*"}, {"split": "zh", "path": "data/ultrafineweb_zh/*"}], "features": [{"name": "content", "dtype": "string"}, {"name": "score", "dtype": "float"}, {"name": "source", "dtype": "string"}]}], "task_categories": ["text-generation"], "language": ["en", "zh"], "pretty_name": "Ultra-FineWeb", "size_categories": ["n>1T"], "license": "apache-2.0"} | false | null | 2025-06-06T07:35:23 | 147 | 20 | false | 57df35e37806c5a5cfa7d1ce93b4b0fa10bb34c9 |
Ultra-FineWeb
Technical Report
Introduction
Ultra-FineWeb is a large-scale, high-quality, and efficiently-filtered dataset. We apply the proposed efficient verification-based high-quality filtering pipeline to the FineWeb and Chinese FineWeb datasets (source data from Chinese FineWeb-edu-v2, which includes IndustryCorpus2, MiChao, WuDao, SkyPile, WanJuan, ChineseWebText, TeleChat, and CCI3), resulting in the creation of higher-quality Ultra-FineWeb-en… See the full description on the dataset page: https://huggingface.co/datasets/openbmb/Ultra-FineWeb. | 26,558 | 26,558 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2505.05427",
"arxiv:2412.04315",
"region:us"
] | 2025-03-06T05:11:34 | null | null |
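Given the corpus size, streaming is the natural way to sample Ultra-FineWeb. A sketch using the en split and the content/score/source columns declared in the card (the zh split works the same way):

```python
from datasets import load_dataset

# Stream the English split instead of downloading the full corpus.
ultra_en = load_dataset("openbmb/Ultra-FineWeb", split="en", streaming=True)

for record in ultra_en.take(3):
    # Each record carries the raw text plus its filter score and source tag.
    print(record["source"], record["score"], record["content"][:100])
```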
6835ce29eac05bd2e0fc2803 | microsoft/mediflow | microsoft | {"license": "cdla-permissive-2.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["clinical", "medical"], "size_categories": ["1M<n<10M"]} | false | null | 2025-05-30T19:26:32 | 25 | 20 | false | 2464e1fb01adce9466bdaeaf670674862bca6508 |
MediFlow
A large-scale synthetic instruction dataset of 2.5M rows (~700k unique instructions) for clinical natural language processing covering 14 task types and 98 fine-grained input clinical documents.
t-SNE 2D Plot of MediFlow Embeddings by Task Types
Dataset Splits
mediflow: 2.5M instruction data for SFT alignment.
mediflow_dpo: ~135k top-quality instructions with GPT-4o generated rejected_output for DPO alignment.
Main Columns
instruction: … See the full description on the dataset page: https://huggingface.co/datasets/microsoft/mediflow. | 2,264 | 2,264 | [
"task_categories:text-generation",
"language:en",
"license:cdla-permissive-2.0",
"size_categories:1M<n<10M",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2505.10717",
"region:us",
"clinical",
"medical"
] | 2025-05-27T14:37:29 | null | null |
683395e32973b9f8f52c0956 | cognitivecomputations/china-refusals | cognitivecomputations | {"license": "apache-2.0"} | false | null | 2025-05-25T22:43:32 | 41 | 17 | false | cc765cf59f276ba30dc2a8a5620e3dca0d6b5929 |
China Refusals
Eric Hartford
This is a set of prompts that are refused by Chinese models, and answered freely by non-Chinese models.
Some potential use cases:
Training a model to comply with Chinese law
Activation Steering / Abliteration
Evaluation of model alignment
etc.
Enjoy.
Thanks to Nous Research for the Minos-v1 model! https://huggingface.co/NousResearch/Minos-v1
| 728 | 728 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | 2025-05-25T22:12:51 | null | null |
67335bb8f014ee49558ef3fe | PleIAs/common_corpus | PleIAs | {"language": ["en", "fr", "de", "it", "es", "la", "nl", "pl"]} | false | null | 2025-06-04T12:52:12 | 276 | 16 | false | 5f0d4bc3e8eff087256f213f9529bc15fd1539d1 |
Common Corpus
Full data paper
Common Corpus is the largest open and permissible licensed text dataset, comprising 2 trillion tokens (1,998,647,168,282 tokens). It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more. Common Corpus has been created by Pleias in association with several partners and contributed in-kind to Current AI initiative.
Common Corpus differs from existing open datasets in that it is: … See the full description on the dataset page: https://huggingface.co/datasets/PleIAs/common_corpus. | 222,982 | 442,790 | [
"language:en",
"language:fr",
"language:de",
"language:it",
"language:es",
"language:la",
"language:nl",
"language:pl",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2506.01732",
"arxiv:2410.22587",
"region:us"
] | 2024-11-12T13:44:24 | null | null |
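At roughly two trillion tokens, Common Corpus is also best sampled in streaming mode. A hedged sketch assuming the default train split; the exact column names are not listed in this preview, so the example only inspects the keys of a streamed record:

```python
from datasets import load_dataset

corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

first_record = next(iter(corpus))
print(sorted(first_record.keys()))   # discover the available fields before using them
```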
682ecc8c5a021c03ac3ecf64 | Jiahao004/DeepTheorem | Jiahao004 | {"license": "mit", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "ori_question", "dtype": "string"}, {"name": "ori_solution", "dtype": "string"}, {"name": "domain", "sequence": "string"}, {"name": "difficulty", "dtype": "float64"}, {"name": "rationale", "dtype": "string"}, {"name": "informal_theorem", "dtype": "string"}, {"name": "informal_theorem_qa", "dtype": "string"}, {"name": "proof", "dtype": "string"}, {"name": "truth_value", "dtype": "bool"}, {"name": "pos", "struct": [{"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "truth_value", "dtype": "bool"}]}, {"name": "neg", "struct": [{"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "truth_value", "dtype": "bool"}]}], "splits": [{"name": "train", "num_bytes": 1146705612, "num_examples": 120754}], "download_size": 554423240, "dataset_size": 1146705612}} | false | null | 2025-06-06T08:59:36 | 16 | 14 | false | ddac0e09837e227c4e89b35dbe697e98736e734e |
DeepTheorem: Advancing LLM Reasoning for Theorem Proving Through Natural Language and Reinforcement Learning
Welcome to the GitHub repository for DeepTheorem, a comprehensive framework for enhancing large language model (LLM) mathematical reasoning through informal, natural language-based theorem proving. This project introduces a novel approach to automated theorem proving (ATP) by leveraging the informal reasoning strengths of LLMs, moving beyond traditional formal proof… See the full description on the dataset page: https://huggingface.co/datasets/Jiahao004/DeepTheorem. | 1,186 | 1,186 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-05-22T07:04:44 | null | null |
6532270e829e1dc2f293d6b8 | gaia-benchmark/GAIA | gaia-benchmark | {"language": ["en"], "pretty_name": "General AI Assistants Benchmark", "extra_gated_prompt": "To avoid contamination and data leakage, you agree to not reshare this dataset outside of a gated or private repository on the HF hub.", "extra_gated_fields": {"I agree to not reshare the GAIA submissions set according to the above conditions": "checkbox"}} | false | null | 2025-02-13T08:36:12 | 361 | 13 | false | 897f2dfbb5c952b5c3c1509e648381f9c7b70316 |
GAIA dataset
GAIA is a benchmark which aims at evaluating next-generation LLMs (LLMs with augmented capabilities due to added tooling, efficient prompting, access to search, etc).
We added gating to prevent bots from scraping the dataset. Please do not reshare the validation or test set in a crawlable format.
Data and leaderboard
GAIA is made of more than 450 non-trivial questions with an unambiguous answer, requiring different levels of tooling and autonomy to solve. It… See the full description on the dataset page: https://huggingface.co/datasets/gaia-benchmark/GAIA. | 12,369 | 63,718 | [
"language:en",
"arxiv:2311.12983",
"region:us"
] | 2023-10-20T07:06:54 | null |
|
67ea026a0e7c42eb4b4da945 | JokerJan/MMR-VBench | JokerJan | {"dataset_info": {"features": [{"name": "video", "dtype": "string"}, {"name": "videoType", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "correctAnswer", "dtype": "string"}, {"name": "abilityType_L2", "dtype": "string"}, {"name": "abilityType_L3", "dtype": "string"}, {"name": "question_idx", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1135911, "num_examples": 1257}], "download_size": 586803, "dataset_size": 1135911}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "task_categories": ["video-text-to-text"]} | false | null | 2025-06-05T02:54:51 | 14 | 13 | false | fded5eca0a342b7b50cd74218666aaa4af939cdd |
MMR-V: Can MLLMs Think with Video? A Benchmark for Multimodal Deep Reasoning in Videos
Paper |
Code |
Homepage
MMR-V Data Card ("Think with Video")
The sequential structure of videos poses a challenge to the ability of multimodal large language models (MLLMs) to locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match frames… See the full description on the dataset page: https://huggingface.co/datasets/JokerJan/MMR-VBench. | 1,007 | 1,036 | [
"task_categories:video-text-to-text",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.04141",
"region:us"
] | 2025-03-31T02:48:10 | null | null |
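A minimal sketch for inspecting one MMR-V benchmark item. The column names come from the feature list in the card; videos are referenced by file name rather than embedded in the rows.

```python
from datasets import load_dataset

mmrv = load_dataset("JokerJan/MMR-VBench", split="test")

item = mmrv[0]
print(item["video"], "|", item["videoType"])   # video file reference and its category
print(item["question"])
print(item["options"])                         # multiple-choice candidates
print(item["correctAnswer"])                   # ground-truth option
```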
625552d2b339bb03abe3432d | openai/gsm8k | openai | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]} | false | null | 2024-01-04T12:05:15 | 755 | 12 | false | e53f048856ff4f594e959d75785d2c2d37b678ee |
Dataset Card for GSM8K
Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve.
Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to reach the… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k. | 535,715 | 5,428,593 | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2110.14168",
"region:us",
"math-word-problems"
] | 2022-04-12T10:22:10 | null | gsm8k |
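A short loading sketch using the two configs listed in the GSM8K card (main and socratic); the final lines rely on the usual GSM8K convention that each reference answer ends with a "#### <number>" marker.

```python
from datasets import load_dataset

gsm8k = load_dataset("openai/gsm8k", "main")   # "socratic" adds guiding sub-questions

sample = gsm8k["test"][0]
print(sample["question"])
print(sample["answer"])

# Extract the final numeric answer after the "####" marker for exact-match scoring.
final_answer = sample["answer"].split("####")[-1].strip()
print("final answer:", final_answer)
```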
682f3c7f855225dd954bf66b | snorkelai/Multi-Turn-Insurance-Underwriting | snorkelai | {"language": ["en"], "size_categories": ["n<1K"], "license": "apache-2.0", "tags": ["legal"]} | false | null | 2025-05-29T14:58:57 | 19 | 12 | false | 03b973c183f43a51e050a555e9365034fe381543 |
Dataset Card for Multi-Turn-Insurance-Underwriting
Dataset Summary
This dataset includes traces and associated metadata from multi-turn interactions between a commercial underwriter and an AI assistant. We built the system in langgraph with model context protocol and ReAct agents. In each sample, the underwriter has a specific task to solve related to a recent application for insurance by a small business. We created a diverse sample dataset covering 6 distinct types of… See the full description on the dataset page: https://huggingface.co/datasets/snorkelai/Multi-Turn-Insurance-Underwriting. | 1,477 | 1,477 | [
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"legal"
] | 2025-05-22T15:02:23 | null | null |
683a35c76d1a968a658e4c15 | allenai/reward-bench-2 | allenai | {"language": ["en"], "license": "odc-by", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "chosen", "sequence": "string"}, {"name": "rejected", "sequence": "string"}, {"name": "num_correct", "dtype": "int64"}, {"name": "num_incorrect", "dtype": "int64"}, {"name": "total_completions", "dtype": "int64"}, {"name": "models", "sequence": "string"}, {"name": "subset", "dtype": "string"}, {"name": "additional_metadata", "struct": [{"name": "category", "dtype": "string"}, {"name": "correct", "dtype": "string"}, {"name": "index", "dtype": "float64"}, {"name": "instruction_id_list", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "method", "dtype": "string"}, {"name": "models", "sequence": "string"}, {"name": "prompt_norm", "dtype": "string"}, {"name": "subcategory", "dtype": "string"}, {"name": "valid", "dtype": "float64"}]}], "splits": [{"name": "test", "num_bytes": 13772499, "num_examples": 1865}], "download_size": 6973189, "dataset_size": 13772499}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | false | null | 2025-06-04T08:53:38 | 12 | 12 | false | 7ff08853b0d5686e79b13fda8677024f566a104a | Code | Leaderboard | Results | Paper
RewardBench 2 Evaluation Dataset Card
The RewardBench 2 evaluation dataset is the new version of RewardBench that is based on unseen human data and designed to be substantially more difficult! RewardBench 2 evaluates capabilities of reward models over the following categories:
Factuality (NEW!): Tests the ability of RMs to detect hallucinations and other basic errors in completions.
Precise Instruction Following (NEW!): Tests the ability of RMs… See the full description on the dataset page: https://huggingface.co/datasets/allenai/reward-bench-2. | 511 | 511 | [
"task_categories:question-answering",
"language:en",
"license:odc-by",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.01937",
"region:us"
] | 2025-05-30T22:48:39 | null | null |
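A sketch of how a single RewardBench 2 evaluation row could be examined. Per the card's feature schema the chosen and rejected completions are lists of strings, so a reward model would be asked to rank every chosen completion above every rejected one.

```python
from datasets import load_dataset

rb2 = load_dataset("allenai/reward-bench-2", split="test")

row = rb2[0]
print(row["subset"])    # evaluation category, e.g. factuality or instruction following
print(len(row["chosen"]), "chosen vs", len(row["rejected"]), "rejected completions")
print(row["prompt"][:200])
```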
6842c81fe9598a4b0d5de03e | DeepMount00/italian_conversations | DeepMount00 | {"language": ["it"]} | false | null | 2025-06-06T18:33:04 | 11 | 11 | false | 06044577023f2c13978609823591d5e7b3e770da | Dataset Overview
Name: Dataset Conversazioni Italiane Strutturate
Version: 2.0
Language: Italian
License: [Creative Commons Attribution 4.0 International License (CC BY 4.0)]
Intended Use
This dataset is designed to train language models to sustain in-depth, structured conversations in Italian, with a focus on complex argumentation, critical analysis, and multi-turn discussions on topics of social, political, cultural, and economic relevance. It includes… See the full description on the dataset page: https://huggingface.co/datasets/DeepMount00/italian_conversations. | 0 | 0 | [
"language:it",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-06-06T10:51:11 | null | null |
NEW Changes Feb 27th

- Added new fields on the `models` split: `downloadsAllTime`, `safetensors`, `gguf`
- Added new field on the `datasets` split: `downloadsAllTime`
- Added new split: `papers`, which is all of the Daily Papers
- Updated Daily
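A hedged sketch of how the splits mentioned above could be loaded; REPO_ID is a placeholder for this dataset's repository id, which is not spelled out in this preview.

```python
from datasets import load_dataset

# Placeholder: substitute the actual "<namespace>/<name>" of this hub-stats dataset.
REPO_ID = "<namespace>/<hub-stats-dataset>"

# The card mentions three splits: "models", "datasets", and the new "papers" split.
hub_datasets = load_dataset(REPO_ID, split="datasets")

row = hub_datasets[0]
# downloadsAllTime was added to both the models and datasets splits.
print(row["id"], row["downloads"], row["downloadsAllTime"])
```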
Downloads last month: 2,621