| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
---|---|---|---|---|---|---|---|---|---|
TAUR-dev/qwen2.5_1.5B__2d_retries_eval_fixed__EXTRA | TAUR-dev | 2025-06-06T01:17:05Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T21:51:42Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: model_responses
sequence: string
- name: is_model_response_correct__correctness_reasoning
sequence: string
- name: is_model_response_correct__final_answer
sequence: string
- name: is_model_response_correct__correctness_prompt
sequence: string
- name: is_model_response_correct
sequence: bool
splits:
- name: train
num_bytes: 11112627
num_examples: 1000
download_size: 3702681
dataset_size: 11112627
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
trentmkelly/lots-of-essays | trentmkelly | 2025-06-06T01:12:40Z | 55 | 0 | [
"language:en",
"size_categories:100K<n<1M",
"region:us"
] | [] | 2025-05-30T21:20:59Z | null | ---
language:
- en
pretty_name: Lots of essays
size_categories:
- 100K<n<1M
---
# Essay Dataset Collection
This repository contains a unified collection of essay datasets combined into a single JSONL format for research and analysis purposes.
## Overview
The combined dataset aggregates essays from multiple high-quality sources, providing a diverse corpus of student writing across different contexts, grade levels, and assessment frameworks. All data has been standardized into a consistent format while preserving original metadata.
## Dataset Statistics
| Source Dataset | Row Count | Description |
|----------------|-----------|-------------|
| ASAP2 | 24,728 | Automated Student Assessment Prize dataset with scored essays |
| Feedback Prize ELL | 3,911 | English Language Learning essays with proficiency scores |
| PERSUADE | 173,266 | Discourse-annotated persuasive essays with effectiveness ratings |
| IvyPanda Essays | 128,293 | Academic essays from IvyPanda educational platform |
| **Total** | **330,198** | **Combined unified dataset** |
## File Structure
- `combined_essays.jsonl` - Main unified dataset file
- `combine_datasets.py` - Processing script for data combination
- Individual source files:
- `ASAP2_train_sourcetexts.csv`
- `train.csv`, `test.csv`, `sample_submission.csv` (Feedback Prize ELL)
- `persuade_train_srctexts.csv`
- `ivypanda_essays_train.csv`
## Data Format
Each line in `combined_essays.jsonl` contains a JSON object with the following structure:
```json
{
"text": "Essay content text...",
"source": "Dataset source name",
"extra_data": {
"original_field_1": "value",
"original_field_2": "value"
}
}
```
### Fields
- **text**: The essay content/text
- **source**: Source dataset name for attribution
- **extra_data**: All original metadata fields preserved from source datasets
## Source Dataset Details
### ASAP2 (Automated Student Assessment Prize)
- **Records**: 24,728
- **Content**: Student essays with holistic scores (1-4 scale)
- **Features**: Demographics, assignment prompts, source texts
- **Use Case**: Automated essay scoring research
### Feedback Prize ELL (English Language Learning)
- **Records**: 3,911
- **Content**: Student essays with language proficiency dimensions
- **Features**: Cohesion, syntax, vocabulary, phraseology, grammar, conventions scores
- **Use Case**: English language proficiency assessment
### PERSUADE
- **Records**: 173,266
- **Content**: Discourse-level annotated persuasive essays
- **Features**: Discourse types (Lead, Claim, Evidence), effectiveness ratings, hierarchical structure
- **Use Case**: Argumentation quality analysis, discourse analysis
### IvyPanda Essays
- **Records**: 128,293
- **Content**: Academic essays from educational platform
- **Features**: Original source attribution
- **Use Case**: General essay analysis, academic writing research
## Attribution
Please cite the original dataset creators when using this combined corpus:
- **ASAP2**: Automated Student Assessment Prize dataset
- **Feedback Prize ELL**: Kaggle Feedback Prize - English Language Learning competition
- **PERSUADE**: The Learning Agency Lab - PERSUADE corpus
- **IvyPanda Essays**: qwedsacf/ivypanda-essays (Hugging Face)
## Usage
Load the dataset in Python:
```python
import json
essays = []
with open('combined_essays.jsonl', 'r', encoding='utf-8') as f:
for line in f:
essays.append(json.loads(line))
print(f"Loaded {len(essays)} essays")
```
Filter by source:
```python
asap2_essays = [essay for essay in essays if essay['source'] == 'ASAP2']
```
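To sanity-check the per-source row counts in the statistics table above, you can tally the `source` field. This sketch uses a small hypothetical list of records; with the real data, substitute the `essays` list produced by the loading snippet:

```python
from collections import Counter

# Hypothetical records mirroring the JSONL structure; replace with the loaded dataset.
essays = [
    {"text": "...", "source": "ASAP2"},
    {"text": "...", "source": "PERSUADE"},
    {"text": "...", "source": "ASAP2"},
]

counts = Counter(essay["source"] for essay in essays)
for source, count in counts.most_common():
    print(f"{source}: {count}")
```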
## License
### Source Dataset Licenses
- **ASAP2**: Licensed under CC BY (Creative Commons Attribution)
- **Feedback Prize ELL**: Subject to Kaggle competition rules and terms
- **PERSUADE**: Subject to Kaggle competition rules and terms (uncertain)
- **IvyPanda Essays**: License terms unknown - please verify with original source
### Combined Dataset License
This combined dataset is licensed under **CC BY (Creative Commons Attribution)** for lack of a more comprehensive licensing option.
**Important Note**: This license may change if any of the original copyright holders request modifications or object to the current licensing terms. Users should be aware that licensing terms are subject to change based on the requirements of the source dataset providers.
Please respect the original licenses and terms of use for each source dataset. This compilation is provided for research and educational purposes. |
lyl472324464/pick_place | lyl472324464 | 2025-06-06T01:12:08Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-06T01:05:53Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "aloha",
"total_episodes": 3,
"total_frames": 2380,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"action": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"observation.images.cam_high": {
"dtype": "image",
"shape": [
3,
256,
256
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_low": {
"dtype": "image",
"shape": [
3,
256,
256
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_left_wrist": {
"dtype": "image",
"shape": [
3,
256,
256
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_right_wrist": {
"dtype": "image",
"shape": [
3,
256,
256
],
"names": [
"channels",
"height",
"width"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
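The `data_path` template in `meta/info.json` determines where each episode's frame table lives. A minimal sketch of resolving an episode path from that template (assuming the repository has been downloaded locally; the parquet file itself can then be read with e.g. pandas):

```python
# Resolve an episode's parquet path from the template in meta/info.json.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
episode_file = data_path.format(episode_chunk=0, episode_index=0)
print(episode_file)  # data/chunk-000/episode_000000.parquet

# With the repo cloned locally, the frame table can then be loaded with e.g.:
# import pandas as pd; df = pd.read_parquet(episode_file)
```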
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
hamishivi/OpenThoughts2-1M | hamishivi | 2025-06-06T01:09:55Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T23:51:10Z | null | ---
dataset_info:
features:
- name: difficulty
dtype: int64
- name: source
dtype: string
- name: domain
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 59769369575
num_examples: 1200000
download_size: 28182281878
dataset_size: 59769369575
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
datonic/world_development_indicators | datonic | 2025-06-06T01:04:03Z | 58 | 0 | [
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-07T18:08:09Z | null |
---
license: mit
---
# world_development_indicators
World Development Indicators (WDI) is the World Bank's premier compilation of cross-country comparable data on development.
Bulk data download is available at https://datatopics.worldbank.org/world-development-indicators/
This dataset is produced and published automatically by [Datadex](https://github.com/davidgasquez/datadex),
a fully open-source, serverless, and local-first Data Platform that improves how communities collaborate on Open Data.
## Dataset Details
- **Number of rows:** 8,883,048
- **Number of columns:** 6
|
sincostangerines/stack_cubes_30 | sincostangerines | 2025-06-06T00:59:14Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-05T22:43:29Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 9,
"total_frames": 6259,
"total_tasks": 1,
"total_videos": 18,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:9"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Ibisbill/hash_deduplicated_reasoning_data_english | Ibisbill | 2025-06-06T00:54:55Z | 0 | 0 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"english",
"text-generation",
"instruction-following",
"sft",
"filtered"
] | [
"text-generation"
] | 2025-06-06T00:54:34Z | null | ---
language:
- zh
- en
tags:
- english
- text-generation
- instruction-following
- sft
- filtered
size_categories:
- 10K<n<100K
task_categories:
- text-generation
dataset_info:
features:
- name: question
dtype: string
- name: quality
dtype: string
- name: difficulty
dtype: string
- name: topic
dtype: string
- name: validity
dtype: string
splits:
- name: train
num_examples: 72710
configs:
- config_name: default
data_files:
- split: train
path: hash_deduplicated_reasoning_data_english.jsonl
---
# hash_deduplicated_reasoning_data_english
## Dataset Description
Hash-deduplicated reasoning data filtered from OpenThoughts2-1M; 72,710 examples in total.
## File Structure
- `hash_deduplicated_reasoning_data_english.jsonl`: Main data file (JSONL format)
## Data Format
The dataset contains the following fields:
- **question**: str
- **quality**: int
- **difficulty**: int
- **topic**: str
- **validity**: int
## Usage
### Method 1: Using the datasets library
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Ibisbill/hash_deduplicated_reasoning_data_english")
print(dataset)
```
### Method 2: Download the JSONL file directly
```python
from huggingface_hub import hf_hub_download
import json
# Download the file
file_path = hf_hub_download(
repo_id="Ibisbill/hash_deduplicated_reasoning_data_english",
filename="hash_deduplicated_reasoning_data_english.jsonl",
repo_type="dataset"
)
# Read the JSONL file
data = []
with open(file_path, 'r', encoding='utf-8') as f:
for line in f:
data.append(json.loads(line))
print(f"Loaded {len(data)} records")
```
## Example Data
```json
{
"question": "Generate an executable Python function generated from the given prompt. The function should take stdin as input and print the output. Simply call the function after the definition.You went to the store, selling $n$ types of chocolates. There are $a_i$ chocolates of type $i$ in stock.\n\nYou have unlimited amount of cash (so you are not restricted by any prices) and want to buy as many chocolates as possible. However if you buy $x_i$ chocolates of type $i$ (clearly, $0 \\le x_i \\le a_i$), then for all $1 \\le j < i$ at least one of the following must hold: $x_j = 0$ (you bought zero chocolates of type $j$) $x_j < x_i$ (you bought less chocolates of type $j$ than of type $i$) \n\nFor example, the array $x = [0, 0, 1, 2, 10]$ satisfies the requirement above (assuming that all $a_i \\ge x_i$), while arrays $x = [0, 1, 0]$, $x = [5, 5]$ and $x = [3, 2]$ don't.\n\nCalculate the maximum number of chocolates you can buy.\n\n\n-----Input-----\n\nThe first line contains an integer $n$ ($1 \\le n \\le 2 \\cdot 10^5$), denoting the number of types of chocolate.\n\nThe next line contains $n$ integers $a_i$ ($1 \\le a_i \\le 10^9$), denoting the number of chocolates of each type.\n\n\n-----Output-----\n\nPrint the maximum number of chocolates you can buy.\n\n\n-----Examples-----\nInput\n5\n1 2 1 3 6\n\nOutput\n10\nInput\n5\n3 2 5 4 10\n\nOutput\n20\nInput\n4\n1 1 1 1\n\nOutput\n1\n\n\n-----Note-----\n\nIn the first example, it is optimal to buy: $0 + 0 + 1 + 3 + 6$ chocolates.\n\nIn the second example, it is optimal to buy: $1 + 2 + 3 + 4 + 10$ chocolates.\n\nIn the third example, it is optimal to buy: $0 + 0 + 0 + 1$ chocolates.\n",
"quality": 9,
"difficulty": 8,
"topic": "Reasoning",
"validity": 1
}
```
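Once loaded, records can be filtered on the numeric metadata fields. A sketch on a small hypothetical sample (the example record above shows `quality`, `difficulty`, and `validity` as integers), keeping only valid, high-difficulty examples:

```python
# Hypothetical sample records mirroring the fields above; replace with the loaded data.
records = [
    {"question": "q1", "quality": 9, "difficulty": 8, "topic": "Reasoning", "validity": 1},
    {"question": "q2", "quality": 7, "difficulty": 3, "topic": "Reasoning", "validity": 1},
    {"question": "q3", "quality": 8, "difficulty": 9, "topic": "Math", "validity": 0},
]

# Keep valid examples at or above a difficulty threshold.
hard = [r for r in records if r["validity"] == 1 and r["difficulty"] >= 8]
print(len(hard))  # 1
```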
## Dataset Statistics
- Total examples: 72,710
- Data format: JSONL
- File size: approx. 72 MB
|
phucminh/test | phucminh | 2025-06-06T00:52:21Z | 0 | 0 | [
"language:vi",
"license:apache-2.0",
"region:us"
] | [] | 2025-06-06T00:51:03Z | null | ---
license: apache-2.0
language:
- vi
--- |
thailevann/Government_services_QA_v6 | thailevann | 2025-06-06T00:36:42Z | 376 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-31T03:08:21Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: label
dtype: float64
- name: relevant
dtype: string
- name: reason
dtype: string
- name: reason_classification
dtype: string
splits:
- name: train
num_bytes: 85587309
num_examples: 27466
download_size: 20940326
dataset_size: 85587309
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
randall-lab/mpi3d-toy | randall-lab | 2025-06-06T00:24:04Z | 0 | 0 | [
"license:cc-by-4.0",
"region:us"
] | [] | 2025-06-05T21:17:59Z | null | ---
license: cc-by-4.0
---
# Dataset Card for MPI3D-toy
## Dataset Description
The **MPI3D-toy dataset** is a **synthetically rendered image dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**. It is part of the broader MPI3D dataset suite, which also includes [realistic simulated](https://huggingface.co/datasets/randall-lab/mpi3d-realistic), [real-world](https://huggingface.co/datasets/randall-lab/mpi3d-real), and [complex real-world](https://huggingface.co/datasets/randall-lab/mpi3d-complex) variants.
The **toy version** was generated using a **simplified computer graphics renderer** (Quicksilver hardware renderer in Autodesk 3ds Max), based on CAD models of a physical robotic setup. This allows researchers to systematically study **sim-to-real transfer** and the effect of simulation fidelity.
All images depict **physical 3D objects** that would be manipulated by a robotic platform, under **controlled variations of 7 known factors**:
- Object color (6 values)
- Object shape (6 values)
- Object size (2 values)
- Camera height (3 values)
- Background color (3 values)
- Robotic arm horizontal axis (40 values)
- Robotic arm vertical axis (40 values)
The dataset contains **1,036,800 images** at a resolution of **64×64 pixels**, representing the full Cartesian product of these factors. All factors are **identical** to those used in the realistic and real-world versions of MPI3D, enabling direct comparisons between different levels of simulation fidelity.

## Dataset Source
- **Homepage**: [https://github.com/rr-learning/disentanglement_dataset](https://github.com/rr-learning/disentanglement_dataset)
- **License**: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)
- **Paper**: Muhammad Waleed Gondal et al. _On the Transfer of Inductive Bias from Simulation to the Real World: A New Disentanglement Dataset_. NeurIPS 2019.
## Dataset Structure
|Factors|Possible Values|
|---|---|
|object_color|white=0, green=1, red=2, blue=3, brown=4, olive=5|
|object_shape|cone=0, cube=1, cylinder=2, hexagonal=3, pyramid=4, sphere=5|
|object_size|small=0, large=1|
|camera_height|top=0, center=1, bottom=2|
|background_color|purple=0, sea green=1, salmon=2|
|horizontal_axis (DOF1)|0,...,39|
|vertical_axis (DOF2)|0,...,39|
Each image corresponds to a unique combination of these 7 factors. The images are stored in a **row-major order** (fastest-changing factor is `vertical_axis`, slowest-changing factor is `object_color`).
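Because the images cover the full Cartesian product of factors in row-major order, the flat index of any factor combination can be computed directly. A sketch based on the factor sizes in the table above (`object_color` varies slowest, `vertical_axis` fastest):

```python
# Factor sizes, slowest-varying first (see the factor table above).
FACTOR_SIZES = [6, 6, 2, 3, 3, 40, 40]

def factors_to_index(color, shape, size, height, background, dof1, dof2):
    """Flat row-major index of a factor combination."""
    index = 0
    for value, n in zip((color, shape, size, height, background, dof1, dof2), FACTOR_SIZES):
        index = index * n + value
    return index

print(factors_to_index(0, 0, 0, 0, 0, 0, 0))    # 0 (first image)
print(factors_to_index(5, 5, 1, 2, 2, 39, 39))  # 1036799 (last of 1,036,800 images)
```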
### Why no train/test split?
The MPI3D-toy dataset does not provide an official train/test split. It is designed for **representation learning research**, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a complete Cartesian product of all factor combinations, models typically require access to the full dataset to explore factor-wise variations.
## Example Usage
Below is a quick example of how to load this dataset via the Hugging Face Datasets library:
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("randall-lab/mpi3d-toy", split="train", trust_remote_code=True)
# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"] # [object_color: 0, object_shape: 0, object_size: 0, camera_height: 0, background_color: 0, horizontal_axis: 0, vertical_axis: 0]
color = example["color"] # 0
shape = example["shape"] # 0
size = example["size"] # 0
height = example["height"] # 0
background = example["background"] # 0
dof1 = example["dof1"] # 0
dof2 = example["dof2"] # 0
image.show() # Display the image
print(f"Label (factors): {label}")
```
If you are using Colab, update the `datasets` library first to avoid errors:
```
pip install -U datasets
```
## Citation
```
@article{gondal2019transfer,
title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
journal={Advances in Neural Information Processing Systems},
volume={32},
year={2019}
}
``` |
srushtisingh/MNLP_final_general_dataset | srushtisingh | 2025-06-06T00:20:46Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-06T00:20:38Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 50337935.2
num_examples: 19200
- name: validation
num_bytes: 6292241.9
num_examples: 2400
- name: test
num_bytes: 6292241.9
num_examples: 2400
download_size: 35810691
dataset_size: 62922419.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
zijian2022/eval_itrgg2 | zijian2022 | 2025-06-06T00:17:23Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-06T00:17:19Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 6947,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
MushroomGecko/BIBLE | MushroomGecko | 2025-06-06T00:12:10Z | 37 | 0 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"Bible",
"God",
"Jesus",
"Christ",
"Scripture",
"Christian",
"faith",
"theology",
"benchmark",
"question-answering",
"multiple-choice",
"evaluation",
"religion",
"llm-eval"
] | [
"question-answering"
] | 2025-04-02T01:34:28Z | null | ---
license: cc-by-4.0
language:
- en
pretty_name: BIBLE
tags:
- Bible
- God
- Jesus
- Christ
- Scripture
- Christian
- faith
- theology
- benchmark
- question-answering
- multiple-choice
- evaluation
- religion
- llm-eval
size_categories:
- 10K<n<100K
task_categories:
- question-answering
---
# BIBLE: Biblically Informed Bot Learning Evaluation
**BIBLE** (Biblically Informed Bot Learning Evaluation) is a comprehensive **benchmark dataset** designed to **evaluate** AI models on their understanding of the Holy Bible. It covers all 66 books of Scripture and includes additional thematic categories for *People of the Bible*, *Places in the Bible*, and *Measurements in the Bible*.
> ⚠️ This dataset is **not intended for training**. It is strictly for **evaluation and benchmarking** of models on Biblical knowledge and reasoning.
---
## ⚠️ Accuracy Disclaimer
While the questions in this dataset are sourced directly from trusted materials, a significant portion of the content was generated using **NotebookLM** based on the referenced source documents. Many of these generated questions and answers were not manually reviewed for theological or factual accuracy.
As such, **the accuracy, phrasing, and interpretative correctness of some questions and answers cannot be guaranteed**. Users are encouraged to independently verify any content used in formal evaluations, especially in faith-sensitive or doctrinally rigorous contexts.
---
## 📚 Dataset Overview
- ✅ Questions from every book of the Bible (Genesis → Revelation)
- ✅ Additional themed categories:
- **People of the Bible**
- **Places in the Bible**
- **Measurements in the Bible**
- ✅ Structured format with:
- Multiple-choice options (A–D)
- A single correct answer
- Source attribution and extraction method
- ✅ Suitable for:
- Benchmarking model comprehension of Scripture
- Evaluating closed-book Biblical knowledge in LLMs
- Faith-aligned QA assessments
---
## 📊 Model Accuracy on BIBLE Benchmark
The following table summarizes the performance of various quantized models (**Q4_K_M**) evaluated on the BIBLE benchmark. Accuracy reflects the percentage of correct answers across the full benchmark dataset.
| Model Name | Total Accuracy |
|--------------------------------|----------------|
| Gemma 2 2b Instruct | 40.21% |
| Gemma 3 1b Instruct | 27.96% |
| Gemma 3 4b Instruct | 41.52% |
| Granite 3.1 Dense 2b Instruct | 39.99% |
| Granite 3.1 MoE 1b Instruct | 22.21% |
| Granite 3.2 2b Instruct | 40.19% |
| InternLM2.5 1.8b | 28.74% |
| Llama 3.2 1b Instruct | 24.32% |
| Llama 3.2 3b Instruct | 41.73% |
| Phi4-mini Instruct | 40.78% |
| Qwen2.5 1.5b Instruct | 41.03% |
| Qwen2.5 3b Instruct | 47.94% |
| Qwen3 0.6b | 24.18% |
| Qwen3 1.7b | 36.97% |
| Qwen3 4b | 50.43% |
| SmolLM2 1.7b Instruct | 30.38% |
**Note: Qwen3 results are with thinking disabled**
---
## 📁 Dataset Structure
Each example in the dataset is a dictionary with the following fields:
- `question`: A Bible-based question
- `choices`: A list of four possible answers (A–D)
- `answer`: The correct choice, as a letter ("A", "B", "C", or "D")
- `category`: The book of the Bible or theme the question belongs to
- `source`: A URL pointing to the original source material
- `qa_extraction`: Notes on how the question-answer pair was derived (e.g. directly from the source or generated via NotebookLM given the source)
### 🔍 Example
```json
{
"question": "What two things did God create in the beginning (Gen. 1:1)?",
"choices": [
"The light and the darkness",
"The heavens and the earth",
"The land and the sea",
"The world and the stars"
],
"answer": "B",
"category": "Genesis",
"source": "https://biblicalelearning.org/wp-content/uploads/2021/05/01_GenesisMCQuestions.pdf",
"qa_extraction": "Obtained directly from the source."
}
```
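Scoring a model against the benchmark reduces to comparing predicted letters with the `answer` field. A minimal sketch on hypothetical gold answers and predictions:

```python
# Hypothetical gold answers and model predictions, as letters A-D.
gold = ["B", "A", "D", "C"]
predictions = ["B", "C", "D", "C"]

correct = sum(g == p for g, p in zip(gold, predictions))
accuracy = correct / len(gold)
print(f"Accuracy: {accuracy:.2%}")  # Accuracy: 75.00%
```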
---
## 🔗 Data Sources and Attribution
This dataset was built from publicly available resources. Full respect and credit is given to the following original sources:
- **Biblical eLearning**
Developed by Dr. Ted Hildebrandt, [Biblical eLearning](https://biblicalelearning.org/) is dedicated to providing free online Biblical resources to the global Christian community. The site hosts high-quality, Biblically grounded materials from expert teachers, aiming to preserve and share faithful teaching digitally for the glory of God and the good of others. Many of these resources, including the Bible Quizzers material used in this dataset, are freely **downloadable** in PDF format for personal study or educational use.
📖 [Download Bible Quizzers PDFs](https://biblicalelearning.org/quizlet-bible-quizzers/)
- **World English Bible (WEB)** via **[eBible.org](https://ebible.org/)**
[eBible.org](https://ebible.org) is the original home of the World English Bible and a global volunteer movement committed to making the Holy Bible freely available in the languages and formats most useful to people worldwide. Founded by Michael Paul Johnson, who also serves as senior editor of the WEB, the site hosts hundreds of translations, including the original Hebrew and Greek texts, and supports a wide range of digital formats for both reading and development. The mission of eBible.org is rooted in the Great Commission and made possible by a large network of volunteers who work to ensure quality, accessibility, and faithful distribution of Scripture.
📖 [Download the WEB Bible PDFs](https://ebible.org/pdf/eng-web/)
- **GotQuestions Ministries**
[GotQuestions.org](https://www.gotquestions.org/)
A leading online ministry offering Biblical answers to spiritually related questions, GotQuestions.org is a theologically conservative, evangelical resource rooted in Scripture. Since 2002, the site has received over 2.5 billion pageviews, offering articles, Q&A, podcasts, and tools for those seeking to understand the Word of God.
Each question entry includes the corresponding source URL and method of extraction of the data.
If you use this dataset, please ensure these sources are properly cited.
---
## 🔍 Intended Use
The BIBLE dataset is intended for:
- Evaluating **Biblical literacy** in large language models
- Testing for **factual Scriptural grounding**
- Benchmarking theological comprehension
- Identifying hallucination in religious QA settings
It is **not suitable for model training**, and it is recommended that models be evaluated "as-is" without memorization or prior exposure to the benchmark.
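As an illustrative sketch of such an evaluation (the field names and the placeholder model below are assumptions, not part of the benchmark), an exact-match scoring loop might look like:

```python
# Minimal exact-match evaluation sketch. The `qa` rows stand in for
# benchmark entries; the field names "question"/"answer" are assumptions.
qa = [
    {"question": "Who built the ark?", "answer": "Noah"},
    {"question": "Where was Jesus born?", "answer": "Bethlehem"},
]

def model(question: str) -> str:
    """Placeholder for a real LLM call."""
    return {"Who built the ark?": "Noah"}.get(question, "")

correct = sum(
    model(row["question"]).strip().lower() == row["answer"].lower()
    for row in qa
)
print(f"accuracy = {correct / len(qa):.2f}")  # accuracy = 0.50
```

Stricter setups typically normalize punctuation or use judge models, but exact match is a reasonable baseline for short factual answers.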
---
## ⚖️ License
This dataset is released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.
It contains public domain and freely licensed material, but users are responsible for proper attribution and for complying with the original source usage guidelines.
---
## 🤝 Contributing
Found an issue or want to contribute additional benchmark questions?
Pull requests and community suggestions are welcome — feel free to open an issue or submit a PR.
---
|
Ibisbill/General_English_only_SFT_Filtered_655k | Ibisbill | 2025-06-06T00:01:34Z | 0 | 0 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"size_categories:100K<n<1M",
"region:us",
"english",
"text-generation",
"instruction-following",
"sft",
"filtered"
] | [
"text-generation"
] | 2025-06-06T00:00:46Z | null | ---
language:
- zh
- en
tags:
- english
- text-generation
- instruction-following
- sft
- filtered
size_categories:
- 100K<n<1M
task_categories:
- text-generation
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
- name: category
dtype: string
- name: original_data
dtype: string
splits:
- name: train
num_examples: 268042
configs:
- config_name: default
data_files:
- split: train
path: dataset.jsonl
---
# General_English_only_SFT_Filtered_655k
## Dataset Description

A high-quality dataset of 268,042 English instruction-following examples, carefully filtered and curated.

## File Structure

- `dataset.jsonl`: main data file (JSONL format)

## Data Format

The dataset contains the following fields:

- **text**: str
- **source**: str
- **category**: str
- **original_data**: dict (serialized as a JSON string)

## Usage

### Method 1: Using the `datasets` library

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Ibisbill/General_English_only_SFT_Filtered_655k")
print(dataset)
```

### Method 2: Download the JSONL file directly

```python
from huggingface_hub import hf_hub_download
import json

# Download the file
file_path = hf_hub_download(
    repo_id="Ibisbill/General_English_only_SFT_Filtered_655k",
    filename="dataset.jsonl",
    repo_type="dataset"
)

# Read the JSONL file
data = []
with open(file_path, 'r', encoding='utf-8') as f:
    for line in f:
        data.append(json.loads(line))

print(f"Loaded {len(data)} records")
```

## Example Data
```json
{
"text": "can you go into more detail about it/?",
"source": "tulu3",
"category": "general",
"original_data": {
"id": "ai2-adapt-dev/tulu_v3.9_synthetic_finalresp_wildguardmixtrain_decontaminated_50k_24928",
"messages": [
{
"content": "can you go into more detail about it/?",
"role": "user"
},
{
"content": "",
"role": "assistant"
}
],
"source": "ai2-adapt-dev/tulu_v3.9_synthetic_finalresp_wildguardmixtrain_decontaminated_50k"
}
}
```
## Statistics

- Total samples: 268,042
- Format: JSONL
- File size: ~268 MB
|
Ibisbill/General_English_only_SFT_Filtered_25k | Ibisbill | 2025-06-05T23:59:00Z | 0 | 0 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"size_categories:10K<n<100K",
"region:us",
"english",
"text-generation",
"instruction-following",
"sft",
"filtered"
] | [
"text-generation"
] | 2025-06-05T23:52:43Z | null | ---
language:
- zh
- en
tags:
- english
- text-generation
- instruction-following
- sft
- filtered
size_categories:
- 10K<n<100K
task_categories:
- text-generation
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
- name: category
dtype: string
- name: original_data
dtype: string
splits:
- name: train
num_examples: 25000
configs:
- config_name: default
data_files:
- split: train
path: dataset.jsonl
---
# General_English_only_SFT_Filtered_25k
## Dataset Description

A high-quality dataset of 25,000 English instruction-following examples, carefully filtered and curated.

## File Structure

- `dataset.jsonl`: main data file (JSONL format)

## Data Format

The dataset contains the following fields:

- **text**: str
- **source**: str
- **category**: str
- **original_data**: dict (serialized as a JSON string)

## Usage

### Method 1: Using the `datasets` library

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Ibisbill/General_English_only_SFT_Filtered_25k")
print(dataset)
```

### Method 2: Download the JSONL file directly

```python
from huggingface_hub import hf_hub_download
import json

# Download the file
file_path = hf_hub_download(
    repo_id="Ibisbill/General_English_only_SFT_Filtered_25k",
    filename="dataset.jsonl",
    repo_type="dataset"
)

# Read the JSONL file
data = []
with open(file_path, 'r', encoding='utf-8') as f:
    for line in f:
        data.append(json.loads(line))

print(f"Loaded {len(data)} records")
```

## Example Data
```json
{
"text": "Is the premise \"Two young boys are headed toward a bicycle parked next to a brick house.\" true if \"Two boys are heading toward a bike.\"?\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\nyes\nQ: \"Two people are eating something strange, as evidenced by her laugh and his nose-holding.\" Does this mean that \"Three people are eating something strange, as evidenced by her laugh and his nose-holding.\"? OPTIONS:\n- yes\n- it is not possible to tell\n- no\nA: no\nPremise & Hypothesis & Options: A group of students looking over a balcony on a senior trip.\nSome young people peer over a short wall.\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\nIs the hypothesis true or not: yes\nPremise & hypothesis: Is the premise \"A man and small boy are playing with a wooden toy track system on the floor.\" true if \"The man and the boy are playing.\"?\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\nA: yes\nPremise & hypothesis.\nA little girl runs on the wet sand near the ocean.\n\nHer feet sink into the sand.\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\n++++++++++\ntrue or not.\nyes\nIs the premise \"A little girl in a red dress is standing on a trail in the forest with a horse in the background.\" true if \"a girl is waiting to ride her horse\"?\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\n it is not possible to tell",
"source": "tulu3",
"category": "general",
"original_data": {
"id": "ai2-adapt-dev/flan_v2_converted_26714",
"messages": [
{
"content": "Is the premise \"Two young boys are headed toward a bicycle parked next to a brick house.\" true if \"Two boys are heading toward a bike.\"?\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\nyes\nQ: \"Two people are eating something strange, as evidenced by her laugh and his nose-holding.\" Does this mean that \"Three people are eating something strange, as evidenced by her laugh and his nose-holding.\"? OPTIONS:\n- yes\n- it is not possible to tell\n- no\nA: no\nPremise & Hypothesis & Options: A group of students looking over a balcony on a senior trip.\nSome young people peer over a short wall.\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\nIs the hypothesis true or not: yes\nPremise & hypothesis: Is the premise \"A man and small boy are playing with a wooden toy track system on the floor.\" true if \"The man and the boy are playing.\"?\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\nA: yes\nPremise & hypothesis.\nA little girl runs on the wet sand near the ocean.\n\nHer feet sink into the sand.\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\n++++++++++\ntrue or not.\nyes\nIs the premise \"A little girl in a red dress is standing on a trail in the forest with a horse in the background.\" true if \"a girl is waiting to ride her horse\"?\nOPTIONS:\n- yes\n- it is not possible to tell\n- no\n",
"role": "user"
},
{
"content": "it is not possible to tell",
"role": "assistant"
}
],
"source": "ai2-adapt-dev/flan_v2_converted"
}
}
```
## Statistics

- Total samples: 25,000
- Format: JSONL
- File size: ~25 MB
|
AnnaelleMyriam/MNLP_M3_dpo_dataset | AnnaelleMyriam | 2025-06-05T23:58:36Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T23:58:23Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 140392393.41902313
num_examples: 51464
- name: validation
num_bytes: 7799274.304076387
num_examples: 2859
- name: test
num_bytes: 7802002.276900478
num_examples: 2860
download_size: 87025058
dataset_size: 155993669.99999997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
sincostangerines/stack_cubes_50 | sincostangerines | 2025-06-05T23:55:24Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-05T23:55:17Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 10,
"total_frames": 8632,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
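As a rough illustration of how this metadata can be consumed, the sketch below parses a trimmed inline copy of a few fields from the schema above (an assumption for self-containment; the real file ships as `meta/info.json` in the repo) and derives the action dimensionality and average episode length:

```python
import json

# Trimmed, inline copy of selected info.json fields (assumption: the
# full file lives at meta/info.json inside the dataset repository).
info = json.loads("""
{
  "fps": 30,
  "total_episodes": 10,
  "total_frames": 8632,
  "features": {
    "action": {"dtype": "float32", "shape": [6]}
  }
}
""")

action_dim = info["features"]["action"]["shape"][0]
avg_len_s = info["total_frames"] / info["total_episodes"] / info["fps"]
print(action_dim)            # 6
print(round(avg_len_s, 1))   # 28.8
```

The same pattern applies to the observation and camera features, whose shapes and video parameters are listed in the full schema.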
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
tonijhanel/interior_design_roboflow-train | tonijhanel | 2025-06-05T23:52:29Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T23:52:24Z | null | ---
dataset_info:
features:
- name: __index_level_0__
dtype: int64
- name: image
dtype: image
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 86163796.0838672
num_examples: 1373
download_size: 85996890
dataset_size: 86163796.0838672
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "interior_design_roboflow-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tonijhanel/interior_design_roboflow | tonijhanel | 2025-06-05T23:52:24Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T23:52:15Z | null | ---
dataset_info:
features:
- name: __index_level_0__
dtype: int64
- name: image
dtype: image
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 107751812.0
num_examples: 1717
download_size: 107573429
dataset_size: 107751812.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "interior_design_roboflow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fannymissillier/MNLP_M2_mcqa_dataset_cleaned | fannymissillier | 2025-06-05T23:48:43Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T23:48:37Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 65280795
num_examples: 117634
download_size: 36890309
dataset_size: 65280795
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DoniaGasmii/MNLP_M3_full_sft_dataset_split | DoniaGasmii | 2025-06-05T23:36:24Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T23:36:14Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 60578646.31392281
num_examples: 63676
- name: validation
num_bytes: 20193516.343038596
num_examples: 21226
- name: test
num_bytes: 20193516.343038596
num_examples: 21226
download_size: 55346171
dataset_size: 100965679.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ShadowCatLul/clef_fungi_dtd_for_zero-shot | ShadowCatLul | 2025-06-05T23:17:01Z | 0 | 0 | [
"license:apache-2.0",
"modality:image",
"region:us"
] | [] | 2025-06-05T23:14:15Z | null | ---
license: apache-2.0
---
|
joyheyueya/qwen3-32b-sft_star | joyheyueya | 2025-06-05T22:57:48Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T22:57:45Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 51151716
num_examples: 10000
download_size: 16975250
dataset_size: 51151716
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TAUR-dev/SIEXP_sft_data__skill_template__random_sort__budget_forces | TAUR-dev | 2025-06-05T22:57:46Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T22:46:34Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: model_responses
sequence: string
- name: is_model_response_correct__correctness_reasoning
sequence: string
- name: is_model_response_correct__final_answer
sequence: string
- name: is_model_response_correct__correctness_prompt
sequence: string
- name: is_model_response_correct
sequence: bool
- name: args
sequence: string
- name: skill_templated_response
dtype: string
- name: skill_templated_correctness
dtype: bool
splits:
- name: train
num_bytes: 52544572
num_examples: 3721
download_size: 17200919
dataset_size: 52544572
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Blinorot/INF-ORM-Preference-Magnitude-filtered | Blinorot | 2025-06-05T22:48:50Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T22:48:44Z | null | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: magnitude
dtype: float64
splits:
- name: train
num_bytes: 132219837
num_examples: 23293
download_size: 70763465
dataset_size: 132219837
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
akseljoonas/toolagent-traces | akseljoonas | 2025-06-05T22:43:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T22:43:12Z | null | ---
dataset_info:
features:
- name: model_id
dtype: string
- name: system_prompt
dtype: string
- name: source
dtype: string
- name: original_question
dtype: string
- name: messages
dtype: string
splits:
- name: train
num_bytes: 56983619
num_examples: 2819
download_size: 16729347
dataset_size: 56983619
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/multi-asset-synth-trades-202506052230 | ChavyvAkvar | 2025-06-05T22:38:52Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-05T22:30:41Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: scenario_id
dtype: int64
- name: asset_source_name
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown_pct
dtype: float64
- name: total_trades
dtype: int64
- name: portfolio_halted
dtype: bool
- name: portfolio_halt_reason
dtype: string
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 9753581076
num_examples: 10560
download_size: 9732553149
dataset_size: 9753581076
---
# Dataset Card for "multi-asset-synth-trades-202506052230"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
parsee-mizuhashi/ide | parsee-mizuhashi | 2025-06-05T22:34:54Z | 73 | 1 | [
"license:mit",
"region:us"
] | [] | 2025-01-25T20:23:14Z | null | ---
license: mit
---
[v29](https://huggingface.co/datasets/parsee-mizuhashi/ide/blob/51f9e372dd5af8d20b08e78daf31c23d8d2613c5/noob_v_29_checkpoint-e0_s4000.safetensors)
[24r+28](https://huggingface.co/datasets/parsee-mizuhashi/ide/blob/454fc8057079f6be7104dcfff85e28a218ecc75f/noob_v_24r2%2B28m.safetensors)
[24r2 -bad contrast](https://huggingface.co/datasets/parsee-mizuhashi/ide/commit/72454682dcd5a538c0af819d073751f9b5df5f30)
|
leeroy-jankins/Appropriations | leeroy-jankins | 2025-06-05T22:30:43Z | 0 | 0 | [
"language:en",
"license:mit",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-06-05T22:28:53Z | null | ---
license: mit
pretty_name: U.S. Appropriations Dataset
language:
- en
---
# 💵 U.S. Appropriations Dataset (1995–2025)
This dataset links enacted U.S. Public Laws with their corresponding Explanatory Statements and Appropriations Titles,
covering the major federal appropriations acts from FY1995 through FY2025.
---
## 📊 Structure
Each entry includes:
- `public_law`: Official citation of the enacted appropriations law (e.g. P.L. 117-328)
- `explanatory_statement`: House or Senate report number accompanying the law (e.g. H. Rpt. 117-328)
- `appropriation_title`: Full name of the Appropriations Act or Continuing Resolution
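Putting the three fields together, a single record looks like the following sketch, taken from the FY2023 row of the table in this card:

```python
# One illustrative record; values come from the FY2023 table row.
entry = {
    "public_law": "P.L. 117-328",
    "explanatory_statement": "H. Rpt. 117-328",
    "appropriation_title": "Consolidated Appropriations Act, 2023",
}
print(entry["appropriation_title"])  # Consolidated Appropriations Act, 2023
```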
---
## 🗂️ Enacted Appropriations
| Public Law | Explanatory Statement | Appropriation Title |
|---------------|------------------------|--------------------------------------------------------------------------------------|
| P.L. 104-134 | H. Rpt. 104-537 | Omnibus Consolidated Rescissions and Appropriations Act |
| P.L. 104-208 | H. Rpt. 104-863 | Omnibus Consolidated Appropriations Act, 1997 |
| P.L. 105-277 | H. Rpt. 105-825 | Omnibus Consolidated and Emergency Supplemental Appropriations Act |
| P.L. 105-277 | H. Rpt. 106-110 | Omnibus Consolidated and Emergency Supplemental Appropriations Act |
| P.L. 106-113 | H. Rpt. 106-479 | Consolidated Appropriations Act, 2000 |
| P.L. 106-79 | H. Rpt. 106-371 | Department of Defense Appropriations Act, 2000 |
| P.L. 106-554 | H. Rpt. 106-1033 | Consolidated Appropriations Act, 2001 |
| P.L. 106-259 | S. Rpt. 106-298 | Department of Defense Appropriations Act, 2001 |
| P.L. 107-117 | H. Rpt. 107-350 | Department of Defense and Emergency Supplemental Appropriations |
| P.L. 107-206 | H. Rpt. 107-593 | Supplemental Appropriations Act, 2002 |
| P.L. 108-7 | H. Rpt. 108-10 | Consolidated Appropriations Resolution, 2003 |
| P.L. 108-199 | H. Rpt. 108-401 | Consolidated Appropriations Act, 2004 |
| P.L. 108-11 | H. Rpt. 108-55 | Emergency Supplemental Appropriations Act for Defense |
| P.L. 108-447 | H. Rpt. 108-792 | Consolidated Appropriations Act, 2005 |
| P.L. 109-13 | H. Rpt. 109-72 | Emergency Supplemental Appropriations Act for Defense, Global War on Terror, Tsunami Relief |
| P.L. 109-108 | H. Rpt. 109-272 | Science, State, Justice, Commerce Appropriations Act |
| P.L. 109-148 | S. Rpt. 109-141 | Department of Defense Appropriations Act, 2006 |
| P.L. 110-5 | H. Rpt. 110-5 | Revised Continuing Appropriations Resolution, 2007 |
| P.L. 110-161 | H. Rpt. 110-497 | Consolidated Appropriations Act, 2008 |
| P.L. 110-252 | H. Rpt. 110-656 | Supplemental Appropriations Act, 2008 |
| P.L. 111-8 | H. Rpt. 111-8 | Omnibus Appropriations Act, 2009 |
| P.L. 111-32 | H. Rpt. 111-105 | Supplemental Appropriations Act, 2009 |
| P.L. 111-117 | H. Rpt. 111-366 | Consolidated Appropriations Act, 2010 |
| P.L. 112-10 | H. Rpt. 112-331 | Department of Defense and Full-Year Continuing Appropriations Act, 2011 |
| P.L. 112-74 | H. Rpt. 112-331 | Consolidated Appropriations Act, 2012 |
| P.L. 113-6 | H. Rpt. 113-6 | Consolidated and Further Continuing Appropriations Act, 2013 |
| P.L. 113-76 | H. Rpt. 113-76 | Consolidated Appropriations Act, 2014 |
| P.L. 113-235 | H. Rpt. 113-235 | Consolidated and Further Continuing Appropriations Act, 2015 |
| P.L. 114-113 | H. Rpt. 114-113 | Consolidated Appropriations Act, 2016 |
| P.L. 115-31 | H. Rpt. 115-31 | Consolidated Appropriations Act, 2017 |
| P.L. 115-141 | H. Rpt. 115-141 | Consolidated Appropriations Act, 2018 |
| P.L. 116-6 | H. Rpt. 116-6 | Consolidated Appropriations Act, 2019 |
| P.L. 116-93 | H. Rpt. 116-93 | Further Consolidated Appropriations Act, 2020 |
| P.L. 116-260 | H. Rpt. 116-260 | Consolidated Appropriations Act, 2021 |
| P.L. 117-103 | H. Rpt. 117-103 | Consolidated Appropriations Act, 2022 |
| P.L. 117-328 | H. Rpt. 117-328 | Consolidated Appropriations Act, 2023 |
| P.L. 118-42 | H. Rpt. 118-42 | Continuing Appropriations Act, 2024 |
| P.L. 118-83  | H. Rpt. 118-83         | Continuing Appropriations Act, 2025                                                  |
## 🔍 Use Cases
- 🧠 Train NLP models for legislative reference extraction
- 🧾 Link Appropriations Acts to their respective explanatory documents
- 🗃️ Construct longitudinal appropriations histories for federal program analysis
- 📜 Support research on continuing resolutions and omnibus legislation
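For the reference-extraction use case, a minimal sketch might look like the following (the regexes are illustrative assumptions, not an official citation grammar):

```python
import re

# Extract Public Law and report citations from free text.
PL_RE = re.compile(r"P\.L\.\s*(\d+)-(\d+)")
RPT_RE = re.compile(r"[HS]\.\s*Rpt\.\s*\d+-\d+")

text = "Funding was enacted in P.L. 117-328, explained in H. Rpt. 117-328."
print(PL_RE.findall(text))   # [('117', '328')]
print(RPT_RE.findall(text))  # ['H. Rpt. 117-328']
```

Matches can then be joined against the `public_law` and `explanatory_statement` fields of this dataset to link mentions back to enacted appropriations.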
---
## 📚 Related Concepts
- Omnibus and Consolidated Appropriations
- Explanatory Statements (House/Senate Reports)
- Continuing Resolutions
- Title-by-Title Budget Authority
---
## 🧠 Example Usage (Python)
```python
from datasets import load_dataset
ds = load_dataset("leeroy-jankins/Appropriations", split="train")

for item in ds:
    print(f"{item['public_law']} — {item['appropriation_title']}")
``` |
cyh002/sealion-prompt-engineering-inference-results | cyh002 | 2025-06-05T22:15:00Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T18:47:08Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: language
dtype: string
- name: medium
dtype: string
- name: topic
dtype: string
- name: domain
dtype: string
- name: prompt
dtype: string
- name: predicted_label
dtype: string
splits:
- name: inference_dataset
num_bytes: 724716
num_examples: 500
download_size: 160046
dataset_size: 724716
configs:
- config_name: default
data_files:
- split: inference_dataset
path: data/inference_dataset-*
---
|
InAbsentia/trossen_towel_fold_v15 | InAbsentia | 2025-06-05T22:10:31Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-05T22:06:46Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_stationary",
"total_episodes": 16,
"total_frames": 85813,
"total_tasks": 1,
"total_videos": 64,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:16"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
14
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
14
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_low": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
jvelja/results_3b_backdoored | jvelja | 2025-06-05T22:05:23Z | 103 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T09:47:43Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: problem
dtype: string
- name: triggering_condition
dtype: string
- name: backdooring_reasoning
dtype: string
- name: injected_solution
dtype: string
- name: honest_solution
dtype: string
splits:
- name: train
num_bytes: 8372626
num_examples: 2505
download_size: 3886648
dataset_size: 8372626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
coralieb7/mcqa_custom_sft_50k_sft_focus | coralieb7 | 2025-06-05T21:59:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T21:59:30Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: system prompt
dtype: string
splits:
- name: train
num_bytes: 79747251
num_examples: 50000
download_size: 41233830
dataset_size: 79747251
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jesbu1/test | jesbu1 | 2025-06-05T21:56:45Z | 103 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"widowx",
"bridge-v2"
] | [
"robotics"
] | 2025-06-05T06:10:56Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- widowx
- bridge-v2
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "widowx",
"total_episodes": 5,
"total_frames": 197,
"total_tasks": 5,
"total_videos": 60,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 5,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
},
"camera_present": {
"dtype": "bool",
"shape": [
4
],
"names": [
"image_0",
"image_1",
"image_2",
"image_3"
]
},
"observation.images.image_0": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.path.image_0": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.masked_path.image_0": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.image_1": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.path.image_1": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.masked_path.image_1": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.image_2": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.path.image_2": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.masked_path.image_2": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.image_3": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.path.image_3": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"observation.masked_path.image_3": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 256,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 5,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
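The `data_path` and `video_path` templates in the `info.json` above are resolved per episode; a minimal sketch (the helper name is ours, not part of LeRobot's API):

```python
# Resolve LeRobot's data_path template for a given episode index.
# The chunk number is the episode index divided by chunks_size (1000 above).
info = {
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "chunks_size": 1000,
}

def episode_parquet_path(episode_index: int) -> str:
    chunk = episode_index // info["chunks_size"]
    return info["data_path"].format(episode_chunk=chunk, episode_index=episode_index)

print(episode_parquet_path(0))     # data/chunk-000/episode_000000.parquet
print(episode_parquet_path(1234))  # data/chunk-001/episode_001234.parquet
```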
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ai2-adapt-dev/tool-use-more-refusals | ai2-adapt-dev | 2025-06-05T21:53:31Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T21:53:21Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: function_calls
dtype: string
- name: functions
dtype: string
- name: role
dtype: string
- name: source
dtype: string
- name: n_turn
dtype: string
- name: n_step
dtype: string
- name: exec_type
dtype: string
- name: is_refusal
dtype: bool
splits:
- name: train
num_bytes: 186716835
num_examples: 79942
download_size: 37004421
dataset_size: 186716835
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ai2-adapt-dev/tool-use-more-multistep | ai2-adapt-dev | 2025-06-05T21:52:08Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T21:52:01Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: function_calls
dtype: string
- name: functions
dtype: string
- name: role
dtype: string
- name: source
dtype: string
- name: n_turn
dtype: string
- name: n_step
dtype: string
- name: exec_type
dtype: string
- name: is_refusal
dtype: bool
splits:
- name: train
num_bytes: 49077920
num_examples: 19978
download_size: 15955320
dataset_size: 49077920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cyh002/sealion-inference-instruct-results | cyh002 | 2025-06-05T21:42:08Z | 44 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T14:47:57Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: language
dtype: string
- name: medium
dtype: string
- name: topic
dtype: string
- name: domain
dtype: string
- name: prompt
dtype: string
- name: predicted_label
dtype: string
splits:
- name: inference_dataset
num_bytes: 460233
num_examples: 500
download_size: 101350
dataset_size: 460233
configs:
- config_name: default
data_files:
- split: inference_dataset
path: data/inference_dataset-*
---
|
recogna-nlp/fakerecogna2-extrativo | recogna-nlp | 2025-06-05T21:35:08Z | 0 | 0 | [
"task_categories:text-classification",
"language:pt",
"license:mit",
"size_categories:10K<n<100K",
"region:us",
"FakeRecogna ",
"Fake News",
"Portuguese",
"Dataset"
] | [
"text-classification"
] | 2025-06-05T21:33:24Z | null | ---
task_categories:
- text-classification
language:
- pt
tags:
- 'FakeRecogna '
- Fake News
- Portuguese
- Dataset
license: mit
size_categories:
- 10K<n<100K
---
# FakeRecogna 2.0 Extractive
FakeRecogna 2.0 extends the FakeRecogna dataset for fake news detection. It includes real and fake news texts collected from online media and ten fact-checking sources in Brazil. An important aspect is that the real and fake news samples are unrelated to each other, which avoids introducing intrinsic bias into the data.
## The Dataset
Fake news was collected from licensed and verified Brazilian fact-checking websites registered with the [Duke Reporters' Lab](https://reporterslab.org/fact-checking/).
This registry was designed to support the fight against the worldwide spread of fake news. For real news, we selected well-known media platforms in Brazil. Since real texts are much longer than most fake content, the genuine news was preprocessed with text summarization. At this stage, there is no further processing such as stop-word removal or lemmatization. After trimming and standardizing the real news, we produced textual representations based on Bag of Words (BoW), Term Frequency-Inverse Document Frequency (TF-IDF), FastText, PTT5, and BERTimbau to form the input feature vectors for the ML models. The figure below illustrates the steps of the proposed method.
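The TF-IDF step can be sketched as follows — a minimal pure-Python version with whitespace tokenization; the paper's actual feature extraction pipeline may differ:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy TF-IDF: term frequency times log inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({tok for toks in tokenized for tok in toks})
    doc_freq = Counter(tok for toks in tokenized for tok in set(toks))
    n_docs = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([
            (tf[tok] / len(toks)) * math.log(n_docs / doc_freq[tok])
            for tok in vocab
        ])
    return vocab, vectors

vocab, vecs = tfidf_vectors(["fake news alert", "real news today"])
# "news" occurs in every document, so its idf (and hence its weight) is 0
```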
<!--- PROJECT LOGO -->
<p align="center">
<img src="https://huggingface.co/datasets/recogna-nlp/FakeRecogna2/resolve/main/pipeline_proposed_method.jpg" alt="Pipeline FakeRecogna 2.0" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
Fake news sources were selected from nine fact-checking agencies in Brazil. This process provides a broad range of categories and many fake news samples to promote data diversity. Table 1 presents the existing Brazilian fact-checking initiatives and the number of fake news samples collected from each news source. When the search process was concluded, we ended up with 26,569 fake news samples, which, in turn, were further processed to detect and remove possible duplicate samples, thus leading to a final set of 26,400 fake news articles.
| Fact-Check Agency | Web address | # News |
| ------------------ | ------------------------------------ | ------ |
| AFP Checamos | https://checamos.afp.com/afp-brasil | 1,587 |
| Agência Lupa | https://piaui.folha.uol.com.br/lupa | 3,147 |
| Aos Fatos | https://aosfatos.org | 2,720 |
| Boatos.org | https://boatos.org | 8,654 |
| Estadão Verifica | https://politica.estadao.com.br/blogs/estadao-verifica | 1,405 |
| E-farsas | https://www.e-farsas.com | 3,330 |
| Fato ou Fake | https://oglobo.globo.com/fato-ou-fake| 2,270 |
| Projeto Comprova | https://checamos.afp.com/afp-brasil | 887 |
| UOL Confere | https://noticias.uol.com.br/confere | 2,579 |
| Total | -------------------------------------| 26,569 |
## More information
The FakeRecogna 2 dataset is a single XLSX file that contains 8 columns for the metadata, and each row represents a sample (real or fake news), as described in Table 2.
| Columns | Description |
| ------------------------ | ------------------------------------------ |
| Title | Title of article |
| Sub-title (if available) | Brief description of news |
| News | Information about the article |
| Category | News grouped according to their subject |
| Author | Publication author |
| Date | Publication date |
| URL | Article web address |
| Label | 0 for real news and 1 for fake news |
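Loading and splitting the sheet can be sketched with pandas; the inline DataFrame below stands in for the real XLSX, which you would load with `pd.read_excel`:

```python
import pandas as pd

# Tiny stand-in for the XLSX schema above (in practice:
# df = pd.read_excel("fakerecogna2.xlsx"))
df = pd.DataFrame({
    "Title": ["A", "B"],
    "Sub-title": ["", ""],
    "News": ["real text", "fake text"],
    "Category": ["politics", "health"],
    "Author": ["x", "y"],
    "Date": ["2024-01-01", "2024-01-02"],
    "URL": ["http://a", "http://b"],
    "Label": [0, 1],  # 0 = real news, 1 = fake news
})
real, fake = df[df["Label"] == 0], df[df["Label"] == 1]
```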
### FakeRecogna v2 - Abstractive
The abstractive summarization version of FakeRecogna 2 can be found [here](https://huggingface.co/datasets/recogna-nlp/fakerecogna2-abstrativo).
# Citation
```bibtex
@inproceedings{garcia-etal-2024-text,
    title = "Text Summarization and Temporal Learning Models Applied to {P}ortuguese Fake News Detection in a Novel {B}razilian Corpus Dataset",
    author = "Garcia, Gabriel Lino and
      Paiola, Pedro Henrique and
      Jodas, Danilo Samuel and
      Sugi, Luis Afonso and
      Papa, Jo{\~a}o Paulo",
    editor = "Gamallo, Pablo and
      Claro, Daniela and
      Teixeira, Ant{\'o}nio and
      Real, Livy and
      Garcia, Marcos and
      Oliveira, Hugo Gon{\c{c}}alo and
      Amaro, Raquel",
    booktitle = "Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 1",
    month = mar,
    year = "2024",
    address = "Santiago de Compostela, Galicia/Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.propor-1.9/",
    pages = "86--96"
}
``` |
Lithium73fr/TEST7split1 | Lithium73fr | 2025-06-05T21:29:38Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-05T21:29:27Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# TEST7split1
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
Haribot099/so101_d61 | Haribot099 | 2025-06-05T21:20:39Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101"
] | [
"robotics"
] | 2025-06-05T19:13:19Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so101",
"total_episodes": 60,
"total_frames": 42395,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:60"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": null,
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
}
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
LocalResearchGroup/split-avelina-python-edu | LocalResearchGroup | 2025-06-05T21:20:16Z | 42 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-12T05:34:36Z | null | ---
dataset_info:
- config_name: 100k
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 158215278.81484368
num_examples: 90000
- name: test
num_bytes: 17579475.42387152
num_examples: 10000
download_size: 82802877
dataset_size: 175794754.2387152
- config_name: 10k
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 15821527.881484367
num_examples: 9000
- name: test
num_bytes: 1757947.542387152
num_examples: 1000
download_size: 8519514
dataset_size: 17579475.423871517
- config_name: 1M
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1582152788.1484368
num_examples: 900000
- name: test
num_bytes: 175794754.2387152
num_examples: 100000
download_size: 826347573
dataset_size: 1757947542.387152
- config_name: 1k
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1582152.7881484367
num_examples: 900
- name: test
num_bytes: 175794.7542387152
num_examples: 100
download_size: 830939
dataset_size: 1757947.5423871519
- config_name: full
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12148475802.315737
num_examples: 6910602
- name: test
num_bytes: 1349831230.6842628
num_examples: 767845
download_size: 6343241345
dataset_size: 13498307033.0
configs:
- config_name: 100k
data_files:
- split: train
path: 100k/train-*
- split: test
path: 100k/test-*
- config_name: 10k
data_files:
- split: train
path: 10k/train-*
- split: test
path: 10k/test-*
- config_name: 1M
data_files:
- split: train
path: 1M/train-*
- split: test
path: 1M/test-*
- config_name: 1k
data_files:
- split: train
path: 1k/train-*
- split: test
path: 1k/test-*
- config_name: full
data_files:
- split: train
path: full/train-*
- split: test
path: full/test-*
---
|
royrin/KLOM-models | royrin | 2025-06-05T21:13:12Z | 869 | 2 | [
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2410.23232",
"region:us"
] | [] | 2025-05-04T19:36:56Z | null | ---
license: mit
size_categories:
- 10K<n<100K
---
Dataset for the evaluation of data-unlearning techniques using KLOM (KL-divergence of Margins).
# How KLOM works:
KLOM works by:
1. training N models on the full dataset (original models)
2. training N models from scratch with the forget set F held out (oracles)
3. unlearning the forget set F from the original models
4. comparing the outputs of the unlearned models with those of the oracle models on different points
(specifically, computing the KL divergence between the distribution of _margins_ of the oracle models and the distribution of _margins_ of the unlearned models)
KLOM was originally proposed in Attribute-to-Delete: Machine Unlearning via Datamodel Matching (https://arxiv.org/abs/2410.23232) and is described in detail in Appendix E.1 of that paper.
**Outline of how KLOM works:**

**Algorithm Description:**

# Structure of Data
The overall structure is as follows:
```
full_models
├── CIFAR10
├── CIFAR10_augmented
└── LIVING17
oracles
└── CIFAR10
├── forget_set_1
├── forget_set_2
├── forget_set_3
├── forget_set_4
├── forget_set_5
├── forget_set_6
├── forget_set_7
├── forget_set_8
├── forget_set_9
└── forget_set_10
```
Each folder contains:
* train_logits_##.pt - logits at the end of training for model `##` on train points
* val_logits_##.pt - logits at the end of training for model `##` on validation points
* `##__val_margins_#.npy` - margins of model `##` at epoch `#` (this is derived from logits)
* `sd_##____epoch_#.pt` - model `##` checkpoint at epoch `#`
# How to download
Create a script `download_folder.sh`:
```
#!/bin/bash
REPO_URL=https://huggingface.co/datasets/royrin/KLOM-models
TARGET_DIR=KLOM-models # name it what you wish
FOLDER=$1 # e.g., "oracles/CIFAR10/forget_set_3"
mkdir -p $TARGET_DIR
git clone --filter=blob:none --no-checkout $REPO_URL $TARGET_DIR
cd $TARGET_DIR
git sparse-checkout init --cone
git sparse-checkout set $FOLDER
git checkout main
```
Example of how to run the script:
```
bash download_folder.sh oracles/CIFAR10/forget_set_3
```
## How the forget sets were generated
We have 10 different forget sets: sets 1-3 are random forget sets of sizes 10, 100, and 500, respectively; sets 4-9 correspond to semantically coherent subpopulations of examples (e.g., all dogs facing a similar direction) identified using clustering methods.
Specifically, we take an $n \times n$ datamodel matrix constructed by concatenating ``train x train`` datamodels ($n=50,000$). Next, we compute the top principal components (PCs) of the influence matrix and construct the following forget sets:
* Forget set 1: 10 random samples
* Forget set 2: 100 random samples
* Forget set 3: 500 random samples
* Forget set 4: 10 samples with the highest projection onto the 1st PC
* Forget set 5: 100 samples with the highest projection onto the 1st PC
* Forget set 6: 250 samples with the highest projection onto the 1st PC and 250 with lowest projection
* Forget set 7: 10 samples with the highest projection onto the 2nd PC
* Forget set 8: 100 samples with the highest projection onto the 2nd PC
* Forget set 9: 250 samples with the highest projection onto the 2nd PC and 250 with the lowest projection.
* Forget set 10: 100 samples closest in CLIP image space to training example 6 (a cassowary)
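The PC-projection selection used for forget sets 4-9 can be sketched as a toy version (note that SVD sign conventions may flip which end counts as "highest"):

```python
import numpy as np

def pc_extremes(influence, k):
    """Indices of the k highest- and k lowest-projecting samples on the 1st PC."""
    X = influence - influence.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ vt[0]                 # projection of each sample onto the 1st PC
    order = np.argsort(proj)
    return order[-k:], order[:k]

# Samples 0..9 vary along a single direction, so the extremes are 0,1 and 8,9.
demo = np.outer(np.arange(10.0), np.ones(4))
top, bottom = pc_extremes(demo, 2)
```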
**ImageNet Living-17.** We use three different forget sets:
* Forget set 1 is a random set of size 500;
* Forget sets 2 and 3 correspond to 200 examples from a certain subpopulation (corresponding to a single original ImageNet class) within the Living-17 superclass.
|
phospho-ai/dataset_for_testing | phospho-ai | 2025-06-05T21:06:10Z | 968 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-02-07T16:59:08Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# dataset_for_testing
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
DoniaGasmii/final_project_milestone1_preference_pairs | DoniaGasmii | 2025-06-05T21:03:11Z | 55 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:47:38Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: criteria
sequence: string
splits:
- name: train
num_bytes: 51338018
num_examples: 13555
- name: validation
num_bytes: 17097349
num_examples: 4518
- name: test
num_bytes: 17018233
num_examples: 4520
download_size: 41707898
dataset_size: 85453600
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
david-thomas/yourbench | david-thomas | 2025-06-05T20:53:52Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T20:52:11Z | null | ---
dataset_info:
- config_name: chunked
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
- name: chunks
list:
- name: chunk_id
dtype: string
- name: chunk_text
dtype: string
- name: multihop_chunks
list:
- name: chunk_ids
sequence: string
- name: chunks_text
sequence: string
- name: chunk_info_metrics
list:
- name: avg_token_length
dtype: float64
- name: bigram_diversity
dtype: float64
- name: flesch_reading_ease
dtype: float64
- name: gunning_fog
dtype: float64
- name: perplexity
dtype: float64
- name: token_count
dtype: float64
- name: unique_token_ratio
dtype: float64
- name: chunking_model
dtype: string
splits:
- name: train
num_bytes: 231884
num_examples: 1
download_size: 169477
dataset_size: 231884
- config_name: ingested
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
splits:
- name: train
num_bytes: 83883
num_examples: 1
download_size: 48205
dataset_size: 83883
- config_name: lighteval
features:
- name: question
dtype: string
- name: additional_instructions
dtype: string
- name: ground_truth_answer
dtype: string
- name: question_category
dtype: string
- name: kind
dtype: string
- name: estimated_difficulty
dtype: int64
- name: citations
sequence: string
- name: document_id
dtype: string
- name: chunk_ids
sequence: string
- name: question_generating_model
dtype: string
- name: chunks
sequence: string
- name: document
dtype: string
- name: document_summary
dtype: string
- name: answer_citation_score
dtype: float64
- name: chunk_citation_score
dtype: float64
- name: citation_score
dtype: float64
splits:
- name: train
num_bytes: 3403394
num_examples: 38
download_size: 98168
dataset_size: 3403394
- config_name: multi_hop_questions
features:
- name: document_id
dtype: string
- name: source_chunk_ids
sequence: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: citations
sequence: string
- name: raw_response
dtype: string
splits:
- name: train
num_bytes: 110280
num_examples: 10
download_size: 25536
dataset_size: 110280
- config_name: single_shot_questions
features:
- name: chunk_id
dtype: string
- name: document_id
dtype: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: raw_response
dtype: string
- name: citations
sequence: string
splits:
- name: train
num_bytes: 229843
num_examples: 28
download_size: 36808
dataset_size: 229843
- config_name: summarized
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
splits:
- name: train
num_bytes: 90206
num_examples: 1
download_size: 73320
dataset_size: 90206
configs:
- config_name: chunked
data_files:
- split: train
path: chunked/train-*
- config_name: ingested
data_files:
- split: train
path: ingested/train-*
- config_name: lighteval
data_files:
- split: train
path: lighteval/train-*
- config_name: multi_hop_questions
data_files:
- split: train
path: multi_hop_questions/train-*
- config_name: single_shot_questions
data_files:
- split: train
path: single_shot_questions/train-*
- config_name: summarized
data_files:
- split: train
path: summarized/train-*
---
|
TAUR-dev/SIE_EVAL__SIEXP_concat_all_lm2d__sft__samples | TAUR-dev | 2025-06-05T20:48:55Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T20:48:52Z | null | ---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
dtype: string
- name: target
dtype: string
- name: arguments
dtype: string
- name: resps
dtype: string
- name: filtered_resps
dtype: string
- name: doc_hash
dtype: string
- name: prompt_hash
dtype: string
- name: target_hash
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
dtype: string
- name: source_file
dtype: string
- name: generation
dtype: string
- name: info
dtype: string
- name: evaluation_api_cost
dtype: string
splits:
- name: train
num_bytes: 322517218
num_examples: 3656
download_size: 42576287
dataset_size: 322517218
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/multi-asset-synth-trades-202506052030 | ChavyvAkvar | 2025-06-05T20:38:51Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T20:30:28Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: scenario_id
dtype: int64
- name: asset_source_name
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown_pct
dtype: float64
- name: total_trades
dtype: int64
- name: portfolio_halted
dtype: bool
- name: portfolio_halt_reason
dtype: string
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 9753584443
num_examples: 10560
download_size: 9731003151
dataset_size: 9753584443
---
# Dataset Card for "multi-asset-synth-trades-202506052030"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TAUR-dev/SIE_EVAL__SIEXP_concat_until_correct_and_filter_lm2d__sft__results | TAUR-dev | 2025-06-05T20:35:07Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T20:35:06Z | null | ---
dataset_info:
features:
- name: task
dtype: string
- name: alias
dtype: string
- name: evaluation_api_cost,none
dtype: float64
- name: evaluation_api_cost_stderr,none
dtype: string
- name: exact_match,none
dtype: float64
- name: exact_match_stderr,none
dtype: string
- name: extracted_answers,none
dtype: int64
- name: extracted_answers_stderr,none
dtype: string
splits:
- name: train
num_bytes: 1183
num_examples: 16
download_size: 4296
dataset_size: 1183
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TAUR-dev/SIE_EVAL__SIEXP_concat_until_correct_lm2d__sft__samples | TAUR-dev | 2025-06-05T20:34:40Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T20:34:37Z | null | ---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
dtype: string
- name: target
dtype: string
- name: arguments
dtype: string
- name: resps
dtype: string
- name: filtered_resps
dtype: string
- name: doc_hash
dtype: string
- name: prompt_hash
dtype: string
- name: target_hash
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
dtype: string
- name: source_file
dtype: string
- name: generation
dtype: string
- name: info
dtype: string
- name: evaluation_api_cost
dtype: string
splits:
- name: train
num_bytes: 266214405
num_examples: 3656
download_size: 39886313
dataset_size: 266214405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
psg777/gluepickup106 | psg777 | 2025-06-05T20:32:11Z | 90 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-04T17:45:36Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.2",
"robot_type": "so101",
"total_episodes": 50,
"total_frames": 35078,
"total_tasks": 1,
"total_videos": 150,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.base": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.gripper": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.bird": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
clyyuanzi/so101_test | clyyuanzi | 2025-06-05T20:22:49Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-05T20:22:41Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 2,
"total_frames": 1788,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
NaykinYT/reward-bench-allenai_2 | NaykinYT | 2025-06-05T20:16:44Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T20:15:48Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
splits:
- name: test
num_bytes: 6457965
num_examples: 1865
download_size: 3554154
dataset_size: 6457965
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
rosbotmay/mnlp_M3_big_corpus_no_filter | rosbotmay | 2025-06-05T20:15:07Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T18:53:03Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1224630233
num_examples: 333897
download_size: 687587217
dataset_size: 1224630233
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/multi-asset-synth-trades-202506052003 | ChavyvAkvar | 2025-06-05T20:11:46Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-05T20:03:24Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: scenario_id
dtype: int64
- name: asset_source_name
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown_pct
dtype: float64
- name: total_trades
dtype: int64
- name: portfolio_halted
dtype: bool
- name: portfolio_halt_reason
dtype: string
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 9753581753
num_examples: 10560
download_size: 9731685081
dataset_size: 9753581753
---
# Dataset Card for "multi-asset-synth-trades-202506052003"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andresnowak/MNLP_MCQA_dataset | andresnowak | 2025-06-05T20:06:10Z | 404 | 0 | [
"task_categories:question-answering",
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2025-05-27T18:51:52Z | null | ---
dataset_info:
- config_name: ScienceQA
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: 'null'
splits:
- name: train
num_bytes: 640566
num_examples: 3018
- name: validation
num_bytes: 220715
num_examples: 1070
- name: test
num_bytes: 215890
num_examples: 1041
download_size: 942180
dataset_size: 1077171
- config_name: ai2_arc_challenge
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: 'null'
splits:
- name: train
num_bytes: 364307
num_examples: 1119
- name: validation
num_bytes: 100557
num_examples: 299
- name: test
num_bytes: 390752
num_examples: 1172
download_size: 1340985
dataset_size: 855616
- config_name: ai2_arc_easy
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: 'null'
splits:
- name: train
num_bytes: 637018
num_examples: 2251
- name: validation
num_bytes: 161949
num_examples: 570
- name: test
num_bytes: 676537
num_examples: 2376
download_size: 2275662
dataset_size: 1475504
- config_name: all
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 44223440.360226266
num_examples: 103176
- name: validation
num_bytes: 4093645
num_examples: 11065
- name: test
num_bytes: 3015842
num_examples: 9242
download_size: 109409622
dataset_size: 51332927.360226266
- config_name: math_qa
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 14107758
num_examples: 29837
- name: validation
num_bytes: 2112057
num_examples: 4475
download_size: 25514319
dataset_size: 16219815
- config_name: medmcqa
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: 'null'
splits:
- name: train
num_bytes: 5152794.3361073155
num_examples: 24000
- name: validation
num_bytes: 654419
num_examples: 2816
download_size: 31707729
dataset_size: 5807213.3361073155
- config_name: mmlu
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: 'null'
splits:
- name: validation
num_bytes: 103367
num_examples: 335
- name: test
num_bytes: 976533
num_examples: 3153
download_size: 2261684
dataset_size: 1079900
- config_name: mmlu-auxiliary-train-auto-labelled
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: 'null'
splits:
- name: train
num_bytes: 16111661
num_examples: 13168
download_size: 5234820
dataset_size: 16111661
- config_name: mmlu_auxiliary_train_stem_10_choices
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: 'null'
splits:
- name: train
num_bytes: 19071220
num_examples: 13147
download_size: 7003073
dataset_size: 19071220
- config_name: openbookqa
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 1214630
num_examples: 4957
- name: validation
num_bytes: 128573
num_examples: 500
- name: test
num_bytes: 123375
num_examples: 500
download_size: 2288031
dataset_size: 1466578
- config_name: sciq
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 6990554
num_examples: 11679
- name: validation
num_bytes: 591010
num_examples: 1000
- name: test
num_bytes: 600817
num_examples: 1000
download_size: 13881856
dataset_size: 8182381
configs:
- config_name: ScienceQA
data_files:
- split: train
path: ScienceQA/train-*
- split: validation
path: ScienceQA/validation-*
- split: test
path: ScienceQA/test-*
- config_name: ai2_arc_challenge
data_files:
- split: train
path: ai2_arc_challenge/train-*
- split: validation
path: ai2_arc_challenge/validation-*
- split: test
path: ai2_arc_challenge/test-*
- config_name: ai2_arc_easy
data_files:
- split: train
path: ai2_arc_easy/train-*
- split: validation
path: ai2_arc_easy/validation-*
- split: test
path: ai2_arc_easy/test-*
- config_name: all
data_files:
- split: train
path: all/train-*
- split: validation
path: all/validation-*
- split: test
path: all/test-*
- config_name: math_qa
data_files:
- split: train
path: math_qa/train-*
- split: validation
path: math_qa/validation-*
- config_name: medmcqa
data_files:
- split: train
path: medmcqa/train-*
- split: validation
path: medmcqa/validation-*
- config_name: mmlu
data_files:
- split: validation
path: mmlu/validation-*
- split: test
path: mmlu/test-*
- config_name: mmlu-auxiliary-train-auto-labelled
data_files:
- split: train
path: mmlu-auxiliary-train-auto-labelled/train-*
- config_name: mmlu_auxiliary_train_stem_10_choices
data_files:
- split: train
path: mmlu_auxiliary_train_stem_10_choices/train-*
- config_name: openbookqa
data_files:
- split: train
path: openbookqa/train-*
- split: validation
path: openbookqa/validation-*
- split: test
path: openbookqa/test-*
- config_name: sciq
data_files:
- split: train
path: sciq/train-*
- split: validation
path: sciq/validation-*
- split: test
path: sciq/test-*
task_categories:
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---
This MCQA dataset (single-answer questions only) contains a mixture of the train, validation, and test splits from the following datasets (**test and validation are used only for evaluation, never for training**):
- [mmlu auxiliary train](https://huggingface.co/datasets/kz919/mmlu-auxiliary-train-auto-labelled) Only the stem subset is used
- [mmlu](https://huggingface.co/datasets/cais/mmlu) Only the stem subset is used
- [mmlu 10 choices auxiliary train stem](https://huggingface.co/datasets/andresnowak/mmlu-auxiliary-train-10-choices)
- [ai2_arc](https://huggingface.co/datasets/allenai/ai2_arc)
- [ScienceQA](https://huggingface.co/datasets/derek-thomas/ScienceQA)
- [math_qa](https://huggingface.co/datasets/allenai/math_qa)
- [openbook_qa](https://huggingface.co/datasets/allenai/openbookqa)
- [sciq](https://huggingface.co/datasets/allenai/sciq)
- [medmcqa](https://huggingface.co/datasets/openlifescienceai/medmcqa) a random subset of 26,000 examples (seed 42) |
aranemini/fleurs-kmr | aranemini | 2025-06-05T20:04:40Z | 0 | 0 | [
"license:cc-by-nc-4.0",
"region:us"
] | [] | 2025-06-05T20:04:40Z | null | ---
license: cc-by-nc-4.0
---
|
ChavyvAkvar/multi-asset-synth-trades-202506051936 | ChavyvAkvar | 2025-06-05T19:45:00Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T19:36:12Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: scenario_id
dtype: int64
- name: asset_source_name
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown_pct
dtype: float64
- name: total_trades
dtype: int64
- name: portfolio_halted
dtype: bool
- name: portfolio_halt_reason
dtype: string
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 9753582755
num_examples: 10560
download_size: 9731891804
dataset_size: 9753582755
---
# Dataset Card for "multi-asset-synth-trades-202506051936"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
VinceEPFL/mmlu_mathphys_only_subset | VinceEPFL | 2025-06-05T19:35:44Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T19:35:40Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 690954.5027061672
num_examples: 1401
download_size: 182694
dataset_size: 690954.5027061672
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
CohenQu/HintGen-withSol.01.01 | CohenQu | 2025-06-05T19:32:40Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T19:32:30Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: suffix
dtype: string
splits:
- name: train
num_bytes: 533126587
num_examples: 24537
- name: test
num_bytes: 43405923
num_examples: 2000
download_size: 249822728
dataset_size: 576532510
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
svjack/genshin_impact_mavuika_audio_sample | svjack | 2025-06-05T19:28:23Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-06-05T19:23:40Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.wav"
- "metadata.csv"
--- |
rweics5cs7/exo3-original-PlotQA-text-deg | rweics5cs7 | 2025-06-05T19:09:52Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T19:09:48Z | null | ---
dataset_info:
config_name: corpus
features:
- name: corpus-id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1185442
num_examples: 9593
download_size: 617437
dataset_size: 1185442
configs:
- config_name: corpus
data_files:
- split: train
path: corpus/train-*
---
|
MING-ZCH/MetaphorQA | MING-ZCH | 2025-06-05T19:04:55Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T19:00:45Z | null | ---
dataset_info:
features:
- name: images
sequence: image
- name: problem
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 79096067.0
num_examples: 984
- name: test
num_bytes: 42877510.0
num_examples: 492
download_size: 13386835
dataset_size: 121973577.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# MetaphorQA
True/False questions (TFQ) about image implications.
- train: 984
- test: 492
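Since each item is a binary True/False judgment, accuracy is the natural metric; a minimal sketch (field names follow the `answer` column in the schema above, and the example records and predictions are fabricated for illustration):

```python
# Minimal accuracy computation for true/false (TFQ) items.
# The "answer" field name follows the card's schema; data is fabricated.
def tfq_accuracy(examples, predictions):
    correct = sum(1 for ex, pred in zip(examples, predictions)
                  if ex["answer"] == pred)
    return correct / len(examples)

examples = [{"answer": "True"}, {"answer": "False"}, {"answer": "True"}]
predictions = ["True", "True", "True"]
print(tfq_accuracy(examples, predictions))  # 2 of 3 correct
```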
|
koreankiwi99/mnlp_stem_reasoning | koreankiwi99 | 2025-06-05T19:01:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T19:01:32Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 2089942
num_examples: 14957
download_size: 1147102
dataset_size: 2089942
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
koreankiwi99/mnlp_stem_math_only | koreankiwi99 | 2025-06-05T19:00:11Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T19:00:04Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 20353964
num_examples: 27500
download_size: 10496797
dataset_size: 20353964
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rweics5cs7/exo3-original-ArxivQA-text-deg | rweics5cs7 | 2025-06-05T18:48:17Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T18:48:14Z | null | ---
dataset_info:
config_name: corpus
features:
- name: corpus-id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2723315
num_examples: 8066
download_size: 1227609
dataset_size: 2723315
configs:
- config_name: corpus
data_files:
- split: train
path: corpus/train-*
---
|
danelbaz/some_name_for_hub | danelbaz | 2025-06-05T18:44:32Z | 761 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-17T07:37:09Z | null | ---
dataset_info:
config_name: None--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 32
num_examples: 1
download_size: 1961
dataset_size: 32
configs:
- config_name: None--evals
data_files:
- split: train
path: None--evals/train-*
---
|
zijian2022/itrgg | zijian2022 | 2025-06-05T18:43:21Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-05T18:36:51Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 7130,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
WillHeld/paloma_subreddits | WillHeld | 2025-06-05T18:35:12Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T17:55:25Z | null | ---
dataset_info:
- config_name: 00_AskReddit
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 986793
num_examples: 1000
download_size: 607986
dataset_size: 986793
- config_name: 01_politics
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1019751
num_examples: 1019
download_size: 629396
dataset_size: 1019751
- config_name: 02_AmItheAsshole
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1004296
num_examples: 998
download_size: 598999
dataset_size: 1004296
- config_name: 03_worldnews
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 824506
num_examples: 814
download_size: 510374
dataset_size: 824506
- config_name: 04_relationships
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 986384
num_examples: 870
download_size: 580917
dataset_size: 986384
- config_name: 05_relationship_advice
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 996669
num_examples: 945
download_size: 590987
dataset_size: 996669
- config_name: 06_news
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 839390
num_examples: 882
download_size: 513890
dataset_size: 839390
- config_name: 07_leagueoflegends
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 953796
num_examples: 950
download_size: 580339
dataset_size: 953796
- config_name: 08_todayilearned
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 828693
num_examples: 835
download_size: 515421
dataset_size: 828693
- config_name: 09_TwoXChromosomes
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 989758
num_examples: 931
download_size: 607194
dataset_size: 989758
- config_name: 10_personalfinance
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 978998
num_examples: 920
download_size: 586232
dataset_size: 978998
- config_name: 11_changemyview
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1006879
num_examples: 785
download_size: 607528
dataset_size: 1006879
- config_name: 12_unpopularopinion
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1032897
num_examples: 1078
download_size: 624862
dataset_size: 1032897
- config_name: 13_movies
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 990058
num_examples: 1000
download_size: 611157
dataset_size: 990058
- config_name: 14_Games
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 952100
num_examples: 866
download_size: 578147
dataset_size: 952100
- config_name: 15_nba
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 953427
num_examples: 1018
download_size: 585188
dataset_size: 953427
- config_name: 16_pics
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 802780
num_examples: 860
download_size: 501180
dataset_size: 802780
- config_name: 17_gaming
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 981409
num_examples: 1018
download_size: 610161
dataset_size: 981409
- config_name: 18_soccer
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 977485
num_examples: 1004
download_size: 604723
dataset_size: 977485
- config_name: 19_nfl
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 961012
num_examples: 1006
download_size: 593014
dataset_size: 961012
- config_name: 20_explainlikeimfive
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1009086
num_examples: 969
download_size: 606076
dataset_size: 1009086
- config_name: 21_conspiracy
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1000537
num_examples: 917
download_size: 621218
dataset_size: 1000537
- config_name: 22_atheism
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1001715
num_examples: 939
download_size: 609963
dataset_size: 1001715
- config_name: 23_AskMen
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 986553
num_examples: 995
download_size: 597464
dataset_size: 986553
- config_name: 24_videos
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 813034
num_examples: 831
download_size: 501656
dataset_size: 813034
- config_name: 25_sex
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 985561
num_examples: 992
download_size: 584869
dataset_size: 985561
- config_name: 26_raisedbynarcissists
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 973619
num_examples: 851
download_size: 591754
dataset_size: 973619
- config_name: 27_NoStupidQuestions
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1014479
num_examples: 1032
download_size: 619518
dataset_size: 1014479
- config_name: 28_DestinyTheGame
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 963424
num_examples: 952
download_size: 585294
dataset_size: 963424
- config_name: 29_anime
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 924618
num_examples: 865
download_size: 567829
dataset_size: 924618
- config_name: 30_DnD
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 966059
num_examples: 875
download_size: 596075
dataset_size: 966059
- config_name: 31_ukpolitics
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 932020
num_examples: 828
download_size: 574747
dataset_size: 932020
- config_name: 32_funny
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 889846
num_examples: 907
download_size: 551593
dataset_size: 889846
- config_name: 33_europe
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 851080
num_examples: 811
download_size: 525709
dataset_size: 851080
- config_name: 34_canada
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 936212
num_examples: 945
download_size: 574879
dataset_size: 936212
- config_name: 35_Christianity
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 976471
num_examples: 824
download_size: 589418
dataset_size: 976471
- config_name: 36_SquaredCircle
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 965976
num_examples: 1036
download_size: 596643
dataset_size: 965976
- config_name: 37_AskWomen
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 979864
num_examples: 946
download_size: 595617
dataset_size: 979864
- config_name: 38_legaladvice
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1013911
num_examples: 970
download_size: 603400
dataset_size: 1013911
- config_name: 39_JUSTNOMIL
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 965462
num_examples: 857
download_size: 590704
dataset_size: 965462
- config_name: 40_technology
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 850548
num_examples: 861
download_size: 523726
dataset_size: 850548
- config_name: 41_IAmA
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 975769
num_examples: 908
download_size: 607079
dataset_size: 975769
- config_name: 42_wow
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 949279
num_examples: 946
download_size: 579574
dataset_size: 949279
- config_name: 43_Parenting
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 989123
num_examples: 949
download_size: 595017
dataset_size: 989123
- config_name: 44_exmormon
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 979529
num_examples: 879
download_size: 602325
dataset_size: 979529
- config_name: 45_AdviceAnimals
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 818772
num_examples: 864
download_size: 503241
dataset_size: 818772
- config_name: 46_childfree
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 990868
num_examples: 1002
download_size: 608189
dataset_size: 990868
- config_name: 47_unitedkingdom
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 930401
num_examples: 902
download_size: 572800
dataset_size: 930401
- config_name: 48_ffxiv
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 940489
num_examples: 865
download_size: 578790
dataset_size: 940489
- config_name: 49_dndnext
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 967312
num_examples: 853
download_size: 591851
dataset_size: 967312
- config_name: 50_ADHD
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 991119
num_examples: 900
download_size: 596110
dataset_size: 991119
- config_name: 51_loseit
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 956065
num_examples: 882
download_size: 578779
dataset_size: 956065
- config_name: 52_asoiaf
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 945144
num_examples: 875
download_size: 579401
dataset_size: 945144
- config_name: 53_BabyBumps
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 979788
num_examples: 950
download_size: 591095
dataset_size: 979788
- config_name: 54_Advice
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 993906
num_examples: 942
download_size: 589329
dataset_size: 993906
- config_name: 55_australia
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1013266
num_examples: 1019
download_size: 631408
dataset_size: 1013266
- config_name: 56_CFB
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 946237
num_examples: 944
download_size: 583241
dataset_size: 946237
- config_name: 57_offmychest
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 992613
num_examples: 951
download_size: 594113
dataset_size: 992613
- config_name: 58_PublicFreakout
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 832368
num_examples: 928
download_size: 510973
dataset_size: 832368
- config_name: 59_TrueOffMyChest
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 999452
num_examples: 965
download_size: 601369
dataset_size: 999452
- config_name: 60_science
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 862136
num_examples: 822
download_size: 528332
dataset_size: 862136
- config_name: 61_magicTCG
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 952443
num_examples: 873
download_size: 579371
dataset_size: 952443
- config_name: 62_asktransgender
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 976796
num_examples: 865
download_size: 585906
dataset_size: 976796
- config_name: 63_DotA2
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 951217
num_examples: 948
download_size: 582242
dataset_size: 951217
- config_name: 64_neoliberal
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 980043
num_examples: 898
download_size: 607282
dataset_size: 980043
- config_name: 65_whowouldwin
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 925272
num_examples: 806
download_size: 572194
dataset_size: 925272
- config_name: 66_depression
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 989192
num_examples: 920
download_size: 585737
dataset_size: 989192
- config_name: 67_WTF
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 819580
num_examples: 891
download_size: 509902
dataset_size: 819580
- config_name: 68_pathofexile
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 952178
num_examples: 937
download_size: 580521
dataset_size: 952178
- config_name: 69_PoliticalDiscussion
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1022120
num_examples: 895
download_size: 617624
dataset_size: 1022120
- config_name: 70_Libertarian
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1033421
num_examples: 993
download_size: 624116
dataset_size: 1033421
- config_name: 71_PurplePillDebate
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 990114
num_examples: 904
download_size: 600828
dataset_size: 990114
- config_name: 72_Fitness
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 965570
num_examples: 994
download_size: 585311
dataset_size: 965570
- config_name: 73_books
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 995949
num_examples: 989
download_size: 613242
dataset_size: 995949
- config_name: 74_dogs
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 961856
num_examples: 864
download_size: 582778
dataset_size: 961856
- config_name: 75_pcmasterrace
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 953665
num_examples: 1022
download_size: 591599
dataset_size: 953665
- config_name: 76_teenagers
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 898364
num_examples: 901
download_size: 534931
dataset_size: 898364
- config_name: 77_stopdrinking
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 977429
num_examples: 982
download_size: 585156
dataset_size: 977429
- config_name: 78_Overwatch
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 963714
num_examples: 941
download_size: 581646
dataset_size: 963714
- config_name: 79_television
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 991476
num_examples: 1035
download_size: 614894
dataset_size: 991476
- config_name: 80_buildapc
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 908470
num_examples: 961
download_size: 550001
dataset_size: 908470
- config_name: 81_askscience
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1010614
num_examples: 931
download_size: 601360
dataset_size: 1010614
- config_name: 82_programming
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 811625
num_examples: 779
download_size: 495004
dataset_size: 811625
- config_name: 83_Guildwars2
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 939939
num_examples: 872
download_size: 572384
dataset_size: 939939
- config_name: 84_cars
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 952879
num_examples: 989
download_size: 587637
dataset_size: 952879
- config_name: 85_formula1
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 978134
num_examples: 1028
download_size: 597998
dataset_size: 978134
- config_name: 86_sysadmin
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 983522
num_examples: 962
download_size: 597274
dataset_size: 983522
- config_name: 87_hockey
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 961885
num_examples: 1020
download_size: 594776
dataset_size: 961885
- config_name: 88_india
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 978783
num_examples: 913
download_size: 612158
dataset_size: 978783
- config_name: 89_SubredditDrama
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 940899
num_examples: 902
download_size: 564297
dataset_size: 940899
- config_name: 90_DMAcademy
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 982285
num_examples: 872
download_size: 599042
dataset_size: 982285
- config_name: 91_dating_advice
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 991387
num_examples: 961
download_size: 582520
dataset_size: 991387
- config_name: 92_Catholicism
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 981309
num_examples: 817
download_size: 595413
dataset_size: 981309
- config_name: 93_Drugs
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 978965
num_examples: 936
download_size: 597542
dataset_size: 978965
- config_name: 94_trees
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 981692
num_examples: 1027
download_size: 606841
dataset_size: 981692
- config_name: 95_boardgames
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 962047
num_examples: 884
download_size: 580387
dataset_size: 962047
- config_name: 96_Conservative
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 910585
num_examples: 883
download_size: 558488
dataset_size: 910585
- config_name: 97_Futurology
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 928415
num_examples: 889
download_size: 570147
dataset_size: 928415
- config_name: 98_beyondthebump
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 976086
num_examples: 975
download_size: 589314
dataset_size: 976086
- config_name: 99_weddingplanning
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: created
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 975095
num_examples: 943
download_size: 582567
dataset_size: 975095
configs:
- config_name: 00_AskReddit
data_files:
- split: train
path: 00_AskReddit/train-*
- config_name: 01_politics
data_files:
- split: train
path: 01_politics/train-*
- config_name: 02_AmItheAsshole
data_files:
- split: train
path: 02_AmItheAsshole/train-*
- config_name: 03_worldnews
data_files:
- split: train
path: 03_worldnews/train-*
- config_name: 04_relationships
data_files:
- split: train
path: 04_relationships/train-*
- config_name: 05_relationship_advice
data_files:
- split: train
path: 05_relationship_advice/train-*
- config_name: 06_news
data_files:
- split: train
path: 06_news/train-*
- config_name: 07_leagueoflegends
data_files:
- split: train
path: 07_leagueoflegends/train-*
- config_name: 08_todayilearned
data_files:
- split: train
path: 08_todayilearned/train-*
- config_name: 09_TwoXChromosomes
data_files:
- split: train
path: 09_TwoXChromosomes/train-*
- config_name: 10_personalfinance
data_files:
- split: train
path: 10_personalfinance/train-*
- config_name: 11_changemyview
data_files:
- split: train
path: 11_changemyview/train-*
- config_name: 12_unpopularopinion
data_files:
- split: train
path: 12_unpopularopinion/train-*
- config_name: 13_movies
data_files:
- split: train
path: 13_movies/train-*
- config_name: 14_Games
data_files:
- split: train
path: 14_Games/train-*
- config_name: 15_nba
data_files:
- split: train
path: 15_nba/train-*
- config_name: 16_pics
data_files:
- split: train
path: 16_pics/train-*
- config_name: 17_gaming
data_files:
- split: train
path: 17_gaming/train-*
- config_name: 18_soccer
data_files:
- split: train
path: 18_soccer/train-*
- config_name: 19_nfl
data_files:
- split: train
path: 19_nfl/train-*
- config_name: 20_explainlikeimfive
data_files:
- split: train
path: 20_explainlikeimfive/train-*
- config_name: 21_conspiracy
data_files:
- split: train
path: 21_conspiracy/train-*
- config_name: 22_atheism
data_files:
- split: train
path: 22_atheism/train-*
- config_name: 23_AskMen
data_files:
- split: train
path: 23_AskMen/train-*
- config_name: 24_videos
data_files:
- split: train
path: 24_videos/train-*
- config_name: 25_sex
data_files:
- split: train
path: 25_sex/train-*
- config_name: 26_raisedbynarcissists
data_files:
- split: train
path: 26_raisedbynarcissists/train-*
- config_name: 27_NoStupidQuestions
data_files:
- split: train
path: 27_NoStupidQuestions/train-*
- config_name: 28_DestinyTheGame
data_files:
- split: train
path: 28_DestinyTheGame/train-*
- config_name: 29_anime
data_files:
- split: train
path: 29_anime/train-*
- config_name: 30_DnD
data_files:
- split: train
path: 30_DnD/train-*
- config_name: 31_ukpolitics
data_files:
- split: train
path: 31_ukpolitics/train-*
- config_name: 32_funny
data_files:
- split: train
path: 32_funny/train-*
- config_name: 33_europe
data_files:
- split: train
path: 33_europe/train-*
- config_name: 34_canada
data_files:
- split: train
path: 34_canada/train-*
- config_name: 35_Christianity
data_files:
- split: train
path: 35_Christianity/train-*
- config_name: 36_SquaredCircle
data_files:
- split: train
path: 36_SquaredCircle/train-*
- config_name: 37_AskWomen
data_files:
- split: train
path: 37_AskWomen/train-*
- config_name: 38_legaladvice
data_files:
- split: train
path: 38_legaladvice/train-*
- config_name: 39_JUSTNOMIL
data_files:
- split: train
path: 39_JUSTNOMIL/train-*
- config_name: 40_technology
data_files:
- split: train
path: 40_technology/train-*
- config_name: 41_IAmA
data_files:
- split: train
path: 41_IAmA/train-*
- config_name: 42_wow
data_files:
- split: train
path: 42_wow/train-*
- config_name: 43_Parenting
data_files:
- split: train
path: 43_Parenting/train-*
- config_name: 44_exmormon
data_files:
- split: train
path: 44_exmormon/train-*
- config_name: 45_AdviceAnimals
data_files:
- split: train
path: 45_AdviceAnimals/train-*
- config_name: 46_childfree
data_files:
- split: train
path: 46_childfree/train-*
- config_name: 47_unitedkingdom
data_files:
- split: train
path: 47_unitedkingdom/train-*
- config_name: 48_ffxiv
data_files:
- split: train
path: 48_ffxiv/train-*
- config_name: 49_dndnext
data_files:
- split: train
path: 49_dndnext/train-*
- config_name: 50_ADHD
data_files:
- split: train
path: 50_ADHD/train-*
- config_name: 51_loseit
data_files:
- split: train
path: 51_loseit/train-*
- config_name: 52_asoiaf
data_files:
- split: train
path: 52_asoiaf/train-*
- config_name: 53_BabyBumps
data_files:
- split: train
path: 53_BabyBumps/train-*
- config_name: 54_Advice
data_files:
- split: train
path: 54_Advice/train-*
- config_name: 55_australia
data_files:
- split: train
path: 55_australia/train-*
- config_name: 56_CFB
data_files:
- split: train
path: 56_CFB/train-*
- config_name: 57_offmychest
data_files:
- split: train
path: 57_offmychest/train-*
- config_name: 58_PublicFreakout
data_files:
- split: train
path: 58_PublicFreakout/train-*
- config_name: 59_TrueOffMyChest
data_files:
- split: train
path: 59_TrueOffMyChest/train-*
- config_name: 60_science
data_files:
- split: train
path: 60_science/train-*
- config_name: 61_magicTCG
data_files:
- split: train
path: 61_magicTCG/train-*
- config_name: 62_asktransgender
data_files:
- split: train
path: 62_asktransgender/train-*
- config_name: 63_DotA2
data_files:
- split: train
path: 63_DotA2/train-*
- config_name: 64_neoliberal
data_files:
- split: train
path: 64_neoliberal/train-*
- config_name: 65_whowouldwin
data_files:
- split: train
path: 65_whowouldwin/train-*
- config_name: 66_depression
data_files:
- split: train
path: 66_depression/train-*
- config_name: 67_WTF
data_files:
- split: train
path: 67_WTF/train-*
- config_name: 68_pathofexile
data_files:
- split: train
path: 68_pathofexile/train-*
- config_name: 69_PoliticalDiscussion
data_files:
- split: train
path: 69_PoliticalDiscussion/train-*
- config_name: 70_Libertarian
data_files:
- split: train
path: 70_Libertarian/train-*
- config_name: 71_PurplePillDebate
data_files:
- split: train
path: 71_PurplePillDebate/train-*
- config_name: 72_Fitness
data_files:
- split: train
path: 72_Fitness/train-*
- config_name: 73_books
data_files:
- split: train
path: 73_books/train-*
- config_name: 74_dogs
data_files:
- split: train
path: 74_dogs/train-*
- config_name: 75_pcmasterrace
data_files:
- split: train
path: 75_pcmasterrace/train-*
- config_name: 76_teenagers
data_files:
- split: train
path: 76_teenagers/train-*
- config_name: 77_stopdrinking
data_files:
- split: train
path: 77_stopdrinking/train-*
- config_name: 78_Overwatch
data_files:
- split: train
path: 78_Overwatch/train-*
- config_name: 79_television
data_files:
- split: train
path: 79_television/train-*
- config_name: 80_buildapc
data_files:
- split: train
path: 80_buildapc/train-*
- config_name: 81_askscience
data_files:
- split: train
path: 81_askscience/train-*
- config_name: 82_programming
data_files:
- split: train
path: 82_programming/train-*
- config_name: 83_Guildwars2
data_files:
- split: train
path: 83_Guildwars2/train-*
- config_name: 84_cars
data_files:
- split: train
path: 84_cars/train-*
- config_name: 85_formula1
data_files:
- split: train
path: 85_formula1/train-*
- config_name: 86_sysadmin
data_files:
- split: train
path: 86_sysadmin/train-*
- config_name: 87_hockey
data_files:
- split: train
path: 87_hockey/train-*
- config_name: 88_india
data_files:
- split: train
path: 88_india/train-*
- config_name: 89_SubredditDrama
data_files:
- split: train
path: 89_SubredditDrama/train-*
- config_name: 90_DMAcademy
data_files:
- split: train
path: 90_DMAcademy/train-*
- config_name: 91_dating_advice
data_files:
- split: train
path: 91_dating_advice/train-*
- config_name: 92_Catholicism
data_files:
- split: train
path: 92_Catholicism/train-*
- config_name: 93_Drugs
data_files:
- split: train
path: 93_Drugs/train-*
- config_name: 94_trees
data_files:
- split: train
path: 94_trees/train-*
- config_name: 95_boardgames
data_files:
- split: train
path: 95_boardgames/train-*
- config_name: 96_Conservative
data_files:
- split: train
path: 96_Conservative/train-*
- config_name: 97_Futurology
data_files:
- split: train
path: 97_Futurology/train-*
- config_name: 98_beyondthebump
data_files:
- split: train
path: 98_beyondthebump/train-*
- config_name: 99_weddingplanning
data_files:
- split: train
path: 99_weddingplanning/train-*
---
|
sy1998/EarthMind-Bench | sy1998 | 2025-06-05T18:34:50Z | 0 | 1 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-06-05T11:58:20Z | 1 | ---
license: apache-2.0
---
|
Koushim/en-te-kn-translation-dataset | Koushim | 2025-06-05T18:34:48Z | 0 | 0 | [
"task_categories:translation",
"annotations_creators:manual",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:ai4bharat/samanantar",
"language:en",
"language:te",
"language:kn",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation"
] | 2025-06-05T18:26:50Z | null | ---
annotations_creators:
- manual
language_creators:
- found
language:
- en
- te
- kn
license: cc-by-4.0
multilinguality:
- multilingual
pretty_name: Multilingual English-Telugu-Kannada Translation Dataset
size_categories:
- 1M<n<10M
source_datasets:
- ai4bharat/samanantar
task_categories:
- translation
task_ids:
- translation
dataset_info:
features:
- name: target_lang_code
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2231633137
num_examples: 7966485
download_size: 700766001
dataset_size: 2231633137
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# 📚 Multilingual English-Telugu-Kannada Translation Dataset
This dataset is a curated and preprocessed subset of the [AI4Bharat Samanantar](https://huggingface.co/datasets/ai4bharat/samanantar) dataset focused on multilingual translation tasks between English, Telugu (`te_IN`), and Kannada (`kn_IN`).
## ✨ Dataset Features
- Language pairs:
- `en ↔ te_IN`
- `en ↔ kn_IN`
- Preprocessed:
- Filtered for sentence length (min=3, max=128 words)
- Cleaned and normalized
- Tokenized using Hugging Face Transformers tokenizers:
- M2M100Tokenizer (for `en↔kn`)
- MBart50TokenizerFast (for `en↔te`)
## 📦 Dataset Structure
The dataset contains the following fields:
- `src_texts`: Source language sentence (English)
- `tgt_texts`: Target language sentence (Telugu or Kannada)
- `labels`: Tokenized target sequence for model training
- `input_ids`, `attention_mask`: Tokenized source sentence
The dataset is split into:
- `train`: Training samples
- `validation`: Small subset for evaluation
## 📊 Size
- ~7.9M total sentence pairs
- Supports batch training and multilingual fine-tuning
## 💡 Usage Example
```python
from datasets import load_dataset
dataset = load_dataset("Koushim/en-te-kn-translation-dataset")
print(dataset["train"][0])
```
## 🧠 Intended Uses
* Train multilingual translation models (MBart, Marian, M2M100)
* Fine-tune LLMs on Indic translation
* Evaluate BLEU or other metrics for low-resource translation
## 📜 License
CC-BY-4.0
## ✍️ Author
Koushik Reddy
[GitHub](https://github.com/Koushik7893) | [Hugging Face](https://huggingface.co/datasets/Koushim)
## 🙏 Acknowledgements
Thanks to AI4Bharat for providing the [Samanantar](https://huggingface.co/datasets/ai4bharat/samanantar) dataset which served as the base for this project. |
IndoorOutdoor/results | IndoorOutdoor | 2025-06-05T18:32:08Z | 221 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-03T01:00:13Z | null | ---
dataset_info:
features:
- name: Model Name
dtype: string
- name: Group Name
dtype: string
- name: Execution Time (s)
dtype: float64
- name: Accuracy
dtype: float64
- name: TP
dtype: float64
- name: FP
dtype: float64
- name: FN
dtype: float64
- name: TN
dtype: float64
splits:
- name: train
num_bytes: 1434
num_examples: 21
download_size: 3913
dataset_size: 1434
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rweics5cs7/exo3-original-MP-DocVQA-text | rweics5cs7 | 2025-06-05T18:30:19Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T18:29:58Z | null | ---
dataset_info:
config_name: corpus
features:
- name: corpus-id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1302324
num_examples: 741
download_size: 746965
dataset_size: 1302324
configs:
- config_name: corpus
data_files:
- split: train
path: corpus/train-*
---
|
VGS-AI/OpenR1-Cleaned | VGS-AI | 2025-06-05T18:26:19Z | 89 | 0 | [
"task_categories:question-answering",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2505.17373",
"region:us"
] | [
"question-answering"
] | 2025-05-22T21:30:00Z | null | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2634118946
num_examples: 48394
- name: validation
num_bytes: 28313171
num_examples: 500
- name: test
num_bytes: 27765044
num_examples: 500
download_size: 1162355943
dataset_size: 2690197161
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
This dataset is used in the paper [Value-Guided Search for Efficient Chain-of-Thought Reasoning](https://huggingface.co/papers/2505.17373). It contains data for training and evaluating value models for improved long-context reasoning.
GitHub Repository: https://github.com/kaiwenw/value-guided-search
Related resources:
* **Dataset (OpenR1-Cleaned):** https://huggingface.co/datasets/VGS-AI/OpenR1-Cleaned
* **Dataset (OpenR1-VM):** https://huggingface.co/datasets/VGS-AI/OpenR1-VM
* **Value Model (DeepSeek-VM-1.5B):** https://huggingface.co/VGS-AI/DeepSeek-VM-1.5B
**How to use:**
To load the released value model, you can use the following code snippet:
```python
import classifier_lib
import torch
model_loading_kwargs = dict(attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, use_cache=False)
classifier = classifier_lib.Qwen2ForClassifier.from_pretrained("VGS-AI/DeepSeek-VM-1.5B", **model_loading_kwargs)
device = torch.device("cuda")
# your input_ids
input_ids = torch.tensor([151646, 151644, 18, 13, 47238, ...], dtype=torch.long, device=device)
attention_mask = torch.ones_like(input_ids)
classifier_outputs = classifier(input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0))
# use last index of the sequence
scores = classifier_outputs.success_probs.squeeze(0)[-1].item()
``` |
salamnocap/ml-figs | salamnocap | 2025-06-05T18:24:51Z | 43 | 0 | [
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"doi:10.57967/hf/5251",
"region:us",
"education",
"figure",
"caption",
"books"
] | [] | 2025-04-25T19:34:49Z | null | ---
license: cc-by-nc-4.0
language:
- en
tags:
- education
- figure
- caption
- books
pretty_name: ML-Figs
size_categories:
- 1K<n<10K
---
[](https://github.com/salamnocap/ml-figs-ldm)
# ML-FIGS 📚📊
This dataset comprises a collection of **4,256 figures** and corresponding metadata extracted from **43 different machine learning books**. Each directory represents one book and contains a subdirectory with **images** and a **JSON file** holding metadata for each figure. The dataset is organized hierarchically as follows:
```python
ml-figs/
├── Book_1/
│ ├── image/
│ │ ├── Figure1.png
│ │ ├── Figure2.png
│ │ └── ...
│ └── Book_1.json
├── Book_2/
│ ├── image/
│ │ ├── Figure1.png
│ │ ├── Figure2.png
│ │ └── ...
│ └── Book_2.json
├── ...
│
├── mlfigs_train.json
│
└── mlfigs_test.json
```
Each **JSON file** in the dataset represents the metadata for all the figures within a particular book. A typical entry for a figure in the JSON file includes the following attributes:
- **caption**: Describes the content of the figure. For example, "Figure 25: Gradient Descent (4/4)".
- **captionBoundary**: The bounding box of the caption text within the page, represented as a dictionary:
- **x1, x2**: Horizontal boundaries of the caption.
- **y1, y2**: Vertical boundaries of the caption.
- **figType**: The type of figure (usually "Figure", "Table", or other structural elements).
- **imageText**: A list of any text recognized within the figure image.
- **name**: The unique identifier for each figure within the book (e.g., "25").
- **page**: The page number on which the figure appears.
- **regionBoundary**: The bounding box of the entire figure within the page, defined as:
- **x1, x2**: Horizontal boundaries of the figure.
- **y1, y2**: Vertical boundaries of the figure.
- **renderDpi**: The DPI (dots per inch) resolution used when rendering the image.
- **renderURL**: The path to the corresponding figure image file within the image/ directory.
- **ocr**: OCR (Optical Character Recognition) data, capturing any text detected within the figure.
- **text** : A list of strings, representing the recognized text within the figure. These are typically individual words or symbols extracted by the OCR system.
- **left** : Horizontal coordinates (in pixels) representing the left edge of each recognized text element.
- **top** : Vertical coordinates (in pixels) representing the top edge of each recognized text element.
- **width** : Width of each recognized text element, providing the horizontal span of the text box.
- **height** : Height of each recognized text element.
- **conf** : Each recognized text element is assigned a confidence score ranging from 0 to 100, indicating the OCR system's confidence in its recognition.
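Given the attribute layout above, a figure's pixel size and its high-confidence OCR tokens can be computed with a few lines of Python. The entry below is a made-up illustration shaped like the documented schema, not an actual record from the dataset:

```python
# Minimal sketch: reading one figure's metadata entry (shaped like the
# attributes documented above) and filtering its OCR tokens by confidence.
# The sample entry is illustrative, not taken from the dataset.

def figure_size(entry):
    """Return (width, height) of the figure's region in pixels."""
    box = entry["regionBoundary"]
    return box["x2"] - box["x1"], box["y2"] - box["y1"]

def confident_ocr_text(entry, min_conf=80):
    """Keep OCR tokens whose confidence is at least min_conf."""
    ocr = entry["ocr"]
    return [tok for tok, conf in zip(ocr["text"], ocr["conf"]) if conf >= min_conf]

sample_entry = {
    "caption": "Figure 25: Gradient Descent (4/4)",
    "regionBoundary": {"x1": 100, "x2": 500, "y1": 200, "y2": 450},
    "ocr": {
        "text": ["Gradient", "Descent", "noise"],
        "conf": [96, 91, 42],
    },
}

print(figure_size(sample_entry))          # (400, 250)
print(confident_ocr_text(sample_entry))   # ['Gradient', 'Descent']
```

The same pattern scales to a whole book: load `Book_N.json`, iterate its figure entries, and apply these helpers.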
# 📕 Citation
```bibtex
@misc{salamat_kuantaiuly_2025,
author = { Salamat Kuantaiuly },
title = { ml-figs (Revision 8695d1b) },
year = 2025,
url = { https://huggingface.co/datasets/salamnocap/ml-figs },
doi = { 10.57967/hf/5251 },
publisher = { Hugging Face }
}
``` |
uhuru-jonathan/lekiwi_test | uhuru-jonathan | 2025-06-05T18:19:46Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-05T18:20:45Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "lekiwi",
"total_episodes": 7,
"total_frames": 5356,
"total_tasks": 1,
"total_videos": 7,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:4"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
9
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
640,
480,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 640,
"video.width": 480,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
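The feature schema above can also be inspected programmatically. The sketch below uses a trimmed, illustrative copy of the `features` mapping (not the full file) to tally the flattened dimensionality of the numeric features:

```python
import math

# Trimmed, illustrative copy of the "features" mapping from info.json above.
features = {
    "action": {"dtype": "float32", "shape": [9]},
    "observation.state": {"dtype": "float32", "shape": [9]},
    "observation.images.front": {"dtype": "video", "shape": [640, 480, 3]},
    "timestamp": {"dtype": "float32", "shape": [1]},
}

def flat_dims(feats, dtypes=("float32",)):
    """Total flattened dimensionality of the numeric (non-video) features."""
    return sum(
        math.prod(spec["shape"])
        for spec in feats.values()
        if spec["dtype"] in dtypes
    )

print(flat_dims(features))  # 19  (9 action + 9 state + 1 timestamp)
```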
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Jgold90/sweep_mano | Jgold90 | 2025-06-05T18:16:48Z | 81 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-27T16:07:09Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "human",
"total_episodes": 498,
"total_frames": 96415,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:498"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": null,
"features": {
"lang": {
"dtype": "float32",
"shape": [
512
],
"names": null
},
"observation.image.low": {
"dtype": "image",
"shape": [
480,
640,
3
],
"names": [
"width",
"height",
"channels"
]
},
"observation.image.wrist": {
"dtype": "image",
"shape": [
256,
256,
3
],
"names": [
"width",
"height",
"channels"
]
},
"observation.state.box.center": {
"dtype": "float32",
"shape": [
2
]
},
"observation.state.box.kp2d": {
"dtype": "float32",
"shape": [
21,
2
]
},
"observation.state.box.size": {
"dtype": "float32",
"shape": [
1
]
},
"observation.state.kp2d": {
"dtype": "float32",
"shape": [
21,
2
]
},
"observation.state.kp3d": {
"dtype": "float32",
"shape": [
21,
3
]
},
"observation.state.mano.betas": {
"dtype": "float32",
"shape": [
10
]
},
"observation.state.mano.global_orient": {
"dtype": "float32",
"shape": [
3,
3
]
},
"observation.state.mano.hand_pose": {
"dtype": "float32",
"shape": [
15,
3,
3
]
},
"observation.state.right": {
"dtype": "float32",
"shape": [
1
]
},
"observation.state.scaled_focal_length": {
"dtype": "float32",
"shape": [
1
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Haribot099/so101_15 | Haribot099 | 2025-06-05T18:01:09Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-05T17:44:21Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 2,
"total_frames": 892,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
super-pingouin/wikipedia-stem-articles | super-pingouin | 2025-06-05T17:46:36Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T17:46:33Z | null | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 4911800
num_examples: 887
download_size: 2610695
dataset_size: 4911800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BlueSeaHoneyBee/13TEST_Rated_HSE_QA_Reviewed | BlueSeaHoneyBee | 2025-06-05T17:44:43Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T17:44:41Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: thinking_step
dtype: string
- name: answer
dtype: string
- name: rating
dtype: int64
splits:
- name: train
num_bytes: 2009
num_examples: 3
download_size: 5717
dataset_size: 2009
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Cameronbarry/cams | Cameronbarry | 2025-06-05T17:39:42Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-05T17:39:41Z | null | ---
license: apache-2.0
---
|
jlbaker361/clip-art_coco_captioned-1000 | jlbaker361 | 2025-06-05T17:33:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T17:32:57Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: embedding
sequence:
sequence:
sequence: float32
- name: text
sequence:
sequence:
sequence: float16
- name: prompt
dtype: string
- name: posterior
sequence:
sequence:
sequence: float16
splits:
- name: train
num_bytes: 249480845.0
num_examples: 1000
download_size: 242482521
dataset_size: 249480845.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jlbaker361/ssl-coco_captioned-1000 | jlbaker361 | 2025-06-05T17:33:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T15:46:54Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: embedding
sequence:
sequence:
sequence: float32
- name: text
sequence:
sequence:
sequence: float16
- name: prompt
dtype: string
- name: posterior
sequence:
sequence:
sequence: float16
splits:
- name: train
num_bytes: 259178898.0
num_examples: 1000
download_size: 254013098
dataset_size: 259178898.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
slprl/TinyStress-15K | slprl | 2025-06-05T17:18:46Z | 119 | 3 | [
"task_categories:audio-classification",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2505.19103",
"region:us"
] | [
"audio-classification"
] | 2025-05-25T07:23:06Z | 2 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: original_sample_index
dtype: int64
- name: sentence_index
dtype: int64
- name: transcription
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: ssml
dtype: string
- name: emphasis_indices
sequence: int64
- name: metadata
struct:
- name: gender
dtype: int64
- name: language_code
dtype: string
- name: voice_name
dtype: string
- name: word_start_timestamps
sequence: float64
- name: aligned_whisper_transcriptions
dtype: string
splits:
- name: train
num_bytes: 5215476174
num_examples: 15000
- name: test
num_bytes: 337636506
num_examples: 1000
download_size: 4817381967
dataset_size: 5553112680
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
language:
- en
task_categories:
- audio-classification
license: cc-by-nc-4.0
---
# 📚 TinyStress-15K Dataset
TinyStress-15K is a synthetic dataset developed as part of our paper: "[***WhiStress***](https://arxiv.org/abs/2505.19103): *Enriching Transcriptions with Sentence Stress Detection*". It is designed to support research on models that understand sentence stress, i.e., emphasis on specific words that affects sentence meaning.
Check out our [project page](https://pages.cs.huji.ac.il/adiyoss-lab/whistress/) to access more resources.
## 📦 Dataset Summary
- **Name**: `TinyStress-15K`
- **Type**: Synthetic speech dataset with stress annotations
- **Samples**: 15,000 training and 1,000 testing examples
- **Sampling Rate**: 48 kHz
- **Texts**: Derived from [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)
---
## 🧩 Dataset Structure
Each sample contains:
| Feature | Description |
|--------|-------------|
| `id` | Unique sample identifier |
| `original_sample_index` | Index of the original TinyStories sample (story) |
| `sentence_index` | Position of the sentence in the original story |
| `transcription` | Text transcription of the spoken audio |
| `audio` | Audio waveform (`.wav`), sampled at 48kHz |
| `ssml` | SSML-formatted version used to manipulate prosodic features |
| `emphasis_indices` | Indices of the emphasized words in the transcription |
| `metadata.gender` | Speaker gender (integer-coded) |
| `metadata.language_code` | Language tag (e.g., `"en"`) |
| `metadata.voice_name` | Synthetic voice name |
| `word_start_timestamps` | Start times (in seconds) for each word |
| `aligned_whisper_transcriptions` | Whisper-generated transcription |
---
## 📥 How to Use
```python
from datasets import load_dataset
dataset = load_dataset("slprl/TinyStress-15K", split="train")
sample = dataset[0]
words = sample["transcription"].split()
stressed_words = [words[i] for i in sample["emphasis_indices"]]
print(sample["transcription"])
print(sample["emphasis_indices"])
print(stressed_words)
```
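Since each record carries both `emphasis_indices` and `word_start_timestamps`, stressed words can be paired with their onset times. The sketch below uses a made-up record and assumes the timestamps align one-to-one with the whitespace-split words of `transcription`:

```python
# Illustrative sketch: pairing stressed words with their start times using
# the emphasis_indices and word_start_timestamps fields described above.
# The record below is made up for demonstration, not drawn from the dataset.

def stressed_word_onsets(record):
    """Return [(word, start_time_s)] for each emphasized word."""
    words = record["transcription"].split()
    starts = record["word_start_timestamps"]
    return [(words[i], starts[i]) for i in record["emphasis_indices"]]

record = {
    "transcription": "the cat sat on the mat",
    "emphasis_indices": [1, 5],
    "word_start_timestamps": [0.00, 0.21, 0.48, 0.74, 0.88, 1.05],
}

print(stressed_word_onsets(record))  # [('cat', 0.21), ('mat', 1.05)]
```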
---
## Notes
The data is intended for research purposes only.
---
## 🧠 Citation
If you use our dataset, please cite our work:
```bibtex
@misc{yosha2025whistress,
title={WHISTRESS: Enriching Transcriptions with Sentence Stress Detection},
author={Iddo Yosha and Dorin Shteyman and Yossi Adi},
year={2025},
eprint={2505.19103},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.19103},
}
``` |
livecodebench/code_generation_lite | livecodebench | 2025-06-05T17:18:27Z | 47,439 | 42 | [
"license:cc",
"size_categories:n<1K",
"arxiv:2403.07974",
"region:us",
"code",
"code generation"
] | [] | 2024-04-16T04:46:53Z | null | ---
license: cc
tags:
- code
- code generation
pretty_name: LiveCodeBench
size_categories:
- n<1K
---
## LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code
<p align="center">
<a href="https://livecodebench.github.io/">🏠 Home Page</a> •
<a href="https://github.com/LiveCodeBench/LiveCodeBench">💻 GitHub Repository </a> •
<a href="https://livecodebench.github.io/leaderboard.html">🏆 Leaderboard</a> •
<a href="https://arxiv.org/abs/2403.07974">📄 Paper </a>
</p>

## Change Log
Since LiveCodeBench is a continuously updated benchmark, we provide different versions of the dataset. Particularly, we provide the following versions of the dataset:
- `release_v1`: The initial release of the dataset with problems released between May 2023 and Mar 2024 containing 400 problems.
- `release_v2`: The updated release of the dataset with problems released between May 2023 and May 2024 containing 511 problems.
- `release_v3`: The updated release of the dataset with problems released between May 2023 and Jul 2024 containing 612 problems.
- `release_v4`: The updated release of the dataset with problems released between May 2023 and Sep 2024 containing 713 problems.
- `release_v5`: The updated release of the dataset with problems released between May 2023 and Jan 2025 containing 880 problems.
You can use the `version_tag` argument to load the desired version of the dataset. Additionally, you can use version tags like `v1`, `v2`, `v1_v3`, `v4_v5` to get the problems released in a specific version.
## Dataset Description
LiveCodeBench is a "live" updating benchmark for holistically evaluating code related capabilities of LLMs.
Particularly, it evaluates LLMs across a range of capabilties including code generation, self-repair, test output prediction, and code execution.
This is the code generation scenario of LiveCodeBench. It is also used for evaluating self-repair using test case feedback.
LiveCodeBench problems are collected from competition programming websites with particular focus on maintaining problem quality, test case quality, and problem difficulty diversity.
This scenario currently hosts over 500 problems from LeetCode, AtCoder, and Codeforces.
Each problem instance consists of a problem description, input/output examples, and hidden test cases.
Additionally, every problem is tagged with its difficulty level and release date, which allows measuring model performance across different time windows.
The goal is to generate a correct and efficient solution for each problem instance.
The initial code_generation dataset included a larger number of test cases, which led to a substantially larger dataset size. This (lite) version prunes and samples tests while aiming to preserve performance comparable to the original dataset. Going forward, LiveCodeBench will use this lite version for code generation evaluations.
## Usage
You can use the dataset by loading it with the Hugging Face `datasets` library. The `version_tag` argument specifies the (temporal) version of the dataset: "release_v1" corresponds to the initial release and "release_v2" to the second version.
```python
from datasets import load_dataset
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
``` |
deepghs/game_characters | deepghs | 2025-06-05T17:15:16Z | 1,441 | 27 | [
"license:apache-2.0",
"region:us"
] | [] | 2023-01-28T11:12:51Z | null | ---
license: apache-2.0
---
# Database of Characters in Mobile Games
[](https://huggingface.co/datasets/deepghs/game_characters)
All the character in the following games are supported:
* Arknights (crawled from [https://prts.wiki](https://prts.wiki))
* Fate/Grand Order (crawled from [https://fgo.wiki](https://fgo.wiki))
* Azur Lane (crawled from [https://wiki.biligame.com/blhx](https://wiki.biligame.com/blhx))
* Girls' Front-Line (crawled from [https://iopwiki.com/](https://iopwiki.com/))
* Genshin Impact (crawled from [https://genshin-impact.fandom.com/ja/wiki/%E5%8E%9F%E7%A5%9E_Wiki](https://genshin-impact.fandom.com/ja/wiki/%E5%8E%9F%E7%A5%9E_Wiki))
The source code and Python library are hosted at [narugo1992/gchar](https://github.com/narugo1992/gchar), and a scheduled GitHub Actions job updates the data automatically once a day, so it always reflects the latest version.
More character data for other games is coming...
|
BlueSeaHoneyBee/10TEST_Rated_HSE_QA_Reviewed | BlueSeaHoneyBee | 2025-06-05T17:12:13Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T16:54:32Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: thinking_step
dtype: string
- name: answer
dtype: string
- name: rating
dtype: int64
splits:
- name: train
num_bytes: 13234
num_examples: 20
download_size: 11591
dataset_size: 13234
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
harrisonenyeartkaleido/defi_training_dataset_6_5_25 | harrisonenyeartkaleido | 2025-06-05T17:10:08Z | 0 | 0 | [
"language:en",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finance"
] | [] | 2025-06-05T17:09:25Z | null | ---
language:
- en
tags:
- finance
size_categories:
- n<1K
--- |
revyu/pulze_intent_unwrapped_and_ranked | revyu | 2025-06-05T16:59:47Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T16:59:41Z | null | ---
dataset_info:
features:
- name: prompt_uid
dtype: string
- name: prompt_category
dtype: string
- name: prompt
dtype: string
- name: claude-3-haiku-20240307_response
dtype: string
- name: claude-3-opus-20240229_response
dtype: string
- name: claude-3-sonnet-20240229_response
dtype: string
- name: command-r_response
dtype: string
- name: command-r-plus_response
dtype: string
- name: dbrx-instruct_response
dtype: string
- name: gpt-3.5-turbo-0125_response
dtype: string
- name: gpt-4-turbo-2024-04-09_response
dtype: string
- name: llama-3-70b-instruct_response
dtype: string
- name: mistral-large_response
dtype: string
- name: mistral-medium_response
dtype: string
- name: mistral-small_response
dtype: string
- name: mixtral-8x7b-instruct_response
dtype: string
- name: response_mixtral-8x7b-instruct_reward
dtype: float64
- name: response_mixtral-8x7b-instruct_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_mistral-small_reward
dtype: float64
- name: response_mistral-small_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_mistral-medium_reward
dtype: float64
- name: response_mistral-medium_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_gpt-3.5-turbo-0125_reward
dtype: float64
- name: response_gpt-3.5-turbo-0125_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_mistral-large_reward
dtype: float64
- name: response_mistral-large_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_gpt-4-turbo-2024-04-09_reward
dtype: float64
- name: response_gpt-4-turbo-2024-04-09_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_claude-3-opus-20240229_reward
dtype: float64
- name: response_claude-3-opus-20240229_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_claude-3-sonnet-20240229_reward
dtype: float64
- name: response_claude-3-sonnet-20240229_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_command-r_reward
dtype: float64
- name: response_command-r_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_command-r-plus_reward
dtype: float64
- name: response_command-r-plus_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_claude-3-haiku-20240307_reward
dtype: float64
- name: response_claude-3-haiku-20240307_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_dbrx-instruct_reward
dtype: float64
- name: response_dbrx-instruct_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
- name: response_llama-3-70b-instruct_reward
dtype: float64
- name: response_llama-3-70b-instruct_by_objective
struct:
- name: argilla-judge_lm
dtype: float64
- name: argilla-overall_quality
dtype: float64
- name: beavertails-is_safe
dtype: float64
- name: code-complexity
dtype: float64
- name: code-explanation
dtype: float64
- name: code-instruction-following
dtype: float64
- name: code-readability
dtype: float64
- name: code-style
dtype: float64
- name: helpsteer-coherence
dtype: float64
- name: helpsteer-complexity
dtype: float64
- name: helpsteer-correctness
dtype: float64
- name: helpsteer-helpfulness
dtype: float64
- name: helpsteer-verbosity
dtype: float64
- name: prometheus-score
dtype: float64
- name: ultrafeedback-helpfulness
dtype: float64
- name: ultrafeedback-honesty
dtype: float64
- name: ultrafeedback-instruction_following
dtype: float64
- name: ultrafeedback-overall_score
dtype: float64
- name: ultrafeedback-truthfulness
dtype: float64
splits:
- name: train
num_bytes: 68280366
num_examples: 2522
download_size: 38750345
dataset_size: 68280366
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cristiano-sartori/rag_ft | cristiano-sartori | 2025-06-05T16:54:20Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T16:54:17Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3753310
num_examples: 1425
download_size: 1374941
dataset_size: 3753310
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fintech-final/FRESH-all_products-with_missing_products-with_cold_item | fintech-final | 2025-06-05T16:48:09Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T13:33:39Z | null | ---
dataset_info:
features:
- name: session_id
dtype: float64
- name: event_sequence
sequence: string
- name: product_sequence
sequence: int64
- name: start_time
dtype: float64
- name: end_time
dtype: float64
splits:
- name: train
num_bytes: 50457
num_examples: 1065
download_size: 9027
dataset_size: 50457
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
carminho/piqa-mt-pt | carminho | 2025-06-05T16:45:51Z | 0 | 0 | [
"language:pt",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T12:57:09Z | null | ---
language:
- pt
configs:
- config_name: default
data_files:
- split: train
path: piqa_train_pt.jsonl
- split: test
path: piqa_test_pt.jsonl
- split: validation
path: piqa_validation_pt.jsonl
--- |
carminho/siqa-mt-pt | carminho | 2025-06-05T16:43:19Z | 0 | 0 | [
"language:pt",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T13:21:38Z | null | ---
language:
- pt
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: siqa_train_pt.jsonl
- split: validation
path: siqa_validation_pt.jsonl
--- |
fannymissillier/balanced-mcqa-dataset-cleaned | fannymissillier | 2025-06-05T16:41:33Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T16:41:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 10195500
num_examples: 12308
- name: validation
num_bytes: 1151722
num_examples: 1368
download_size: 6623485
dataset_size: 11347222
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
sugarquark/temp-fashion-1m | sugarquark | 2025-06-05T16:40:31Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-05T16:39:07Z | null | ---
viewer: false
---
Fashion shows from 1990 to 2025. Cloned from and owned by tonyassi.
|
cristiano-sartori/open_mnlp_m1 | cristiano-sartori | 2025-06-05T16:39:07Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T16:39:04Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 538506
num_examples: 475
download_size: 266937
dataset_size: 538506
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Gatescrispy/Largest_therapeutic_molecule_dataset_with_1.4M_compounds_for_scientific_research | Gatescrispy | 2025-06-05T16:34:18Z | 14 | 0 | [
"task_categories:feature-extraction",
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"region:us",
"chemistry",
"drug-discovery",
"molecules",
"bioactivity",
"traditional-medicine",
"phytotherapy",
"therapeutic-compounds",
"natural-products",
"machine-learning",
"pharmaceutical-research",
"cheminformatics",
"qsar",
"drug-repurposing"
] | [
"feature-extraction",
"text-classification"
] | 2025-06-05T06:49:33Z | null | ---
license: cc-by-4.0
task_categories:
- feature-extraction
- text-classification
tags:
- chemistry
- drug-discovery
- molecules
- bioactivity
- traditional-medicine
- phytotherapy
- therapeutic-compounds
- natural-products
- machine-learning
- pharmaceutical-research
- cheminformatics
- qsar
- drug-repurposing
language:
- en
size_categories:
- 1M<n<10M
pretty_name: "PhytoAI MEGA Dataset - 1.4M Therapeutic Molecules"
---
# 🧬 PhytoAI MEGA Dataset - 1.4M Therapeutic Molecules
<div align="center">



**A Comprehensive Dataset of Therapeutic Molecules for AI Drug Discovery Research**
</div>
## 🌟 Overview
The **PhytoAI MEGA Dataset** contains **1,600,000+ therapeutic molecules** with comprehensive molecular properties, bioactivity data, and traditional medicine annotations. This dataset bridges traditional pharmaceutical knowledge and modern computational methods, enabling research opportunities in drug discovery.
### 🏆 Dataset Features
- **Large Scale**: 1,600,000+ unique therapeutic molecules
- **Comprehensive Coverage**: Traditional medicine systems + modern pharmacology
- **High Quality**: Curated and validated molecular data
- **Optimized Format**: Apache Arrow for efficient processing
- **Open Access**: CC BY 4.0 license for research and commercial use
## 📊 Dataset Composition
### Scale & Statistics
| Metric | Value | Description |
|--------|--------|-------------|
| **Total Molecules** | 1,600,000+ | Unique therapeutic compounds |
| **Data Size** | 759.9 MB | Optimized Apache Arrow format |
| **Splits** | train/validation/test | 80%/10%/10% distribution |
| **Format** | Apache Arrow | High-performance columnar format |
| **License** | CC BY 4.0 | Open for research and commercial use |
### Data Sources
Our data integration includes:
- **🔬 Scientific Literature**: PubMed research papers
- **💊 Bioactivity Databases**: ChEMBL validated bioactivities
- **🌿 Traditional Medicine**: Traditional use records
- **📚 Pharmacopoeias**: International pharmacopoeias
## 🗂️ Dataset Structure
### File Organization
```
📁 PhytoAI-MEGA-Dataset/
├── 🗂️ train/ # Training split (80% - ~1,280,000 molecules)
│ ├── data-00000-of-00002.arrow (304 MB)
│ └── data-00001-of-00002.arrow (304 MB)
├── 🗂️ validation/ # Validation split (10% - ~160,000 molecules)
│ └── data-00000-of-00001.arrow (76 MB)
├── 🗂️ test/ # Test split (10% - ~160,000 molecules)
│ └── data-00000-of-00001.arrow (76 MB)
└── 📄 README.md # This documentation
```
**Total Size**: 759.9 MB of molecular data
### Molecular Features Schema
Each molecule contains:
```json
{
"id": "unique_identifier",
"name": "compound_name",
"molecular_weight": float, // Molecular weight in Daltons
"molecular_formula": "string", // Chemical formula (e.g., C21H30O2)
"smiles": "string", // Canonical SMILES notation
"inchi": "string", // InChI identifier
"logp": float, // Lipophilicity (octanol-water partition)
"hbd": int, // Hydrogen bond donors
"hba": int, // Hydrogen bond acceptors
"tpsa": float, // Topological polar surface area
"rotatable_bonds": int, // Number of rotatable bonds
"bioactivity_score": float, // Predicted therapeutic potential (0-1)
"safety_index": float, // Predicted safety profile (0-1)
"traditional_use": "string", // Historical therapeutic applications
"bioactivities": ["array"], // Biological activities
"targets": ["array"], // Molecular targets
"pathways": ["array"], // Biological pathways
"collection_date": "iso_date", // Data integration timestamp
"is_champion": boolean, // High therapeutic potential flag
"literature_refs": ["array"], // Supporting research papers
"source_database": "string" // Original data source
}
```
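A minimal sanity check over the numeric descriptor fields can be written directly against this schema; the field names below are taken from the schema above, while the sample record is purely illustrative (not taken from the dataset):

```python
# Expected numeric descriptor fields and their types, per the schema above
NUMERIC_FIELDS = {
    "molecular_weight": float, "logp": float, "tpsa": float,
    "bioactivity_score": float, "safety_index": float,
    "hbd": int, "hba": int, "rotatable_bonds": int,
}

def validate_record(rec):
    """Return a list of problems for one record (an empty list means it looks OK)."""
    problems = []
    for field, typ in NUMERIC_FIELDS.items():
        if field not in rec:
            problems.append(f"missing {field}")
        elif not isinstance(rec[field], typ):
            problems.append(f"{field} should be {typ.__name__}")
    return problems

# Illustrative record (not taken from the dataset)
sample = {"molecular_weight": 314.5, "logp": 3.2, "tpsa": 40.5,
          "bioactivity_score": 0.81, "safety_index": 0.9,
          "hbd": 1, "hba": 2, "rotatable_bonds": 4}

print(validate_record(sample))  # → []
```

A check like this is useful before feeding records into a downstream ML pipeline, since missing or mistyped descriptors would otherwise surface as silent NaN handling.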
## 🎯 Therapeutic Coverage
### Major Therapeutic Categories
Distribution across therapeutic areas:
| Therapeutic Area | Molecules | Percentage | Key Targets |
|------------------|-----------|------------|-------------|
| **Anti-inflammatory** | ~180,000 | 11.2% | COX-1/2, NF-κB, TNF-α |
| **Antioxidant** | ~220,000 | 13.8% | ROS scavenging, SOD, catalase |
| **Cardiovascular** | ~150,000 | 9.4% | ACE, β-blockers, calcium channels |
| **Neuroprotective** | ~130,000 | 8.1% | AChE, MAO, NMDA receptors |
| **Anti-cancer** | ~160,000 | 10.0% | p53, MDR1, apoptosis pathways |
| **Antimicrobial** | ~140,000 | 8.8% | Cell wall synthesis, protein synthesis |
| **Multi-target** | ~200,000 | 12.5% | Complex polypharmacology |
| **Other activities** | ~420,000 | 26.2% | Metabolic, endocrine, immune |
### Drug-likeness Assessment
Molecular properties distribution:
- **Lipinski's Rule of Five**: 89.3% compliance
- **Veber Rules**: 92.1% compliance
- **PAINS Filters**: 96.8% pass rate
- **Lead-like Properties**: 78.4% compliance
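These rule sets can be evaluated per record from the descriptor fields in the schema; a minimal sketch (thresholds follow the standard Lipinski and Veber definitions, and the example molecule is illustrative, not from the dataset):

```python
def lipinski_ok(mol):
    """Lipinski's Rule of Five: MW <= 500 Da, logP <= 5, HBD <= 5, HBA <= 10."""
    return (mol["molecular_weight"] <= 500 and mol["logp"] <= 5
            and mol["hbd"] <= 5 and mol["hba"] <= 10)

def veber_ok(mol):
    """Veber rules: rotatable bonds <= 10 and TPSA <= 140."""
    return mol["rotatable_bonds"] <= 10 and mol["tpsa"] <= 140

# Illustrative molecule record (not taken from the dataset)
mol = {"molecular_weight": 314.5, "logp": 3.2, "hbd": 1,
       "hba": 2, "tpsa": 40.5, "rotatable_bonds": 4}

print(lipinski_ok(mol), veber_ok(mol))  # → True True
```

The same predicates can be applied column-wise with pandas for the full dataset, as shown in the Usage Guide below.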
## 💻 Usage Guide
### Quick Start
```python
from datasets import load_dataset
import pandas as pd
# Load the complete dataset
dataset = load_dataset("Gatescrispy/Largest_therapeutic_molecule_dataset_with_1.4M_compounds_for_scientific_research")
# Access different splits
train_data = dataset['train']
validation_data = dataset['validation']
test_data = dataset['test']
print(f"Training molecules: {len(train_data):,}")
print(f"Validation molecules: {len(validation_data):,}")
print(f"Test molecules: {len(test_data):,}")
print(f"Total molecules: {len(train_data) + len(validation_data) + len(test_data):,}")
```
### Analysis Examples
#### Molecular Property Analysis
```python
# Convert to pandas for analysis
df = train_data.to_pandas()
# Molecular weight distribution
import matplotlib.pyplot as plt
plt.hist(df['molecular_weight'], bins=50, alpha=0.7)
plt.xlabel('Molecular Weight (Da)')
plt.ylabel('Frequency')
plt.title('Molecular Weight Distribution')
plt.axvline(500, color='red', linestyle='--', label='Lipinski Limit')
plt.legend()
plt.show()
# Drug-likeness assessment
lipinski_compliant = (
(df['molecular_weight'] <= 500) &
(df['logp'] <= 5) &
(df['hbd'] <= 5) &
(df['hba'] <= 10)
)
print(f"Lipinski compliant: {lipinski_compliant.sum():,} ({lipinski_compliant.mean()*100:.1f}%)")
```
#### Bioactivity Analysis
```python
# Extract bioactivities
bioactivities = df['bioactivities'].explode().value_counts()
print("Top 10 bioactivities:")
print(bioactivities.head(10))
# High-potential compounds
champions = df[df['is_champion'] == True]
print(f"Champion molecules: {len(champions):,}")
# Traditional use categories
traditional_uses = df['traditional_use'].value_counts()
print("Traditional use categories:")
print(traditional_uses.head(10))
```
#### Machine Learning Pipeline
```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
# Prepare features for bioactivity prediction
features = ['molecular_weight', 'logp', 'hbd', 'hba', 'tpsa', 'rotatable_bonds']
X = df[features].fillna(df[features].median())
y = df['bioactivity_score']
# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train model
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
# Evaluate
y_pred = model.predict(X_test)
print(f"R² Score: {r2_score(y_test, y_pred):.3f}")
print(f"RMSE: {mean_squared_error(y_test, y_pred, squared=False):.3f}")
```
## 🤝 Citation
### Recommended Citation
```bibtex
@dataset{phytoai_mega_1_6m_2025,
title={PhytoAI MEGA Dataset: 1.6M Therapeutic Molecules for AI Drug Discovery},
author={Tantcheu, Cedric},
year={2025},
month={June 2025},
publisher={Hugging Face},
url={https://huggingface.co/datasets/Gatescrispy/Largest_therapeutic_molecule_dataset_with_1.4M_compounds_for_scientific_research},
note={Large-scale curated therapeutic molecule dataset with traditional medicine integration},
keywords={drug discovery, machine learning, traditional medicine, cheminformatics, therapeutic molecules}
}
```
## 🔗 Related Resources
### PhytoAI Ecosystem
- **🤖 AI Models**: [Pre-trained models for molecular analysis](https://huggingface.co/Gatescrispy/Pre-trained_AImodels_for_therapeutic_molecule_analysis_and_bioactivity_prediction)
- **📚 Research Papers**: [Scientific methodology and findings](https://huggingface.co/datasets/Gatescrispy/Scientific_research_papers_and_methodology_for_PhytoAI_therapeutic_discovery)
- **💬 Community**: Join our research community for collaboration
- **🔧 Tools**: Molecular analysis and prediction tools
### External Databases
- **ChEMBL**: Bioactivity data source
- **PubChem**: Chemical structure validation
- **DrugBank**: Pharmaceutical annotations
- **KEGG**: Pathway and target information
## 📄 License
**License**: [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
This dataset is freely available for:
- ✅ **Academic Research**: No restrictions
- ✅ **Commercial Use**: Including pharmaceutical companies
- ✅ **Educational Purposes**: Teaching and training
- ✅ **Open Source Projects**: Community-driven tools
- ✅ **Derivative Works**: Building upon our work
## 🔬 Potential Applications
This dataset can be used for:
- **Machine Learning**: Molecular property prediction, bioactivity modeling
- **Drug Discovery**: Virtual screening, lead optimization
- **Cheminformatics**: Chemical space analysis, QSAR modeling
- **Traditional Medicine**: Validation of traditional therapeutic uses
- **Educational**: Teaching computational drug discovery methods
---
<div align="center">
## 🧬 Advancing Therapeutic Discovery Through Data Science
**A comprehensive molecular dataset for the research community**
[📧 Contact](mailto:research@phytoai.org) | [🌐 Website](https://phytoai.org)
*Last updated: June 2025*
</div>
|
glitchinthematrix/diseases_of_the_eye_and_adnexa | glitchinthematrix | 2025-06-05T16:33:34Z | 8 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T18:57:49Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: k_hops
dtype: int64
splits:
- name: test
num_bytes: 313055
num_examples: 345
download_size: 153626
dataset_size: 313055
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
glitchinthematrix/drugs__hormones_and_biological_mediators | glitchinthematrix | 2025-06-05T16:33:14Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T18:56:30Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: k_hops
dtype: int64
splits:
- name: test
num_bytes: 327591
num_examples: 345
download_size: 174449
dataset_size: 327591
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Bki54/dahabuyuk | Bki54 | 2025-06-05T16:15:37Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T16:15:18Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2153416.8724832213
num_examples: 1072
- name: test
num_bytes: 241054.12751677854
num_examples: 120
download_size: 839909
dataset_size: 2394471.0
---
# Dataset Card for "dahabuyuk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kunwang2000/eval-gsm8k-DeepSeek-R1-Distill-Qwen-1.5B | kunwang2000 | 2025-06-05T16:12:18Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T16:12:15Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: token_count
dtype: int64
- name: completion
sequence: string
- name: verify
dtype: bool
- name: format_correct
dtype: bool
splits:
- name: test
num_bytes: 37538162
num_examples: 1319
download_size: 3337766
dataset_size: 37538162
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|