datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
jackkuo/LLM-Enzyme-Kinetics-Archive-LLENKA | jackkuo | 2025-03-12T05:51:33Z | 28 | 1 | [
"license:cc",
"region:us"
] | [] | 2024-11-13T06:13:01Z | null | ---
license: cc
---
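### 📥Usage
A minimal loading sketch with the 🤗 `datasets` library; the split name below is an assumption (it is not stated in this card), so adjust it to match the repository's file layout.
```python
from datasets import load_dataset
# Assumed split name "train"; check the repository files if this raises an error
ds = load_dataset("jackkuo/LLM-Enzyme-Kinetics-Archive-LLENKA", split="train")
print(ds)
```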
### 🚩Citation
Please cite the following paper if you use jackkuo/LLM-Enzyme-Kinetics-Archive-LLENKA in your work.
```bibtex
@article {Jiang2025.03.03.641178,
author = {Jiang, Jinling and Hu, Jie and Xie, Siwei and Guo, Menghao and Dong, Yuhang and Fu, Shuai and Jiang, Xianyue and Yue, Zhenlei and Shi, Junchao and Zhang, Xiaoyu and Song, Minghui and Chen, Guangyong and Lu, Hua and Wu, Xindong and Guo, Pei and Han, Da and Sun, Zeyi and Qiu, Jiezhong},
title = {Enzyme Co-Scientist: Harnessing Large Language Models for Enzyme Kinetic Data Extraction from Literature},
elocation-id = {2025.03.03.641178},
year = {2025},
doi = {10.1101/2025.03.03.641178},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The extraction of molecular annotations from scientific literature is critical for advancing data-driven research. However, traditional methods, which primarily rely on human curation, are labor-intensive and error-prone. Here, we present an LLM-based agentic workflow that enables automatic and efficient data extraction from literature with high accuracy. As a demonstration, our workflow successfully delivers a dataset containing over 91,000 enzyme kinetics entries from around 3,500 papers. It achieves an average F1 score above 0.9 on expert-annotated subsets of protein enzymes and can be extended to the ribozyme domain in fewer than 3 days at less than $90. This method opens up new avenues for accelerating the pace of scientific research. Competing Interest Statement: The authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178},
eprint = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178.full.pdf},
journal = {bioRxiv}
}
``` |
hsurapur/logs | hsurapur | 2025-03-12T05:46:40Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-16T04:00:59Z | null | ---
dataset_info:
features:
- name: query
dtype: string
- name: timestamp
dtype: string
splits:
- name: train
num_bytes: 1098
num_examples: 30
download_size: 1632
dataset_size: 1098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
quyanh/open-rs | quyanh | 2025-03-12T05:44:50Z | 20 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-10T18:33:31Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 87368061
num_examples: 18615
download_size: 39221115
dataset_size: 87368061
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tarakeshwaran/Hackathon_Weather_Forcast | Tarakeshwaran | 2025-03-12T05:40:14Z | 8 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:23:21Z | null | ---
dataset_info:
features:
- name: Temperature
dtype: float64
- name: Humidity
dtype: float64
- name: Wind_Speed
dtype: float64
- name: Cloud_Cover
dtype: float64
- name: Pressure
dtype: float64
- name: Rain
dtype: string
splits:
- name: train
num_bytes: 126558
num_examples: 2500
download_size: 120963
dataset_size: 126558
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Weather Forecast Dataset
## Description
This dataset contains weather-related data collected for forecasting purposes. It includes various meteorological parameters that can be used for **climate analysis, weather prediction models,** and **machine learning applications** in forecasting.
## Dataset Details
### **Columns:**
- **Temperature**: Measured in degrees Celsius.
- **Humidity**: Percentage of atmospheric humidity.
- **Wind_Speed**: Speed of wind in meters per second.
- **Cloud_Cover**: Percentage of sky covered by clouds.
- **Pressure**: Atmospheric pressure in **hPa (hectopascal)**.
- **Rain**: Categorical label indicating whether it **rained (rain) or not (no rain)**.
### **Notes:**
- The dataset contains **2,500 entries**.
- The **Rain** column is categorical, making it useful for **classification models**.
- This dataset can be used for **time-series analysis** and **supervised learning tasks**.
## Use Cases
- **Predicting rainfall** using meteorological data.
- **Weather forecasting** using machine learning models.
- **Studying correlations** between **temperature, humidity, and pressure**.
## How to Use
You can load the dataset using the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Tarakeshwaran/Hackathon_Weather_Forcast")
print(dataset)
```
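The **Rain** column can then serve directly as a classification target. A minimal follow-up sketch, assuming `scikit-learn` is installed (column names are taken from the feature list above):
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Convert the train split to a pandas DataFrame
df = dataset["train"].to_pandas()
X = df[["Temperature", "Humidity", "Wind_Speed", "Cloud_Cover", "Pressure"]]
y = df["Rain"]  # categorical target: "rain" / "no rain"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```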
|
mlfoundations-dev/train_fasttext_classifier_seed_code_best_codegolf | mlfoundations-dev | 2025-03-12T05:37:27Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:36:15Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 154
num_examples: 1
download_size: 4628
dataset_size: 154
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/safe_responses_dev_8maxturns_0.3prob | VGraf | 2025-03-12T05:36:17Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:35:58Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 555979761
num_examples: 32647
download_size: 156883594
dataset_size: 555979761
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/safe_responses_dev_3maxturns_0.5prob | VGraf | 2025-03-12T05:34:32Z | 55 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:34:28Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 233568664
num_examples: 20348
download_size: 84524323
dataset_size: 233568664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
passionMan/test_dataset6 | passionMan | 2025-03-12T05:25:22Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:25:17Z | null | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: split_data
dtype: string
- name: task
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: string
splits:
- name: test
num_bytes: 73185855
num_examples: 19482
download_size: 19697439
dataset_size: 73185855
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
esfrankel17/original_HelpSteer_binarized_Qwen2.5-72B-Instruct_preferences | esfrankel17 | 2025-03-12T05:24:04Z | 8 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:02:54Z | null | ---
dataset_info:
features:
- name: original_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_chosen_rating
dtype: float64
- name: original_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_rejected_rating
dtype: float64
- name: model_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_agreed_with_original
dtype: bool
splits:
- name: average_rating
num_bytes: 105362028
num_examples: 8644
- name: average_rating_no_verbosity
num_bytes: 103629578
num_examples: 8556
- name: average_rating_no_verbosity_no_complexity
num_bytes: 102545279
num_examples: 8461
- name: goodness_score
num_bytes: 103404386
num_examples: 8523
download_size: 239314965
dataset_size: 414941271
configs:
- config_name: default
data_files:
- split: average_rating
path: data/average_rating-*
- split: average_rating_no_verbosity
path: data/average_rating_no_verbosity-*
- split: average_rating_no_verbosity_no_complexity
path: data/average_rating_no_verbosity_no_complexity-*
- split: goodness_score
path: data/goodness_score-*
---
|
sihyun77/eval_act_so100_suho5 | sihyun77 | 2025-03-12T05:17:20Z | 24 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T05:17:01Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 3801,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
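## Loading Example
A minimal sketch for inspecting the tabular episode data with the 🤗 `datasets` library (the default config above points at the per-episode parquet files; the camera streams are stored separately as MP4 videos and are not returned by this call):
```python
from datasets import load_dataset
# Loads actions, joint states, timestamps and index columns from the parquet files
ds = load_dataset("sihyun77/eval_act_so100_suho5", split="train")
print(ds.features)
print(ds[0]["action"])  # 6-dim action vector for the first frame
```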
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
sihyun77/eval_act_so100_suho4 | sihyun77 | 2025-03-12T05:11:14Z | 23 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T05:11:04Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 8,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
sihyun77/eval_act_so100_suho3 | sihyun77 | 2025-03-12T05:08:05Z | 24 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T05:07:53Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 3797,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
nouhad/multiplication_test_100_3x4 | nouhad | 2025-03-12T05:07:32Z | 24 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:07:31Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 9976
num_examples: 100
download_size: 5572
dataset_size: 9976
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_test_100_4x4 | nouhad | 2025-03-12T05:07:31Z | 26 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:07:23Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 10262
num_examples: 100
download_size: 5875
dataset_size: 10262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_test_100_3x3 | nouhad | 2025-03-12T05:07:21Z | 21 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:07:20Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 9664
num_examples: 100
download_size: 5251
dataset_size: 9664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
eemberda/english-ceb-bible-prompt | eemberda | 2025-03-12T05:07:13Z | 17 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-08T03:09:39Z | null | ---
language:
- en
license: cc-by-4.0
---
# LLM Benchmark for English-Cebuano Translation
This dataset contains parallel sentences of English and Cebuano extracted from the Bible corpus available at [https://github.com/christos-c/bible-corpus](https://github.com/christos-c/bible-corpus). The dataset is formatted for use in training machine translation models, particularly with the Transformers library from Hugging Face.
## Dataset Description
* **Source:** Bible corpus (https://github.com/christos-c/bible-corpus)
* **Languages:** English (en) and Cebuano (ceb)
* **Format:** CSV files with two columns: `english` and `cebuano`.
* **Extraction:** Sentences were extracted from the XML files inside the `<seg>` tags, and matched by their `id` attribute.
* **Purpose:** To create a benchmark for English-Cebuano machine translation in Large Language Models.
## Usage
This dataset can be used to evaluate the performance of Large Language Models for English-Cebuano machine translation using libraries like Transformers.
**Example (Loading with Pandas):**
```python
import pandas as pd
train_df = pd.read_csv("english-ceb-bible-prompt.csv")
print(train_df.head())
```
**Example (Loading with `datasets`):**
```python
from datasets import load_dataset
dataset = load_dataset("eemberda/english-ceb-bible-prompt")
```
## Notes
The quality of the sentence alignment may vary.
The dataset is intended for research and educational purposes.
Please cite the original Bible corpus if you use this dataset in your work.
## Citation
If you use this dataset in your research, please include the original Bible corpus in your citations:
```
@misc{bible-corpus,
author = {Christos Christodoulopoulos},
title = {Bible Corpus},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/christos-c/bible-corpus}}
}
``` |
nouhad/multiplication_test_100_10x10 | nouhad | 2025-03-12T05:04:40Z | 13 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:04:38Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 13860
num_examples: 100
download_size: 9247
dataset_size: 13860
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_test_100_9x9 | nouhad | 2025-03-12T05:04:38Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:04:37Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 13268
num_examples: 100
download_size: 8660
dataset_size: 13268
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shaquille69/conversation | shaquille69 | 2025-03-12T05:03:54Z | 8 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"question-answering"
] | 2025-03-12T05:03:51Z | null | ---
size_categories: n<1K
task_categories:
- text-generation
- text2text-generation
- question-answering
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 68978
num_examples: 10
download_size: 35298
dataset_size: 68978
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for conversation
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/shaquille69/conversation/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/shaquille69/conversation/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"messages": [
{
"content": "You are an AI assistant designed to engage in normal, everyday conversations with users. Your purpose is to respond to a wide range of topics, from casual discussions about the weather to more substantial conversations about news, opinions, and interests. Provide engaging, relevant, and coherent responses that keep the conversation flowing smoothly, while also being mindful of the user\u0027s tone and context.\n\nRespond to queries with a mix of factual information, personal anecdotes, and creative insights, all while maintaining a friendly and approachable demeanor. Be prepared to transition between different subjects and topics as the conversation unfolds, and use your understanding of context and tone to ensure that your responses are always relevant and engaging.\n\nWhen possible, offer users suggestions for further conversation or topics to explore, and be prepared to adapt to their interests and preferences. Keep the conversation natural and spontaneous, just as you would with a human friend, and strive to build a rapport with the user over time.\n\nBe aware of nuances in language, such as idioms, colloquialisms, and figurative language, and use your understanding of context to ensure that your responses are always clear and effective.\n\nUser questions are direct and concise.",
"role": "system"
},
{
"content": "What\u0027s the best way to handle stress?",
"role": "user"
},
{
"content": "You know, I think managing stress is a really personal thing, but I\u0027ve found that it\u0027s all about finding those little \"hacks\" \u2013 habits that help you unwind and recharge. For me, it\u0027s been all about exercise, whether it\u0027s going for a walk, hitting the gym, or even just some good old-fashioned yoga. It\u0027s funny, but when I\u0027m feeling really overwhelmed, it\u0027s often because I\u0027ve been stuck inside staring at screens for too long. So, getting outside, even just for a short walk, really helps get my mind clear.\n\nAnother thing that works for me is creative activities, like drawing, painting, or even just doodling. Sometimes, I\u0027ll pull out my sketchbook and just start scribbling, and it helps me relax in the moment. Of course, everyone\u0027s different, but that\u0027s why it\u0027s so crucial to experiment and find what works best for you.\n\nI\u0027ve also found that practicing mindfulness has helped me manage stress better. It\u0027s about being present and letting go of all that mental chatter. It\u0027s not always easy, but it\u0027s something that\u0027s really helped me learn to breathe through those tough moments.\n\nWhat about you, do you have any tricks up your sleeve for managing stress?",
"role": "assistant"
},
{
"content": "Do you believe that one can really achieve significant self-improvement through the power of habits? \n\n",
"role": "user"
},
{
"content": "You know, I think habits are a powerful tool for self-improvement, but it\u0027s not just about forming new habits, it\u0027s also about breaking the old ones that hold us back. I mean, think about it \u2013 most of us have a default mode of behavior that\u0027s been shaped by years of repetition, and that can be hard to shift. But, the thing is, once we develop new habits, they can start to compound and really drive change. For example, when I was trying to get back into running, I started with just a few minutes a day, and before I knew it, I was hooked.\n\nIt\u0027s also worth noting that the key to sustained self-improvement is to find habits that actually align with your goals and values, rather than trying to force change into a preconceived identity. It\u0027s all about creating a sense of identity that\u0027s cohesive and authentic. For me, it\u0027s been about recognizing what\u0027s truly important and finding habits that support that identity.\n\nAnother thing that I\u0027ve found helpful is to focus on small wins, rather than trying to tackle everything at once. It\u0027s really about building momentum and confidence through incremental successes. So, if you\u0027re looking to make significant self-improvement, I\u0027d say start with a small habit change**, and then gradually build on it.\n\nYou know, I also want to mention that it\u0027s all about progress, not perfection. It\u0027s okay to stumble and make mistakes \u2013 it\u0027s all part of the journey, and the important thing is to learn from them and keep moving forward.\n\nWould you like to talk more about habit change or something else related to self-improvement?",
"role": "assistant"
},
{
"content": "Do you know what would be really helpful to have an equipped learning plan to help student learning student learning plan to help students become more self-directed learners increase their academic resilience.",
"role": "user"
},
{
"content": null,
"role": "assistant"
}
]
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("shaquille69/conversation", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("shaquille69/conversation")
```
</details>
|
jjaehyeok2/kimbab_kimchisoup_tteokbokki | jjaehyeok2 | 2025-03-12T05:02:12Z | 12 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:48:34Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 77735601.0
num_examples: 417
download_size: 77606007
dataset_size: 77735601.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
justus27/deepscaler-math-genesys-format-fixed | justus27 | 2025-03-12T04:55:53Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:55:51Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: task_type
dtype: string
- name: prompt
dtype: string
- name: verification_info
dtype: string
splits:
- name: train
num_bytes: 12961652
num_examples: 40315
download_size: 6206252
dataset_size: 12961652
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
esfrankel17/original_ChatbotArena55k_binarized_Qwen2.5-72B-Instruct_preferences | esfrankel17 | 2025-03-12T04:54:31Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:54:23Z | null | ---
dataset_info:
features:
- name: original_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_chosen_rating
dtype: int64
- name: original_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_rejected_rating
dtype: int64
- name: model_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_agreed_with_original
dtype: bool
splits:
- name: winner
num_bytes: 276540217
num_examples: 36999
download_size: 147691819
dataset_size: 276540217
configs:
- config_name: default
data_files:
- split: winner
path: data/winner-*
---
|
kishoreb8220/finetuning_demo | kishoreb8220 | 2025-03-12T04:51:37Z | 55 | 0 | [
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:51:35Z | null | ---
dataset_info:
features:
- name: prompt
dtype: float64
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 548
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sharmax-vikas/results | sharmax-vikas | 2025-03-12T04:51:36Z | 26 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:51:28Z | null | ---
dataset_info:
config_name: '2024-12-26'
features:
- name: model_id
dtype: string
- name: agent_action_type
dtype: string
- name: source
dtype: string
- name: acc
dtype: float64
splits:
- name: train
num_bytes: 5275
num_examples: 93
download_size: 2610
dataset_size: 5275
configs:
- config_name: '2024-12-26'
data_files:
- split: train
path: 2024-12-26/train-*
default: true
---
|
SKIML-ICL/nq_retrieved_adversarial_passage | SKIML-ICL | 2025-03-12T04:32:20Z | 8 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:21:26Z | null | ---
dataset_info:
config_name: adversarial
features:
- name: qid
dtype: int64
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_sentence
dtype: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: nli
dtype: string
- name: pid
dtype: int64
- name: rank
dtype: int64
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: hasanswer
dtype: bool
- name: answerable
dtype: string
- name: qa_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: unanswerable_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: conflict_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: named_entities
sequence: string
- name: adversarial_sentence
dtype: string
- name: cosine_similarity
dtype: float32
- name: demons
sequence: 'null'
- name: prompt
dtype: string
- name: input
dtype: string
- name: adversarial_passage
dtype: string
splits:
- name: train
num_bytes: 1599908723.651
num_examples: 67309
- name: validation
num_bytes: 177646175.4465335
num_examples: 7479
download_size: 862180847
dataset_size: 1777554899.0975335
configs:
- config_name: adversarial
data_files:
- split: train
path: adversarial/train-*
- split: validation
path: adversarial/validation-*
---
|
anshkhandelwal/finance_data_compressed | anshkhandelwal | 2025-03-12T04:26:15Z | 28 | 0 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2403.12968",
"region:us",
"finance"
] | [] | 2025-02-18T05:00:15Z | null | ---
configs:
- config_name: Data
data_files:
- split: train
path: finance_compressed_dataset.json
language:
- en
tags:
- finance
---
# Dataset Card for Finance Data Compressed
This dataset was created using the methodology introduced in [LLMLingua-2 (Pan _et al._, 2024)](https://arxiv.org/abs/2403.12968) and was collected to construct the training data for the LLMLingua-2 compressor.
It consists of 5000 instances from [AdaptLLM/finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), with their GPT-3.5-turbo compressed versions.
This dataset consists of pairs of original prompts and their compressed versions (specifically for financial data).
### 🎯 Usage
```python
from datasets import load_dataset
finance_data = load_dataset("anshkhandelwal/finance_data_compressed", split="train")
for sample in finance_data:
# concatenation of all chunks
origin_prompt = sample["prompt"]
compressed_prompt = sample["compressed_prompt"]
# chunk list
origin_prompt_list = sample["prompt_list"]
compressed_prompt_list = sample["compressed_prompt_list"]
```
### 🔎 Details
We segment the collected finance data into a few chunks and then instruct GPT-3.5-turbo to compress each chunk independently.
Please refer to [LLMLingua-2 (Pan _et al._, 2024)](https://arxiv.org/abs/2403.12968) for the prompt used for compression.
There are 5 fields:
1. `idx: int`: index of the instance.
2. `prompt: str`: original text of the finance instance.
3. `prompt_list: List[str]`: a List of chunks corresponding to the original instance in `prompt`.
4. `compressed_prompt_list: List[str]`: a List of compressed chunks. Each chunk is compressed by GPT-3.5-turbo independently.
5. `compressed_prompt: str`: GPT-3.5-turbo compressed version of the finance data. Each instance is a concatenation of all compressed chunks in `compressed_prompt_list`. |
mlfoundations-dev/train_fasttext_classifier_seed_code_best_sharegpt | mlfoundations-dev | 2025-03-12T04:15:55Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:15:53Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 140
num_examples: 1
download_size: 4557
dataset_size: 140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dfdsfgf/big_math_dataset | dfdsfgf | 2025-03-12T04:07:03Z | 46 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:06:57Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 16305977
num_examples: 97467
download_size: 9339580
dataset_size: 16305977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
singsan9song/smhrd_llama3_korea | singsan9song | 2025-03-12T04:05:31Z | 11 | 0 | [
"license:llama3",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:36:29Z | null | ---
license: llama3
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 6161
num_examples: 19
download_size: 5159
dataset_size: 6161
---
|
qishenfu/debug_2_0311 | qishenfu | 2025-03-12T04:04:52Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:04:30Z | null | ---
dataset_info:
features:
- name: system
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: gvkey
dtype: int64
- name: fyear
dtype: int64
splits:
- name: train
num_bytes: 47551
num_examples: 5
download_size: 22155
dataset_size: 47551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
UD-Filipino/cebuano-readability | UD-Filipino | 2025-03-12T04:00:14Z | 26 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.17225",
"region:us"
] | [] | 2025-03-12T03:57:02Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 429614
num_examples: 350
download_size: 233244
dataset_size: 429614
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
license: cc-by-4.0
---
Source: https://github.com/imperialite/cebuano-readability
> We asked permission from one of the authors to include this dataset in our catalog effort. We copy a portion of the README in this dataset card.
# Baseline Readability Assessment Model for Cebuano
This repository contains the code and datasets from Bloom, Let's Read Asia, and Department of Education (DepEd) websites used for developing the first ML-based baseline for readability assessment in the Cebuano language described in the paper **A Baseline Readability Model for Cebuano**.
Paper Link: https://arxiv.org/abs/2203.17225
All reading materials collected for the study are free-to-download from their original websites and licensed with CC BY 4.0 which means they are also free to distribute in any form given proper citation.
Please also add the following citation to your paper/presentation if you use the resources found in this repository:
```
Reyes, L. L. A., & Ibañez, M. A., & Sapinit, R., & Hussien, M., Imperial, J. M. (2022). A Baseline Readability Model for Cebuano. arXiv preprint arXiv:2203.17225.
```
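## Loading Example
A minimal sketch for loading the data with the 🤗 `datasets` library (the `test` split name comes from the dataset configuration above):
```python
from datasets import load_dataset
# 350 Cebuano passages with source, text, and an integer readability label
ceb_readability = load_dataset("UD-Filipino/cebuano-readability", split="test")
print(ceb_readability[0])
```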
## Contact
If you need any help reproducing the results, please don't hesitate to contact the author through
**Joseph Marvin Imperial** <br/>
jrimperial@national-u.edu.ph <br/>
www.josephimperial.com |
sotamavenslab/s1_s1k_2wait | sotamavenslab | 2025-03-12T03:56:22Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T16:36:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
dtype: 'null'
- name: thinking_trajectory
dtype: string
- name: attempt
dtype: string
- name: grade
dtype: string
- name: grade_reason
dtype: string
splits:
- name: train
num_bytes: 26984804
num_examples: 1000
download_size: 8402762
dataset_size: 26984804
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gguava/xiaoshuo | gguava | 2025-03-12T03:54:36Z | 30 | 0 | [
"license:artistic-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:53:56Z | null | ---
license: artistic-2.0
---
|
ai4bharat/FERMAT | ai4bharat | 2025-03-12T03:53:49Z | 76 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2501.07244",
"region:us"
] | [] | 2025-01-02T11:20:03Z | null | ---
license: cc-by-4.0
dataset_info:
features:
- name: new_custom_id
dtype: string
- name: image
dtype: image
- name: grade
dtype: string
- name: domain_code
dtype: string
- name: subdomain_code
dtype: string
- name: is_table
dtype: bool
- name: handwriting_style
dtype: bool
- name: image_quality
dtype: bool
- name: rotation
dtype: string
- name: new_pert_id
dtype: float64
- name: annot_id
dtype: int64
- name: img_id
dtype: int64
- name: orig_q
dtype: string
- name: orig_a
dtype: string
- name: pert_a
dtype: string
- name: pert_reasoning
dtype: string
- name: has_error
dtype: bool
splits:
- name: train
num_bytes: 4769534708.888
num_examples: 2244
download_size: 4787048018
dataset_size: 4769534708.888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
---
# 🧮📐 Can VLMs Evaluate Handwritten Math? ✏️✨
🎉 We present **FERMAT**, a benchmark designed to rigorously test the multimodal reasoning and auto-evaluation capabilities of Vision-Language Models (VLMs) using real-world handwritten math problems! 📝🧠
## 🎯 What’s Inside FERMAT?
**2,244 handwritten math solutions**, carefully annotated across core mathematical domains:
🔢 Arithmetic | 📏 Algebra | 📐 Geometry | 📏 Mensuration | 🎲 Probability | 📊 Statistics | 📐 Trigonometry | 📈 Calculus
Each solution features realistic student mistakes categorized along four key axes:
1. 🧑💻 **Computational Errors**
2. 🤔 **Conceptual Misunderstandings**
3. ✍️ **Notation Errors**
4. 📑 **Presentation Issues**
Additionally, some solutions contain superficial variations that don't actually affect correctness (e.g., "16 cm" vs. "16.0 cm")—perfect for testing the subtlety of your models!
## 📦 Dataset Features
Each instance in **FERMAT** consists of:
### Main Content
- **`orig_q`**: Original math question (LaTeX)
- **`orig_a`**: Correct solution (LaTeX)
- **`image`**: Handwritten image of the question and erroneous solution
- **`pert_a`**: Erroneous solution (LaTeX)
- **`pert_reasoning`**: Explanation of introduced error
### Metadata
- **`grade`**: Grade level (`c07-c12`, `c00` Grade-agnostic)
- **`domain_code`**: Domain information ([see paper appendix](https://arxiv.org/abs/2501.07244))
- **`subdomain_code`**: Subdomain information ([see paper appendix](https://arxiv.org/abs/2501.07244))
- **`handwriting_style`**: Legibility of handwriting (`True`: legible, `False`: challenging)
- **`image_quality`**: Image quality (`True`: good, `False`: illumination, shadow, blurring, or contrast issues)
- **`rotation`**: Image orientation
- **`annot_id`**: Annotator’s unique ID
- **`has_error`**: Actual error vs superficial variation (`True`: actual, `False`: superficial)
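### Loading Example
A minimal loading sketch with the 🤗 `datasets` library (field names as listed above; the `image` column is decoded to a PIL image):
```python
from datasets import load_dataset
fermat = load_dataset("ai4bharat/FERMAT", split="train")
sample = fermat[0]
print(sample["orig_q"])          # original question (LaTeX)
print(sample["pert_reasoning"])  # explanation of the introduced error
print(sample["image"].size)      # (width, height) of the handwritten page image
```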
## 🚀 Why Use FERMAT?
FERMAT is perfect for evaluating multimodal reasoning and auto-evaluation capabilities of Vision-Language Models (VLMs) on handwritten math:
- ✅ High-quality handwritten images
- ✅ Clearly defined error taxonomy
- ✅ Manually vetted, accurate annotations
📖 **[Read the Paper 📚](https://arxiv.org/abs/2501.07244)**
🐙 **[GitHub Repo 🚀](https://github.com/AI4Bharat/FERMAT)**
💻 **[HF Dataset 🌟](https://huggingface.co/datasets/ai4bharat/FERMAT)**
If the benchmark helps your research, kindly cite our work for further reach:
```
@article{nath2025vision1language,
title = {Can Vision-Language Models Evaluate Handwritten Math?},
author = {Oikantik Nath and Hanani Bathina and Mohammed Safi Ur Rahman Khan and Mitesh M. Khapra},
year = {2025},
journal = {arXiv preprint arXiv: 2501.07244}
}
```
|
passionMan/dataset_seed_evol_cherry208 | passionMan | 2025-03-12T03:49:19Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:49:06Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 12962311
num_examples: 5223
download_size: 6613115
dataset_size: 12962311
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
passionMan/dataset_seed_evol_cherry204 | passionMan | 2025-03-12T03:48:59Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:48:56Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 5084080
num_examples: 2611
download_size: 2501289
dataset_size: 5084080
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HeyixInn0/Reorganized-transcoder-java | HeyixInn0 | 2025-03-12T03:39:35Z | 29 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T07:35:47Z | null | ---
dataset_info:
features:
- name: suffix
dtype: 'null'
- name: compare_func
sequence: 'null'
- name: demos
sequence: 'null'
- name: tgt_lang
dtype: string
- name: task_name
dtype: string
- name: solution
dtype: string
- name: src_lang
dtype: string
- name: doc_string
dtype: string
- name: test_cases
sequence:
sequence: string
- name: entry_func
dtype: string
- name: data_id
dtype: string
- name: dataset_name
dtype: string
- name: prefix
dtype: string
- name: import_str
sequence: string
splits:
- name: train
num_bytes: 1703828
num_examples: 610
download_size: 717232
dataset_size: 1703828
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sotamavenslab/s1_s1k_1wait | sotamavenslab | 2025-03-12T03:39:09Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T17:19:18Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
dtype: 'null'
- name: thinking_trajectory
dtype: string
- name: attempt
dtype: string
- name: grade
dtype: string
- name: grade_reason
dtype: string
splits:
- name: train
num_bytes: 25713126
num_examples: 1000
download_size: 8111617
dataset_size: 25713126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HeyixInn0/Reorganized-transcoder-cpp | HeyixInn0 | 2025-03-12T03:39:09Z | 39 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T07:45:36Z | null | ---
dataset_info:
features:
- name: dataset_name
dtype: string
- name: src_lang
dtype: string
- name: entry_func
dtype: string
- name: prefix
dtype: string
- name: import_str
sequence: string
- name: suffix
dtype: 'null'
- name: demos
sequence: 'null'
- name: data_id
dtype: string
- name: doc_string
dtype: string
- name: tgt_lang
dtype: string
- name: compare_func
sequence: 'null'
- name: solution
dtype: string
- name: task_name
dtype: string
- name: test_cases
sequence:
sequence: string
splits:
- name: train
num_bytes: 1585566
num_examples: 613
download_size: 683642
dataset_size: 1585566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sabrinaxinli/historical_cloze | sabrinaxinli | 2025-03-12T03:24:38Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:20:34Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: word
dtype: string
- name: link
dtype: string
- name: sense_start_year
dtype: int64
- name: sense_end_year
dtype: int64
splits:
- name: test
num_bytes: 5202720
num_examples: 28657
download_size: 2477830
dataset_size: 5202720
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Ignisylvan/bean | Ignisylvan | 2025-03-12T03:24:04Z | 16 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-12T03:23:01Z | null | ---
license: apache-2.0
---
|
yuebanlaosiji/e-girl | yuebanlaosiji | 2025-03-12T03:03:03Z | 22 | 0 | [
"language:zh",
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-08T02:23:53Z | null | ---
language:
- zh
license: apache-2.0
---
|
SKIML-ICL/nq_retrieved_adversarial_sentence_sim | SKIML-ICL | 2025-03-12T03:00:08Z | 7 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:54:56Z | null | ---
dataset_info:
config_name: adversarial
features:
- name: qid
dtype: int64
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_sentence
dtype: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: nli
dtype: string
- name: pid
dtype: int64
- name: rank
dtype: int64
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: hasanswer
dtype: bool
- name: answerable
dtype: string
- name: qa_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: unanswerable_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: conflict_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: named_entities
sequence: string
- name: input
dtype: string
- name: prompt
dtype: string
- name: adversarial_sentence
dtype: string
- name: cosine_similarity
dtype: float32
splits:
- name: train
num_bytes: 1637167855.0
num_examples: 69106
- name: validation
num_bytes: 181329026.0
num_examples: 7659
download_size: 864882517
dataset_size: 1818496881.0
configs:
- config_name: adversarial
data_files:
- split: train
path: adversarial/train-*
- split: validation
path: adversarial/validation-*
---
|
andy309/eval_act_so100_0311_1152_03121028 | andy309 | 2025-03-12T02:31:01Z | 24 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T02:30:42Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1258,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.back": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
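For a quick look at the tabular part of the dataset (everything except the mp4 videos), the parquet files referenced in the `configs` section can be loaded directly. This is a minimal sketch, assuming the default configuration above:

```python
# Minimal sketch: load the per-frame parquet data with the `datasets` library.
from datasets import load_dataset

ds = load_dataset("andy309/eval_act_so100_0311_1152_03121028", split="train")

frame = ds[0]
print(frame["action"])         # 6 motor targets (names listed in info.json)
print(frame["episode_index"])  # which of the 2 recorded episodes this frame is from
```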
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mlfoundations-dev/train_fasttext_classifier_seed_math_worst_mix_5_3_1 | mlfoundations-dev | 2025-03-12T02:13:55Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:13:35Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_worst_3 | mlfoundations-dev | 2025-03-12T02:12:28Z | 55 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:11:59Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_best_automath | mlfoundations-dev | 2025-03-12T02:12:16Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:11:18Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 140
num_examples: 1
download_size: 4557
dataset_size: 140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_best_olympiad | mlfoundations-dev | 2025-03-12T02:11:06Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:10:48Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 140
num_examples: 1
download_size: 4557
dataset_size: 140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_best_s1 | mlfoundations-dev | 2025-03-12T02:10:48Z | 25 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:10:46Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zijian2022/llmtest107 | zijian2022 | 2025-03-12T02:05:01Z | 37 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100"
] | [
"robotics"
] | 2025-03-12T02:02:47Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 5,
"total_frames": 5037,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
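The schema above can be inspected without downloading any episode data by fetching `meta/info.json` on its own — a minimal sketch using `huggingface_hub`:

```python
# Minimal sketch: download and parse meta/info.json for this repository.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="zijian2022/llmtest107",
    filename="meta/info.json",
    repo_type="dataset",
)
with open(path) as f:
    info = json.load(f)

print(info["robot_type"], info["fps"], info["total_episodes"])
print(list(info["features"]))  # observation.state, action, episode_index, ...
```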
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
infinite-dataset-hub/AsianFacesForBiasDetection | infinite-dataset-hub | 2025-03-12T02:04:43Z | 24 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2025-03-12T02:04:42Z | null | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# AsianFacesForBiasDetection
tags: ethics, diversity, AI
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'AsianFacesForBiasDetection' dataset is aimed at identifying and mitigating bias in facial recognition AI systems. It contains a diverse range of images representing various Asian ethnicities to train models to recognize and understand the ethnic diversity within Asian faces. The dataset includes features like age, gender, and ethnicity diversity, which are critical in promoting fairness and diversity in AI. The label column indicates whether the face image shows potential signs of bias in recognition accuracy.
**CSV Content Preview:**
```csv
id, image_url, label
1, https://example.com/asian_face1.jpg, No_Bias
2, https://example.com/asian_face2.jpg, Minor_Bias
3, https://example.com/asian_face3.jpg, Major_Bias
4, https://example.com/asian_face4.jpg, No_Bias
5, https://example.com/asian_face5.jpg, No_Bias
```
**Dataset Description:**
The 'AsianFacesForBiasDetection' dataset consists of high-resolution images of Asian faces with a focus on promoting diversity in AI facial recognition technologies. The dataset is created with the aim of reducing racial bias and increasing the accuracy of AI systems across different ethnic groups. The images in the dataset are labeled according to the level of detection bias observed in preliminary studies, ranging from no bias to varying degrees of potential bias.
**CSV Content Preview:**
```csv
id, image_url, label
1, https://example.com/asian_face1.jpg, No_Bias
2, https://example.com/asian_face2.jpg, Mild_Bias
3, https://example.com/asian_face3.jpg, Moderate_Bias
4, https://example.com/asian_face4.jpg, No_Bias
5, https://example.com/asian_face5.jpg, No_Bias
```
**Source of the data:**
The dataset was generated with the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct, using the query 'Asian face':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=Asian+face&dataset=AsianFacesForBiasDetection&tags=ethics,+diversity,+AI
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
zijian2022/llmtest102 | zijian2022 | 2025-03-12T01:29:45Z | 16 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T01:29:37Z | null | ---
task_categories:
- robotics
tags:
- LeRobot
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
sukatune/Evol-Instruct-Code-80k-v1-Java | sukatune | 2025-03-12T01:23:54Z | 18 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:55:28Z | null | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 13862866
num_examples: 7442
download_size: 5676023
dataset_size: 13862866
---
|
Eureka7366/ruozhiba-llama3-tt | Eureka7366 | 2025-03-12T01:23:19Z | 16 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T01:21:47Z | null | ---
license: apache-2.0
---
|
LuckyLukke/clembench_REFUEL_base_1_ | LuckyLukke | 2025-03-12T01:17:48Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T01:17:41Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: has_error
dtype: bool
- name: role
dtype: string
- name: type
dtype: string
- name: reject
list:
- name: content
dtype: string
- name: has_error
dtype: bool
- name: role
dtype: string
- name: type
dtype: string
- name: chosen_reward
dtype: float64
- name: reject_reward
dtype: float64
- name: game
dtype: string
- name: model
dtype: string
- name: experiment
dtype: string
- name: episode
dtype: string
- name: sampled_h
dtype: int64
- name: failed_requests
struct:
- name: chosen
dtype: int64
- name: reject
dtype: int64
- name: was_original
dtype: bool
- name: chosen_turn_scores
struct:
- name: '0'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: 'null'
- name: Violated Request Count
dtype: float64
- name: '1'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '10'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '11'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '12'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '13'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '14'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '15'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '16'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '17'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '18'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '19'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '2'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '20'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '21'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '22'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '23'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '24'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '3'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '4'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '5'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '6'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '7'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '8'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '9'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: reject_turn_scores
struct:
- name: '0'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: 'null'
- name: Violated Request Count
dtype: float64
- name: '1'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '10'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '11'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '12'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '13'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '14'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '15'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '16'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '17'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '18'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '19'
struct:
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '2'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '3'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '4'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '5'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '6'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: closeness score
dtype: float64
- name: strategy score
dtype: float64
- name: '7'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '8'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: '9'
struct:
- name: Accuracy
dtype: float64
- name: Changed Cell Count
dtype: float64
- name: F1
dtype: float64
- name: Generated Expression Length
dtype: float64
- name: Generated Expression Number of Tokens
dtype: float64
- name: Parsed Request Count
dtype: float64
- name: Precision
dtype: float64
- name: Recall
dtype: float64
- name: Request Count
dtype: float64
- name: Slot Filled?
dtype: float64
- name: Violated Request Count
dtype: float64
- name: game_id
dtype: int64
- name: reward_difference
dtype: float64
- name: chosen_token
sequence: int64
- name: chosen_mask
sequence: int64
- name: reject_token
sequence: int64
- name: reject_mask
sequence: int64
- name: chosen_logprob
dtype: float64
- name: reject_logprob
dtype: float64
splits:
- name: train
num_bytes: 359824520
num_examples: 4606
- name: test
num_bytes: 40141982
num_examples: 500
download_size: 9132767
dataset_size: 399966502
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
droid-ai/libero | droid-ai | 2025-03-12T01:15:39Z | 15 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-03-12T01:08:44Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "libero",
"total_episodes": 1693,
"total_frames": 273465,
"total_tasks": 40,
"total_videos": 0,
"total_chunks": 2,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1693"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"is_first": {
"dtype": "bool",
"shape": [
1
]
},
"is_last": {
"dtype": "bool",
"shape": [
1
]
},
"is_terminal": {
"dtype": "bool",
"shape": [
1
]
},
"image": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"wrist_image": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"state": {
"dtype": "float32",
"shape": [
8
]
},
"joint_state": {
"dtype": "float32",
"shape": [
7
]
},
"actions": {
"dtype": "float32",
"shape": [
7
]
},
"discount": {
"dtype": "float32",
"shape": [
1
]
},
"reward": {
"dtype": "float32",
"shape": [
1
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
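Since `total_videos` is 0, the camera frames are embedded directly in the parquet files, so a plain `load_dataset` call should return decodable images alongside the proprioceptive state. This is a minimal sketch; the exact handling of the image columns is an assumption based on the feature table above:

```python
# Minimal sketch: load LIBERO frames and inspect one of them.
from datasets import load_dataset

ds = load_dataset("droid-ai/libero", split="train")
print(ds)  # column names and total number of frames

row = ds[0]
print(len(row["actions"]))  # 7-dim action vector
print(len(row["state"]))    # 8-dim end-effector state
print(row["image"].size)    # expected (256, 256) PIL image from the main camera
```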
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
BroAlanTaps/Pretrain-Stage1-1024 | BroAlanTaps | 2025-03-12T01:12:00Z | 18 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T10:08:33Z | null | ---
dataset_info:
features:
- name: dump
dtype: string
- name: text
dtype: string
- name: target
dtype: string
- name: compress_ids
sequence: int64
- name: llm_ids
sequence: int64
- name: next_ids
sequence: int64
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 55937435446
num_examples: 1689078
- name: test
num_bytes: 22384728
num_examples: 676
download_size: 19806231580
dataset_size: 55959820174
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Svngoku/History-And-Citizenship-Student-Book | Svngoku | 2025-03-12T01:11:08Z | 9 | 0 | [
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"history",
"book"
] | [] | 2025-03-12T01:09:22Z | null | ---
dataset_info:
features:
- name: chunk_id
dtype: int64
- name: content
dtype: string
- name: metadata
struct:
- name: author
dtype: string
- name: creationDate
dtype: string
- name: creationdate
dtype: string
- name: creator
dtype: string
- name: file_path
dtype: string
- name: format
dtype: string
- name: keywords
dtype: string
- name: modDate
dtype: string
- name: moddate
dtype: string
- name: page
dtype: int64
- name: producer
dtype: string
- name: source
dtype: string
- name: start_index
dtype: int64
- name: subject
dtype: string
- name: title
dtype: string
- name: total_pages
dtype: int64
- name: trapped
dtype: string
- name: page_image
dtype: string
splits:
- name: train
num_bytes: 58087978
num_examples: 381
download_size: 31972130
dataset_size: 58087978
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
tags:
- history
- book
size_categories:
- n<1K
--- |
dmitriihook/qwq-32b-planning-mystery | dmitriihook | 2025-03-12T01:01:42Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | [] | 2025-03-12T01:01:35Z | null | ---
size_categories: n<1K
dataset_info:
features:
- name: query
dtype: string
- name: instance_id
dtype: string
- name: Number of blocks
dtype: int64
- name: generation
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: string
- name: statistics_text_generation_0
struct:
- name: input_tokens
dtype: int64
- name: output_tokens
dtype: int64
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 130196834
num_examples: 2000
download_size: 44492801
dataset_size: 130196834
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for qwq-32b-planning-mystery
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/dmitriihook/qwq-32b-planning-mystery/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/dmitriihook/qwq-32b-planning-mystery/raw/main/pipeline.yaml"
```
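The already-generated rows can also be loaded directly with the `datasets` library instead of re-running the pipeline — a minimal sketch:

```python
# Minimal sketch: load the generated planning examples.
from datasets import load_dataset

ds = load_dataset("dmitriihook/qwq-32b-planning-mystery", split="train")

example = ds[0]
print(example["Number of blocks"])
print(example["generation"][:200])  # start of the model's generated plan
```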
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"Number of blocks": 4,
"distilabel_metadata": {
"raw_input_text_generation_0": [
{
"content": "I am playing with a set of objects. Here are the actions I can do\n\n Attack object\n Feast object from another object\n Succumb object\n Overcome object from another object\n\nI have the following restrictions on my actions:\n To perform Attack action, the following facts need to be true: Province object, Planet object, Harmony.\n Once Attack action is performed the following facts will be true: Pain object.\n Once Attack action is performed the following facts will be false: Province object, Planet object, Harmony.\n To perform Succumb action, the following facts need to be true: Pain object.\n Once Succumb action is performed the following facts will be true: Province object, Planet object, Harmony. \n Once Succumb action is performed the following facts will be false: Pain object.\n To perform Overcome action, the following needs to be true: Province other object, Pain object.\n Once Overcome action is performed the following will be true: Harmony, Province object, Object Craves other object.\n Once Overcome action is performed the following will be false: Province other object, Pain object.\n To perform Feast action, the following needs to be true: Object Craves other object, Province object, Harmony.\n Once Feast action is performed the following will be true: Pain object, Province other object.\n Once Feast action is performed the following will be false:, Object Craves other object, Province object, Harmony.\n\nHere is an example problem:\n\n[STATEMENT]\nAs initial conditions I have that, province Block B, province Block C, harmony, Block C craves Block A, planet Block A, planet Block B.\nMy goal is to have that Block A craves Block C and Block B craves Block A.\n\nMy plan is as follows:\n\n[PLAN]\nfeast Block C from Block A\nsuccumb Block C\nattack Block A\novercome Block A from Block C\nattack Block B\novercome Block B from Block A\n[PLAN END]\n\nHere is another example:\n\n[STATEMENT]\nAs initial conditions I have that, province Block B, harmony, Block A craves Block D, Block B craves Block C, Block C craves Block A, planet Block D.\nMy goal is to have that Block C craves Block D, Block A craves Block C and Block B craves Block A.\n\nMy plan is as follows:\n\n[PLAN]\nfeast Block B from Block C\nsuccumb Block B\nfeast Block C from Block A\nsuccumb Block C\nfeast Block A from Block D\nsuccumb Block A\nattack Block C\novercome Block C from Block D\nattack Block A\novercome Block A from Block C\nattack Block B\novercome Block B from Block A\n[PLAN END]\nHere is the problem to solve:\n\n[STATEMENT]\nAs initial conditions I have that: Block C craves Block B, harmony, planet Block A, planet Block B, planet Block D, province Block A, province Block C and province Block D.\nMy goal is for the following to be true: Block A craves Block D, Block B craves Block C and Block D craves Block B.\n\nMy plan is as follows:",
"role": "user"
}
],
"raw_output_text_generation_0": "Okay, let\u0027s see. I need to figure out a plan to reach the goal where Block A craves D, Block B craves C, and D craves B. The initial conditions are that Block C craves B, there\u0027s harmony, planets A, B, D, and provinces A, C, D. So first, I should list out all the initial facts and then see how each action can change them step by step.\n\nStarting with the initial state:\n\n- Province Block A\n- Province Block C\n- Province Block D\n- Planet Block A\n- Planet Block B\n- Planet Block D\n- Harmony\n- Block C craves Block B\n\nThe goal is:\n\n- Block A craves D\n- Block B craves C\n- Block D craves B\n\nHmm. Let\u0027s think about the actions available. The actions are Attack, Feast, Succumb, and Overcome. Each has preconditions and effects. Let me recall their details.\n\nFirst, Attack requires Province, Planet, and Harmony. When you attack, you get Pain, but lose Province, Planet, and Harmony. So after attacking, Pain is true, but those three are false. But then Succumb requires Pain, and when you succumb, you regain Province, Planet, Harmony, and lose Pain. So maybe Attack and Succumb can be used to toggle some states?\n\nOvercome requires Province of another object and Pain. Wait, the preconditions for Overcome are \"Province other object\" and \"Pain object\". Wait, the exact wording was: \"To perform Overcome action, the following needs to be true: Province other object, Pain object.\" So the object being overcome is from another object. Wait, the syntax is \"Overcome object from another object\". So maybe the first object is the target, and the second is the other? Not sure, but the preconditions are that the other object has Province, and the object (the first one) has Pain. Then, when you overcome, you get Harmony, Province of the first object, and the first object craves the other. And you lose Province of the other, Pain of the first, and the other\u0027s Province?\n\nWait, the effects of Overcome are:\n\n- True: Harmony, Province object, Object Craves other object.\n- False: Province other object, Pain object.\n\nWait, so after overcoming, the first object (the one being overcome?) gets Province, and the other loses Province. Also, the first object now craves the other. And Pain on the first is removed, and Harmony is restored.\n\nFeast requires Object Craves other, Province of the object, and Harmony. When you feast, you get Pain on the object, Province of the other, and lose Craves, Province, and Harmony.\n\nSo the plan needs to manipulate these to reach the goals.\n\nLet me note the goals again:\n\nGoal 1: Block A craves D \u2192 So we need to have \"Object Craves other object\" where the object is A and other is D.\n\nGoal 2: Block B craves C \u2192 Object is B, other is C.\n\nGoal 3: Block D craves B \u2192 Object is D, other is B.\n\nLooking at the initial conditions, Block C already craves B. So that\u0027s part of the third goal (D needs to crave B, but C already does). Wait, but the third goal is D craves B, so that\u0027s separate.\n\nFirst, let\u0027s see how to get A to crave D. To get A to crave D, perhaps using Overcome? Because Overcome\u0027s effect is that the object (first parameter) craves the other (second). So if we do Overcome A from D, then A would crave D. But to do that, the preconditions are that the other (D) has Province, and the object (A) has Pain. Wait, the preconditions for Overcome are \"Province other object\" and \"Pain object\". So the other (D) must have Province, and the object (A) must have Pain. 
\n\nSo to set up for Overcome A from D, we need D to have Province (which it does initially), and A to have Pain. How do we get Pain on A? That comes from an Attack action. But Attack requires Province, Planet, and Harmony. Let\u0027s see.\n\nAlternatively, maybe through Feast. Let\u0027s think step by step.\n\nFirst, let\u0027s see the initial state:\n\nProvinces: A, C, D (so B is not a province? Wait, initial conditions say province Block A, C, D. So B is not a province. Wait, the initial conditions are \"province Block A, province Block C, province Block D\" so B is not a province. Planets are A, B, D. Harmony is present. And C craves B.\n\nSo the initial provinces are A, C, D. Planets are A, B, D. So B is a planet but not a province. Hmm.\n\nThe goal requires that Block B craves C. So to get B to crave C, perhaps we need to do an Overcome on B from C? Because Overcome B from C would make B crave C. The preconditions for that would be that the other (C) has Province (which it does), and B has Pain. To get B to have Pain, we need to attack B. But attacking B requires that B has Province, Planet, and Harmony. Wait, but B is not a province (since provinces are A, C, D). So can\u0027t attack B directly. Hmm.\n\nAlternatively, maybe we can first get B to be a province? How?\n\nAlternatively, maybe through some sequence of actions. Let\u0027s think step by step.\n\nFirst, let\u0027s see the goals again. Let\u0027s tackle each craving one by one.\n\nGoal 1: A craves D. To get that, Overcome A from D. To do that, need D has Province (which it does), and A has Pain. To get A to have Pain, we can attack A. But attacking A requires that A has Province, Planet, and Harmony. A does have Province and Planet (since it\u0027s a planet and province). Harmony is present. So attacking A would work. Let\u0027s see:\n\nAction 1: Attack A.\n\nPreconditions: Province A (yes), Planet A (yes), Harmony (yes). So that\u0027s okay.\n\nAfter Attack A:\n\n- Pain A is true.\n\n- Province A, Planet A, Harmony are false.\n\nSo now, Harmony is gone, A has Pain, and no longer has Province or Planet. But Planet is a separate fact. Wait, the attack action\u0027s effects are that Province, Planet, and Harmony become false. So after attacking A, Planet A is still there? Wait, no. Wait the attack action\u0027s effects are that the following will be false: Province object, Planet object, Harmony. Wait, the object here is A. So Planet A would be set to false. Wait, but the initial conditions have Planet A as true. So after attacking A, Planet A becomes false. Hmm, but maybe that\u0027s okay.\n\nBut then, after attacking A, we can do Succumb on A to regain Province, Planet, Harmony, and lose Pain. But maybe that\u0027s a path.\n\nAlternatively, after attacking A, we can proceed to Overcome A from D. Let\u0027s see:\n\nAfter Attack A:\n\n- Pain A is true.\n\n- Province A is false, Planet A is false, Harmony is false.\n\nSo to do Overcome A from D, the preconditions are:\n\n- Province of the other (D) must be true (which it is, since D is still a province? Wait, after attacking A, the only things that changed were A\u0027s Province, Planet, and Harmony. So D\u0027s Province remains true.\n\n- Pain of the object (A) is true (yes, because of the attack).\n\nSo yes, Overcome A from D can be done now. 
Let\u0027s do that.\n\nAction 2: Overcome A from D.\n\nEffects:\n\n- True: Harmony, Province A, A craves D.\n\n- False: Province D (since D is the other object here), Pain A.\n\nSo after this:\n\n- Harmony is true again.\n\n- Province A is true again.\n\n- A craves D (good for goal 1).\n\n- Province D is now false (since D was the other object in the Overcome).\n\n- Pain A is gone.\n\nSo now, after these two actions (Attack A, Overcome A from D), we have achieved A craves D. That\u0027s one goal done.\n\nNow, moving on to the next goal: Block B craves C. To get B to crave C, we need to do Overcome B from C. Let\u0027s see the preconditions for that:\n\n- Province of the other (C) must be true. Currently, after the previous steps, C\u0027s province status: initially it was province, but after the Overcome A from D, the other object was D, so C\u0027s province wasn\u0027t affected. Wait, in the Overcome A from D, the other object is D, so the Province of D is set to false. But C\u0027s province is still true? Wait, no, in the Overcome A from D, the other is D, so the Province of D is set to false. Wait, the effect of Overcome is that the Province of the other object (D) becomes false. So after that, D is no longer a province. But C\u0027s province is still true, because it wasn\u0027t involved in that Overcome.\n\nSo to do Overcome B from C, the other is C, so we need C to have Province (which it does), and B must have Pain. How to get B to have Pain?\n\nTo get Pain on B, we need to attack B. But attacking B requires that B has Province, Planet, and Harmony. Currently, B is not a province (since provinces were A, C, D initially, and after Overcome A from D, D lost province, so now provinces are A and C). So B is not a province. Therefore, can\u0027t attack B because the precondition requires Province B. So that\u0027s a problem.\n\nHmm. So maybe we need to make B a province first. How?\n\nAlternatively, perhaps we can get B to have Province through some other action. Let\u0027s think.\n\nAlternatively, maybe we can first get B to be a province via some other path. Let\u0027s see.\n\nWait, the only way to get Province back is through Succumb. Because when you succumb an object, you regain Province, Planet, Harmony. So if we can get Pain on B, then we can succumb B to get Province back. But how to get Pain on B?\n\nAlternatively, maybe we can attack another object that has B as a planet? Wait, not sure.\n\nAlternatively, perhaps we need to do some other actions first. Let\u0027s backtrack a bit.\n\nAfter the first two actions (Attack A and Overcome A from D), the current state is:\n\n- Province A (yes), C (yes, since only D\u0027s province was removed), D (no).\n\n- Planet A (no, because after attacking A, Planet A was set to false), Planet B (yes), Planet D (yes).\n\n- Harmony (yes, because Overcome A from D restored it).\n\n- A craves D (good).\n\n- C still craves B (original craving).\n\nNow, to get B to crave C, we need to do Overcome B from C. But to do that, B must have Pain. To get Pain on B, we need to attack B. But B is not a province, so can\u0027t attack it. So maybe we need to make B a province first.\n\nHow can B become a province? The only way is through Succumb. Because when you succumb an object, you regain Province, Planet, Harmony. But to succumb, you need Pain on the object. So to get Pain on B, perhaps we can attack another object that has B as a planet? Wait, not sure.\n\nAlternatively, maybe we can attack another object that has B as a planet? 
Wait, the attack action\u0027s preconditions are Province, Planet, and Harmony. So if we can get B to have Province, then we can attack it. But how?\n\nAlternatively, maybe we can use Feast. Let\u0027s see Feast\u0027s preconditions: Object Craves other, Province of the object, and Harmony. The effect is Pain on the object, Province of the other, and loses Craves, Province, Harmony.\n\nSuppose we do a Feast on C from B? Wait, but C craves B. So \"Feast C from B\" would require that C craves B (which it does), Province C (yes), and Harmony (yes). So that\u0027s possible.\n\nLet me try that.\n\nAction 3: Feast C from B.\n\nWait, the syntax is \"Feast object from another object\". So the first object is the one doing the feasting, and the second is the other. So Feast C from B means C is the object, and B is the other. So the preconditions are:\n\n- C craves B (yes),\n\n- Province C (yes),\n\n- Harmony (yes).\n\nSo that\u0027s okay.\n\nAfter performing Feast C from B:\n\nEffects:\n\n- Pain C is true.\n\n- Province B becomes true (since the other is B, so Province of the other (B) becomes true? Wait, the effect of Feast is:\n\nOnce Feast is performed, the following will be true: Pain object (C), Province other object (B).\n\nAnd the following will be false: Object Craves other (C craves B is removed), Province object (C\u0027s province is lost), Harmony.\n\nSo after this:\n\n- Pain C is true.\n\n- Province B is now true (since the other is B).\n\n- C no longer craves B (so that craving is gone, which is part of the initial condition but not part of the goal, so that\u0027s okay).\n\n- Province C is now false (since it was the object, so its province is removed).\n\n- Harmony is now false.\n\nSo now, after this action, B is a province. That\u0027s useful.\n\nNow, with B being a province, we can attack B. Let\u0027s see.\n\nAction 4: Attack B.\n\nPreconditions: Province B (yes, from the Feast), Planet B (yes, since Planet B was always there), Harmony (currently false, because Feast turned it off). Wait, Harmony is required for Attack. So Harmony is not true here. So can\u0027t attack B yet.\n\nHmm, problem. So after the Feast, Harmony is false. So to attack B, we need Harmony. How to get Harmony back?\n\nWe can do Succumb on an object that has Pain. Currently, Pain C is true (from the Feast). So we can Succumb C.\n\nAction 5: Succumb C.\n\nPreconditions: Pain C (yes).\n\nAfter Succumb C:\n\n- Province C is restored (since it was lost after Feast),\n\n- Planet C? Wait, the Succumb action\u0027s effects are: when you succumb an object, you get Province, Planet, Harmony, and lose Pain.\n\nWait, the exact effects for Succumb are:\n\nOnce Succumb is performed, the following will be true: Province object, Planet object, Harmony.\n\nAnd the following will be false: Pain object.\n\nSo for Succumb C:\n\n- Province C becomes true again,\n\n- Planet C is set to true (but was it a planet before? The initial conditions only mention planets A, B, D. So Planet C was not part of the initial conditions. So after attacking, etc., Planet C\u0027s status? Initially, Planet C wasn\u0027t there. So when we succumb C, does Planet C become true? The action says \"Planet object\", so yes, it would set Planet C to true even if it wasn\u0027t before? Wait, the problem says \"Planet object\" as a fact. So if the object wasn\u0027t a planet before, does the succumb action set it to true? 
The description says \"Once Succumb action is performed the following will be true: Province object, Planet object, Harmony.\" So yes, regardless of previous state, those are set to true. So after Succumb C, Planet C is now true, even if it wasn\u0027t before.\n\nBut that might complicate things, but let\u0027s proceed.\n\nSo after Succumb C:\n\n- Pain C is false.\n\n- Province C is true again,\n\n- Planet C is now true,\n\n- Harmony is true again.\n\nSo now, Harmony is back. Now, can we attack B?\n\nYes, because:\n\n- Province B is true (from the Feast),\n\n- Planet B is true (original),\n\n- Harmony is true (from Succumb C).\n\nSo Action 6: Attack B.\n\nAfter Attack B:\n\n- Pain B is true.\n\n- Province B becomes false,\n\n- Planet B becomes false,\n\n- Harmony becomes false.\n\nNow, with Pain B, we can do Succumb B to get Province back, but maybe we can use the Pain to do Overcome.\n\nWait, the goal is to get B to crave C. To do that, we need to do Overcome B from C. Let\u0027s see the preconditions for that:\n\n- Province of the other (C) must be true (yes, since after Succumb C, Province C is true),\n\n- Pain of the object (B) must be true (yes, from Attack B).\n\nSo yes, we can do Overcome B from C now.\n\nAction 7: Overcome B from C.\n\nEffects:\n\n- True: Harmony, Province B, B craves C.\n\n- False: Province C (since C is the other object here), Pain B.\n\nSo after this:\n\n- Harmony is true,\n\n- Province B is true,\n\n- B craves C (good for the second goal),\n\n- Province C is now false (since C was the other object in the Overcome),\n\n- Pain B is gone.\n\nNow, after this, we have achieved the second goal (B craves C). Now, moving on to the third goal: D craves B.\n\nTo get D to crave B, we need to do Overcome D from B. Let\u0027s see the preconditions:\n\n- Province of the other (B) must be true (yes, since after Overcome B from C, Province B is true),\n\n- Pain of the object (D) must be true.\n\nSo how to get Pain on D?\n\nTo get Pain on D, we can attack D. To attack D, we need:\n\n- Province D (currently, after previous steps, let\u0027s check D\u0027s province status.\n\nLooking back:\n\nAfter the first Overcome A from D, D\u0027s province was set to false. Then, in subsequent steps, D\u0027s province hasn\u0027t been addressed. So currently, D\u0027s province is still false (since after the first Overcome A from D, D lost province, and no action since then has restored it except maybe through Succumb? Let\u0027s track D\u0027s state.\n\nAfter the first two actions (Attack A and Overcome A from D):\n\n- D\u0027s province was set to false.\n\nThen, after Feast C from B, Province B became true, but D\u0027s province remained false.\n\nAfter Succumb C, Province C was restored, but D\u0027s still false.\n\nAfter Attack B, no effect on D\u0027s province.\n\nAfter Overcome B from C, Province C was set to false, but D\u0027s province is still false.\n\nSo D is not a province. To attack D, we need it to be a province. So how to get D to be a province?\n\nAlternatively, maybe we can get D to have Pain through another action. Let\u0027s think.\n\nAlternatively, perhaps we can make D a province via Succumb. To do that, D must have Pain. To get Pain on D, perhaps through an attack on another object that has D as a planet? Not sure.\n\nAlternatively, maybe we can do a Feast on D from someone else. 
Let\u0027s see.\n\nWait, to do Feast on D from another object, the preconditions are:\n\n- D craves the other object,\n\n- Province D,\n\n- Harmony.\n\nBut D\u0027s province is currently false, so can\u0027t do that.\n\nAlternatively, maybe we can first get D to have Province again. How?\n\nPerhaps by doing a Feast on another object that has D as the other. Let\u0027s see.\n\nWait, let\u0027s think of another path. Let\u0027s see what\u0027s the current state after the seventh action (Overcome B from C):\n\nCurrent state:\n\n- Province A: yes (from Overcome A from D, which set it back),\n\n- Province B: yes (from Overcome B from C),\n\n- Province C: no (from Overcome B from C),\n\n- Province D: no (still from the first Overcome).\n\nPlanets:\n\n- Planet A: no (from Attack A),\n\n- Planet B: no (from Attack B),\n\n- Planet C: yes (from Succumb C),\n\n- Planet D: yes (still, since only Attack on A and B affected their planets).\n\nHarmony: yes (from Overcome B from C).\n\nCravings:\n\n- A craves D (from Overcome A from D),\n\n- B craves C (from Overcome B from C),\n\n- C no longer craves B (from the Feast C from B).\n\nNow, to get D to crave B, we need Overcome D from B. The preconditions are:\n\n- Province of the other (B) is true (yes),\n\n- Pain of D must be true.\n\nTo get Pain on D, we can attack D. But to attack D, we need:\n\n- Province D (no),\n\n- Planet D (yes),\n\n- Harmony (yes).\n\nSo Province D is needed. How to get D to have Province?\n\nPerhaps by doing a Feast on another object that results in D gaining Province. Let\u0027s see.\n\nSuppose we do a Feast on A from D. Wait, but A craves D (from the first goal). So \"Feast A from D\" requires:\n\n- A craves D (yes),\n\n- Province A (yes),\n\n- Harmony (yes).\n\nSo that\u0027s possible.\n\nAction 8: Feast A from D.\n\nEffects:\n\n- Pain A is true,\n\n- Province D becomes true (since the other is D),\n\n- A no longer craves D (which is bad, because that was a goal),\n\n- Province A is set to false,\n\n- Harmony is set to false.\n\nHmm, but we need A to still crave D. So this would undo the first goal. Not good.\n\nAlternatively, maybe another approach. Let\u0027s see.\n\nAlternatively, to get D to have Province, perhaps we can do a Feast on someone else. Let\u0027s see.\n\nAlternatively, maybe we can attack another object to get Pain on D indirectly. Not sure.\n\nAlternatively, let\u0027s try to get D to have Pain through another action. Since D is not a province, can\u0027t attack it. Maybe through Overcome?\n\nWait, Overcome requires the other to have Province and the object to have Pain. To get D to have Pain, perhaps we can do Overcome on D from someone else, but that would require D has Pain already, which is not the case.\n\nHmm, this is getting complicated. Let\u0027s think differently.\n\nWe need D to crave B. To do that, Overcome D from B requires D has Pain. To get Pain on D, perhaps we can attack D, but need Province D. To get Province D, maybe through Feast.\n\nWait, let\u0027s try another path. Let\u0027s see if we can get D to have Province via Feast.\n\nSuppose we do Feast on C from D. But C currently does not crave D. The initial craving was C craves B, but after Feast C from B, that craving was removed. So unless there\u0027s another craving, we can\u0027t do that.\n\nAlternatively, maybe we can get D to crave someone else first, then Feast.\n\nAlternatively, perhaps we can do Overcome on D from another object to get D to crave that object, but that would require Pain on D, which we can\u0027t get yet.\n\nHmm. 
Maybe we need to backtrack and see if there\u0027s a different path.\n\nLet me recap the current state after the seventh action (Overcome B from C):\n\n- A craves D (good),\n\n- B craves C (good),\n\n- Need D to crave B.\n\nCurrent provinces: A, B.\n\nPlanets: C and D.\n\nHarmony is true.\n\nD\u0027s province is still false.\n\nTo get D to crave B, need Overcome D from B. To do that, D must have Pain. To get Pain on D, need to attack D, but need Province D. To get Province D, perhaps via Feast.\n\nWait, let\u0027s try to get D to have Province via Feast. Let\u0027s see:\n\nSuppose we do Feast on someone else who craves D. But currently, no one craves D except maybe A, but A\u0027s craving for D was achieved through Overcome, but after that, if we do a Feast on A from D, that would remove the craving. Not good.\n\nAlternatively, maybe we can first get someone to crave D, then Feast them to get D\u0027s province.\n\nWait, to get someone to crave D, perhaps Overcome on that someone from D. But that requires Pain on the someone and D has Province. D\u0027s province is false, so can\u0027t do that.\n\nHmm. Maybe we need to first get D to have Province through another method.\n\nAlternatively, maybe we can do a Feast on D from someone else, but D can\u0027t crave anything unless we set that up.\n\nAlternatively, perhaps we can do a Feast on C from D, but C doesn\u0027t crave D. To make C crave D, we can do Overcome on C from D. But that requires Pain on C and D has Province. D\u0027s province is false, so no.\n\nAlternatively, let\u0027s try to get D\u0027s province via Succumb. To do that, D must have Pain. To get Pain on D, perhaps through an attack on another object that has D as a planet? Not sure.\n\nWait, the only way to get Pain on D is to attack D, but that requires D to be a province. So it\u0027s a loop.\n\nHmm. Maybe I need to try a different approach earlier in the plan.\n\nLet me think again. Let\u0027s see the third goal: D craves B. To get that, Overcome D from B requires D has Pain. To get Pain on D, need to attack D. To attack D, need Province D, Planet D (which it has), and Harmony.\n\nSo to get Province D, perhaps we can do a Feast on someone else that results in D gaining province. Let\u0027s see:\n\nSuppose we do Feast on A from D. But that requires A craves D (which it does), Province A (yes), Harmony (yes). So that\u0027s possible. Let\u0027s try that as an earlier step.\n\nWait, but if we do that before achieving the first goal, it would remove A\u0027s craving for D. So maybe we need to do that after getting the first goal.\n\nWait, let\u0027s try modifying the plan:\n\nAfter achieving the first two goals (A craves D and B craves C), we need to handle D\u0027s craving.\n\nWait, let\u0027s see the current state after the seventh action (Overcome B from C):\n\n- A craves D (good),\n\n- B craves C (good),\n\n- D\u0027s province is false,\n\n- Harmony is true,\n\n- Planets: C and D.\n\nSo to get D to crave B, we need Overcome D from B. To do that, D must have Pain. To get Pain on D, need to attack D. To attack D, need Province D. To get Province D, perhaps via Feast.\n\nLet me try:\n\nAction 8: Feast A from D.\n\nWait, but that would require A craves D (yes), Province A (yes), Harmony (yes). So yes.\n\nAfter this Feast:\n\n- Pain A is true,\n\n- Province D becomes true,\n\n- A no longer craves D (bad),\n\n- Province A is false,\n\n- Harmony is false.\n\nThis undoes the first goal (A craves D), which is not good. So that\u0027s a problem.\n\nHmm. 
Maybe instead of that, we can do a Feast on another object. Let\u0027s see.\n\nWhat if we do Feast on C from D? But C doesn\u0027t crave D. To make C crave D, we can do Overcome on C from D. But that requires Pain on C and D has Province (which it doesn\u0027t). So not possible.\n\nAlternatively, maybe we can first get D to have Province via another path.\n\nWait, perhaps we can do a Feast on D from someone else. But D can\u0027t crave anything unless we set that up.\n\nAlternatively, maybe we can do a Feast on B from D. But B doesn\u0027t crave D. To make B crave D, we can do Overcome B from D. But that requires Pain on B and D has Province (no).\n\nHmm. This is tricky. Maybe I need to find another way to get D to have Pain without losing the first goal.\n\nAlternatively, maybe we can do a Feast on another object that doesn\u0027t affect the first goal. Let\u0027s see.\n\nWait, what about using the Overcome action on D from another object that has Province. Let\u0027s see:\n\nSuppose we do Overcome D from A. The preconditions are:\n\n- Province of the other (A) must be true (yes, currently A\u0027s province is true? Wait, after the seventh action (Overcome B from C), let\u0027s check A\u0027s province.\n\nAfter Overcome A from D (action 2), Province A was set to true. Then, after Feast C from B (action 3), Province A wasn\u0027t affected. Then, after Succumb C (action5), Province A remained true. After Attack B (action6), Province A still true. After Overcome B from C (action7), Province A is still true. So yes, A has Province.\n\nSo Overcome D from A requires:\n\n- Province of the other (A) is true (yes),\n\n- Pain of D must be true (no).\n\nSo can\u0027t do that yet.\n\nHmm. Maybe I need to find a way to get Pain on D without attacking it. Alternatively, perhaps through another Overcome.\n\nAlternatively, maybe we can do a Feast on D from someone else, but that requires D to crave that someone. Let\u0027s see.\n\nAlternatively, perhaps we can first get D to crave B through another path. Let\u0027s think of the Overcome action again.\n\nWait, the Overcome action\u0027s effect is that the object (first parameter) craves the other (second). So to get D to crave B, we need Overcome D from B. The preconditions are:\n\n- Province of the other (B) is true (yes),\n\n- Pain of D must be true.\n\nSo to get Pain on D, perhaps we can attack D, but need Province D. To get Province D, maybe via Feast on someone else who craves D. But no one does yet.\n\nAlternatively, maybe we can first get someone to crave D, then Feast them to get D\u0027s province.\n\nLet me try:\n\nAction 8: Overcome C from D. Wait, but to do that, the preconditions are:\n\n- Province of the other (D) must be true (no),\n\n- Pain of C must be true (C\u0027s Pain? Let\u0027s see current state.\n\nAfter the seventh action (Overcome B from C), Pain B was removed, but C\u0027s Pain? Let\u0027s see:\n\nAfter Feast C from B (action3), Pain C was true. Then, after Succumb C (action5), Pain C was removed. So currently, Pain C is false.\n\nSo can\u0027t do Overcome C from D because Pain C is false and D\u0027s province is false.\n\nHmm.\n\nAlternatively, maybe we can do a Feast on B from D. But B doesn\u0027t crave D. To make B crave D, we can do Overcome B from D. But that requires Pain on B and D has Province (no).\n\nThis is getting too tangled. Maybe I need to try a different approach in the plan.\n\nLet me think of the third goal again: D craves B. To get that, Overcome D from B requires Pain on D. To get Pain on D, need to attack D. 
To attack D, need Province D. To get Province D, perhaps via Feast on someone who craves D. But no one does yet. So maybe first, we need to make someone crave D, then Feast them to get D\u0027s province, then attack D to get Pain, then Overcome.\n\nLet\u0027s try this path:\n\nFirst, to make someone crave D, perhaps Overcome on that someone from D. But that requires Pain on them and D has Province (which it doesn\u0027t). So not possible.\n\nAlternatively, maybe we can do a Feast on someone else to get D\u0027s province, then attack.\n\nWait, let\u0027s try:\n\nAction 8: Feast A from D. But that would remove A\u0027s craving for D. Not good. But maybe we can do this and then fix it later.\n\nSo:\n\nAction 8: Feast A from D.\n\nEffects:\n\n- Pain A is true,\n\n- Province D becomes true,\n\n- A no longer craves D,\n\n- Province A is false,\n\n- Harmony is false.\n\nNow, D has Province. Now, we can attack D.\n\nAction9: Attack D.\n\nPreconditions: Province D (yes), Planet D (yes), Harmony (no). Wait, Harmony is false here. So can\u0027t attack D yet.\n\nHmm. Need Harmony. To get Harmony, we can Succumb someone with Pain. Currently, Pain A is true (from Feast). So:\n\nAction10: Succumb A.\n\nPreconditions: Pain A (yes).\n\nAfter Succumb A:\n\n- Province A is true,\n\n- Planet A is true,\n\n- Harmony is true,\n\n- Pain A is false.\n\nNow, Harmony is back. Now, can attack D.\n\nAction11: Attack D.\n\nPreconditions: Province D (yes), Planet D (yes), Harmony (yes).\n\nAfter Attack D:\n\n- Pain D is true,\n\n- Province D becomes false,\n\n- Planet D becomes false,\n\n- Harmony becomes false.\n\nNow, Pain D is true. Now, we can do Overcome D from B.\n\nAction12: Overcome D from B.\n\nPreconditions:\n\n- Province of the other (B) is true (yes),\n\n- Pain of D is true (yes).\n\nSo yes.\n\nEffects:\n\n- Harmony becomes true,\n\n- Province D is set to true (since D is the object, and the other is B),\n\n- D craves B (good),\n\n- Province B is set to false,\n\n- Pain D is false.\n\nWait, the effects of Overcome are:\n\n- True: Harmony, Province object (D), Object (D) craves other (B).\n\n- False: Province other (B), Pain object (D).\n\nSo after this:\n\n- Harmony is true,\n\n- Province D is true,\n\n- D craves B (good),\n\n- Province B is now false,\n\n- Pain D is false.\n\nNow, we have achieved the third goal (D craves B). But now, we have to check the other goals.\n\nThe first goal was A craves D, but after Feast A from D (action8), that was removed. So we need to fix that.\n\nHmm, so this path undoes the first goal. Not good. So this approach is problematic.\n\nHmm. Maybe there\u0027s another way. Let\u0027s think again.\n\nAlternative approach: After achieving the first two goals (A craves D and B craves C), we need to get D to crave B without affecting the first goal.\n\nLet me see the state after the seventh action (Overcome B from C):\n\n- A craves D (good),\n\n- B craves C (good),\n\n- D\u0027s province is false,\n\n- Harmony is true,\n\n- Planets: C and D.\n\nTo get D to crave B, need Overcome D from B. To do that, D must have Pain. To get Pain on D, need to attack D. To attack D, need Province D. To get Province D, perhaps via Feast on someone who cr",
"statistics_text_generation_0": {
"input_tokens": 670,
"output_tokens": 7523
}
},
"generation": "Okay, let\u0027s see. I need to figure out a plan to reach the goal where Block A craves D, Block B craves C, and D craves B. The initial conditions are that Block C craves B, there\u0027s harmony, planets A, B, D, and provinces A, C, D. So first, I should list out all the initial facts and then see how each action can change them step by step.\n\nStarting with the initial state:\n\n- Province Block A\n- Province Block C\n- Province Block D\n- Planet Block A\n- Planet Block B\n- Planet Block D\n- Harmony\n- Block C craves Block B\n\nThe goal is:\n\n- Block A craves D\n- Block B craves C\n- Block D craves B\n\nHmm. Let\u0027s think about the actions available. The actions are Attack, Feast, Succumb, and Overcome. Each has preconditions and effects. Let me recall their details.\n\nFirst, Attack requires Province, Planet, and Harmony. When you attack, you get Pain, but lose Province, Planet, and Harmony. So after attacking, Pain is true, but those three are false. But then Succumb requires Pain, and when you succumb, you regain Province, Planet, Harmony, and lose Pain. So maybe Attack and Succumb can be used to toggle some states?\n\nOvercome requires Province of another object and Pain. Wait, the preconditions for Overcome are \"Province other object\" and \"Pain object\". Wait, the exact wording was: \"To perform Overcome action, the following needs to be true: Province other object, Pain object.\" So the object being overcome is from another object. Wait, the syntax is \"Overcome object from another object\". So maybe the first object is the target, and the second is the other? Not sure, but the preconditions are that the other object has Province, and the object (the first one) has Pain. Then, when you overcome, you get Harmony, Province of the first object, and the first object craves the other. And you lose Province of the other, Pain of the first, and the other\u0027s Province?\n\nWait, the effects of Overcome are:\n\n- True: Harmony, Province object, Object Craves other object.\n- False: Province other object, Pain object.\n\nWait, so after overcoming, the first object (the one being overcome?) gets Province, and the other loses Province. Also, the first object now craves the other. And Pain on the first is removed, and Harmony is restored.\n\nFeast requires Object Craves other, Province of the object, and Harmony. When you feast, you get Pain on the object, Province of the other, and lose Craves, Province, and Harmony.\n\nSo the plan needs to manipulate these to reach the goals.\n\nLet me note the goals again:\n\nGoal 1: Block A craves D \u2192 So we need to have \"Object Craves other object\" where the object is A and other is D.\n\nGoal 2: Block B craves C \u2192 Object is B, other is C.\n\nGoal 3: Block D craves B \u2192 Object is D, other is B.\n\nLooking at the initial conditions, Block C already craves B. So that\u0027s part of the third goal (D needs to crave B, but C already does). Wait, but the third goal is D craves B, so that\u0027s separate.\n\nFirst, let\u0027s see how to get A to crave D. To get A to crave D, perhaps using Overcome? Because Overcome\u0027s effect is that the object (first parameter) craves the other (second). So if we do Overcome A from D, then A would crave D. But to do that, the preconditions are that the other (D) has Province, and the object (A) has Pain. Wait, the preconditions for Overcome are \"Province other object\" and \"Pain object\". So the other (D) must have Province, and the object (A) must have Pain. 
\n\nSo to set up for Overcome A from D, we need D to have Province (which it does initially), and A to have Pain. How do we get Pain on A? That comes from an Attack action. But Attack requires Province, Planet, and Harmony. Let\u0027s see.\n\nAlternatively, maybe through Feast. Let\u0027s think step by step.\n\nFirst, let\u0027s see the initial state:\n\nProvinces: A, C, D (so B is not a province? Wait, initial conditions say province Block A, C, D. So B is not a province. Wait, the initial conditions are \"province Block A, province Block C, province Block D\" so B is not a province. Planets are A, B, D. Harmony is present. And C craves B.\n\nSo the initial provinces are A, C, D. Planets are A, B, D. So B is a planet but not a province. Hmm.\n\nThe goal requires that Block B craves C. So to get B to crave C, perhaps we need to do an Overcome on B from C? Because Overcome B from C would make B crave C. The preconditions for that would be that the other (C) has Province (which it does), and B has Pain. To get B to have Pain, we need to attack B. But attacking B requires that B has Province, Planet, and Harmony. Wait, but B is not a province (since provinces are A, C, D). So can\u0027t attack B directly. Hmm.\n\nAlternatively, maybe we can first get B to be a province? How?\n\nAlternatively, maybe through some sequence of actions. Let\u0027s think step by step.\n\nFirst, let\u0027s see the goals again. Let\u0027s tackle each craving one by one.\n\nGoal 1: A craves D. To get that, Overcome A from D. To do that, need D has Province (which it does), and A has Pain. To get A to have Pain, we can attack A. But attacking A requires that A has Province, Planet, and Harmony. A does have Province and Planet (since it\u0027s a planet and province). Harmony is present. So attacking A would work. Let\u0027s see:\n\nAction 1: Attack A.\n\nPreconditions: Province A (yes), Planet A (yes), Harmony (yes). So that\u0027s okay.\n\nAfter Attack A:\n\n- Pain A is true.\n\n- Province A, Planet A, Harmony are false.\n\nSo now, Harmony is gone, A has Pain, and no longer has Province or Planet. But Planet is a separate fact. Wait, the attack action\u0027s effects are that Province, Planet, and Harmony become false. So after attacking A, Planet A is still there? Wait, no. Wait the attack action\u0027s effects are that the following will be false: Province object, Planet object, Harmony. Wait, the object here is A. So Planet A would be set to false. Wait, but the initial conditions have Planet A as true. So after attacking A, Planet A becomes false. Hmm, but maybe that\u0027s okay.\n\nBut then, after attacking A, we can do Succumb on A to regain Province, Planet, Harmony, and lose Pain. But maybe that\u0027s a path.\n\nAlternatively, after attacking A, we can proceed to Overcome A from D. Let\u0027s see:\n\nAfter Attack A:\n\n- Pain A is true.\n\n- Province A is false, Planet A is false, Harmony is false.\n\nSo to do Overcome A from D, the preconditions are:\n\n- Province of the other (D) must be true (which it is, since D is still a province? Wait, after attacking A, the only things that changed were A\u0027s Province, Planet, and Harmony. So D\u0027s Province remains true.\n\n- Pain of the object (A) is true (yes, because of the attack).\n\nSo yes, Overcome A from D can be done now. 
Let\u0027s do that.\n\nAction 2: Overcome A from D.\n\nEffects:\n\n- True: Harmony, Province A, A craves D.\n\n- False: Province D (since D is the other object here), Pain A.\n\nSo after this:\n\n- Harmony is true again.\n\n- Province A is true again.\n\n- A craves D (good for goal 1).\n\n- Province D is now false (since D was the other object in the Overcome).\n\n- Pain A is gone.\n\nSo now, after these two actions (Attack A, Overcome A from D), we have achieved A craves D. That\u0027s one goal done.\n\nNow, moving on to the next goal: Block B craves C. To get B to crave C, we need to do Overcome B from C. Let\u0027s see the preconditions for that:\n\n- Province of the other (C) must be true. Currently, after the previous steps, C\u0027s province status: initially it was province, but after the Overcome A from D, the other object was D, so C\u0027s province wasn\u0027t affected. Wait, in the Overcome A from D, the other object is D, so the Province of D is set to false. But C\u0027s province is still true? Wait, no, in the Overcome A from D, the other is D, so the Province of D is set to false. Wait, the effect of Overcome is that the Province of the other object (D) becomes false. So after that, D is no longer a province. But C\u0027s province is still true, because it wasn\u0027t involved in that Overcome.\n\nSo to do Overcome B from C, the other is C, so we need C to have Province (which it does), and B must have Pain. How to get B to have Pain?\n\nTo get Pain on B, we need to attack B. But attacking B requires that B has Province, Planet, and Harmony. Currently, B is not a province (since provinces were A, C, D initially, and after Overcome A from D, D lost province, so now provinces are A and C). So B is not a province. Therefore, can\u0027t attack B because the precondition requires Province B. So that\u0027s a problem.\n\nHmm. So maybe we need to make B a province first. How?\n\nAlternatively, perhaps we can get B to have Province through some other action. Let\u0027s think.\n\nAlternatively, maybe we can first get B to be a province via some other path. Let\u0027s see.\n\nWait, the only way to get Province back is through Succumb. Because when you succumb an object, you regain Province, Planet, Harmony. So if we can get Pain on B, then we can succumb B to get Province back. But how to get Pain on B?\n\nAlternatively, maybe we can attack another object that has B as a planet? Wait, not sure.\n\nAlternatively, perhaps we need to do some other actions first. Let\u0027s backtrack a bit.\n\nAfter the first two actions (Attack A and Overcome A from D), the current state is:\n\n- Province A (yes), C (yes, since only D\u0027s province was removed), D (no).\n\n- Planet A (no, because after attacking A, Planet A was set to false), Planet B (yes), Planet D (yes).\n\n- Harmony (yes, because Overcome A from D restored it).\n\n- A craves D (good).\n\n- C still craves B (original craving).\n\nNow, to get B to crave C, we need to do Overcome B from C. But to do that, B must have Pain. To get Pain on B, we need to attack B. But B is not a province, so can\u0027t attack it. So maybe we need to make B a province first.\n\nHow can B become a province? The only way is through Succumb. Because when you succumb an object, you regain Province, Planet, Harmony. But to succumb, you need Pain on the object. So to get Pain on B, perhaps we can attack another object that has B as a planet? Wait, not sure.\n\nAlternatively, maybe we can attack another object that has B as a planet? 
Wait, the attack action\u0027s preconditions are Province, Planet, and Harmony. So if we can get B to have Province, then we can attack it. But how?\n\nAlternatively, maybe we can use Feast. Let\u0027s see Feast\u0027s preconditions: Object Craves other, Province of the object, and Harmony. The effect is Pain on the object, Province of the other, and loses Craves, Province, Harmony.\n\nSuppose we do a Feast on C from B? Wait, but C craves B. So \"Feast C from B\" would require that C craves B (which it does), Province C (yes), and Harmony (yes). So that\u0027s possible.\n\nLet me try that.\n\nAction 3: Feast C from B.\n\nWait, the syntax is \"Feast object from another object\". So the first object is the one doing the feasting, and the second is the other. So Feast C from B means C is the object, and B is the other. So the preconditions are:\n\n- C craves B (yes),\n\n- Province C (yes),\n\n- Harmony (yes).\n\nSo that\u0027s okay.\n\nAfter performing Feast C from B:\n\nEffects:\n\n- Pain C is true.\n\n- Province B becomes true (since the other is B, so Province of the other (B) becomes true? Wait, the effect of Feast is:\n\nOnce Feast is performed, the following will be true: Pain object (C), Province other object (B).\n\nAnd the following will be false: Object Craves other (C craves B is removed), Province object (C\u0027s province is lost), Harmony.\n\nSo after this:\n\n- Pain C is true.\n\n- Province B is now true (since the other is B).\n\n- C no longer craves B (so that craving is gone, which is part of the initial condition but not part of the goal, so that\u0027s okay).\n\n- Province C is now false (since it was the object, so its province is removed).\n\n- Harmony is now false.\n\nSo now, after this action, B is a province. That\u0027s useful.\n\nNow, with B being a province, we can attack B. Let\u0027s see.\n\nAction 4: Attack B.\n\nPreconditions: Province B (yes, from the Feast), Planet B (yes, since Planet B was always there), Harmony (currently false, because Feast turned it off). Wait, Harmony is required for Attack. So Harmony is not true here. So can\u0027t attack B yet.\n\nHmm, problem. So after the Feast, Harmony is false. So to attack B, we need Harmony. How to get Harmony back?\n\nWe can do Succumb on an object that has Pain. Currently, Pain C is true (from the Feast). So we can Succumb C.\n\nAction 5: Succumb C.\n\nPreconditions: Pain C (yes).\n\nAfter Succumb C:\n\n- Province C is restored (since it was lost after Feast),\n\n- Planet C? Wait, the Succumb action\u0027s effects are: when you succumb an object, you get Province, Planet, Harmony, and lose Pain.\n\nWait, the exact effects for Succumb are:\n\nOnce Succumb is performed, the following will be true: Province object, Planet object, Harmony.\n\nAnd the following will be false: Pain object.\n\nSo for Succumb C:\n\n- Province C becomes true again,\n\n- Planet C is set to true (but was it a planet before? The initial conditions only mention planets A, B, D. So Planet C was not part of the initial conditions. So after attacking, etc., Planet C\u0027s status? Initially, Planet C wasn\u0027t there. So when we succumb C, does Planet C become true? The action says \"Planet object\", so yes, it would set Planet C to true even if it wasn\u0027t before? Wait, the problem says \"Planet object\" as a fact. So if the object wasn\u0027t a planet before, does the succumb action set it to true? 
The description says \"Once Succumb action is performed the following will be true: Province object, Planet object, Harmony.\" So yes, regardless of previous state, those are set to true. So after Succumb C, Planet C is now true, even if it wasn\u0027t before.\n\nBut that might complicate things, but let\u0027s proceed.\n\nSo after Succumb C:\n\n- Pain C is false.\n\n- Province C is true again,\n\n- Planet C is now true,\n\n- Harmony is true again.\n\nSo now, Harmony is back. Now, can we attack B?\n\nYes, because:\n\n- Province B is true (from the Feast),\n\n- Planet B is true (original),\n\n- Harmony is true (from Succumb C).\n\nSo Action 6: Attack B.\n\nAfter Attack B:\n\n- Pain B is true.\n\n- Province B becomes false,\n\n- Planet B becomes false,\n\n- Harmony becomes false.\n\nNow, with Pain B, we can do Succumb B to get Province back, but maybe we can use the Pain to do Overcome.\n\nWait, the goal is to get B to crave C. To do that, we need to do Overcome B from C. Let\u0027s see the preconditions for that:\n\n- Province of the other (C) must be true (yes, since after Succumb C, Province C is true),\n\n- Pain of the object (B) must be true (yes, from Attack B).\n\nSo yes, we can do Overcome B from C now.\n\nAction 7: Overcome B from C.\n\nEffects:\n\n- True: Harmony, Province B, B craves C.\n\n- False: Province C (since C is the other object here), Pain B.\n\nSo after this:\n\n- Harmony is true,\n\n- Province B is true,\n\n- B craves C (good for the second goal),\n\n- Province C is now false (since C was the other object in the Overcome),\n\n- Pain B is gone.\n\nNow, after this, we have achieved the second goal (B craves C). Now, moving on to the third goal: D craves B.\n\nTo get D to crave B, we need to do Overcome D from B. Let\u0027s see the preconditions:\n\n- Province of the other (B) must be true (yes, since after Overcome B from C, Province B is true),\n\n- Pain of the object (D) must be true.\n\nSo how to get Pain on D?\n\nTo get Pain on D, we can attack D. To attack D, we need:\n\n- Province D (currently, after previous steps, let\u0027s check D\u0027s province status.\n\nLooking back:\n\nAfter the first Overcome A from D, D\u0027s province was set to false. Then, in subsequent steps, D\u0027s province hasn\u0027t been addressed. So currently, D\u0027s province is still false (since after the first Overcome A from D, D lost province, and no action since then has restored it except maybe through Succumb? Let\u0027s track D\u0027s state.\n\nAfter the first two actions (Attack A and Overcome A from D):\n\n- D\u0027s province was set to false.\n\nThen, after Feast C from B, Province B became true, but D\u0027s province remained false.\n\nAfter Succumb C, Province C was restored, but D\u0027s still false.\n\nAfter Attack B, no effect on D\u0027s province.\n\nAfter Overcome B from C, Province C was set to false, but D\u0027s province is still false.\n\nSo D is not a province. To attack D, we need it to be a province. So how to get D to be a province?\n\nAlternatively, maybe we can get D to have Pain through another action. Let\u0027s think.\n\nAlternatively, perhaps we can make D a province via Succumb. To do that, D must have Pain. To get Pain on D, perhaps through an attack on another object that has D as a planet? Not sure.\n\nAlternatively, maybe we can do a Feast on D from someone else. 
Let\u0027s see.\n\nWait, to do Feast on D from another object, the preconditions are:\n\n- D craves the other object,\n\n- Province D,\n\n- Harmony.\n\nBut D\u0027s province is currently false, so can\u0027t do that.\n\nAlternatively, maybe we can first get D to have Province again. How?\n\nPerhaps by doing a Feast on another object that has D as the other. Let\u0027s see.\n\nWait, let\u0027s think of another path. Let\u0027s see what\u0027s the current state after the seventh action (Overcome B from C):\n\nCurrent state:\n\n- Province A: yes (from Overcome A from D, which set it back),\n\n- Province B: yes (from Overcome B from C),\n\n- Province C: no (from Overcome B from C),\n\n- Province D: no (still from the first Overcome).\n\nPlanets:\n\n- Planet A: no (from Attack A),\n\n- Planet B: no (from Attack B),\n\n- Planet C: yes (from Succumb C),\n\n- Planet D: yes (still, since only Attack on A and B affected their planets).\n\nHarmony: yes (from Overcome B from C).\n\nCravings:\n\n- A craves D (from Overcome A from D),\n\n- B craves C (from Overcome B from C),\n\n- C no longer craves B (from the Feast C from B).\n\nNow, to get D to crave B, we need Overcome D from B. The preconditions are:\n\n- Province of the other (B) is true (yes),\n\n- Pain of D must be true.\n\nTo get Pain on D, we can attack D. But to attack D, we need:\n\n- Province D (no),\n\n- Planet D (yes),\n\n- Harmony (yes).\n\nSo Province D is needed. How to get D to have Province?\n\nPerhaps by doing a Feast on another object that results in D gaining Province. Let\u0027s see.\n\nSuppose we do a Feast on A from D. Wait, but A craves D (from the first goal). So \"Feast A from D\" requires:\n\n- A craves D (yes),\n\n- Province A (yes),\n\n- Harmony (yes).\n\nSo that\u0027s possible.\n\nAction 8: Feast A from D.\n\nEffects:\n\n- Pain A is true,\n\n- Province D becomes true (since the other is D),\n\n- A no longer craves D (which is bad, because that was a goal),\n\n- Province A is set to false,\n\n- Harmony is set to false.\n\nHmm, but we need A to still crave D. So this would undo the first goal. Not good.\n\nAlternatively, maybe another approach. Let\u0027s see.\n\nAlternatively, to get D to have Province, perhaps we can do a Feast on someone else. Let\u0027s see.\n\nAlternatively, maybe we can attack another object to get Pain on D indirectly. Not sure.\n\nAlternatively, let\u0027s try to get D to have Pain through another action. Since D is not a province, can\u0027t attack it. Maybe through Overcome?\n\nWait, Overcome requires the other to have Province and the object to have Pain. To get D to have Pain, perhaps we can do Overcome on D from someone else, but that would require D has Pain already, which is not the case.\n\nHmm, this is getting complicated. Let\u0027s think differently.\n\nWe need D to crave B. To do that, Overcome D from B requires D has Pain. To get Pain on D, perhaps we can attack D, but need Province D. To get Province D, maybe through Feast.\n\nWait, let\u0027s try another path. Let\u0027s see if we can get D to have Province via Feast.\n\nSuppose we do Feast on C from D. But C currently does not crave D. The initial craving was C craves B, but after Feast C from B, that craving was removed. So unless there\u0027s another craving, we can\u0027t do that.\n\nAlternatively, maybe we can get D to crave someone else first, then Feast.\n\nAlternatively, perhaps we can do Overcome on D from another object to get D to crave that object, but that would require Pain on D, which we can\u0027t get yet.\n\nHmm. 
Maybe we need to backtrack and see if there\u0027s a different path.\n\nLet me recap the current state after the seventh action (Overcome B from C):\n\n- A craves D (good),\n\n- B craves C (good),\n\n- Need D to crave B.\n\nCurrent provinces: A, B.\n\nPlanets: C and D.\n\nHarmony is true.\n\nD\u0027s province is still false.\n\nTo get D to crave B, need Overcome D from B. To do that, D must have Pain. To get Pain on D, need to attack D, but need Province D. To get Province D, perhaps via Feast.\n\nWait, let\u0027s try to get D to have Province via Feast. Let\u0027s see:\n\nSuppose we do Feast on someone else who craves D. But currently, no one craves D except maybe A, but A\u0027s craving for D was achieved through Overcome, but after that, if we do a Feast on A from D, that would remove the craving. Not good.\n\nAlternatively, maybe we can first get someone to crave D, then Feast them to get D\u0027s province.\n\nWait, to get someone to crave D, perhaps Overcome on that someone from D. But that requires Pain on the someone and D has Province. D\u0027s province is false, so can\u0027t do that.\n\nHmm. Maybe we need to first get D to have Province through another method.\n\nAlternatively, maybe we can do a Feast on D from someone else, but D can\u0027t crave anything unless we set that up.\n\nAlternatively, perhaps we can do a Feast on C from D, but C doesn\u0027t crave D. To make C crave D, we can do Overcome on C from D. But that requires Pain on C and D has Province. D\u0027s province is false, so no.\n\nAlternatively, let\u0027s try to get D\u0027s province via Succumb. To do that, D must have Pain. To get Pain on D, perhaps through an attack on another object that has D as a planet? Not sure.\n\nWait, the only way to get Pain on D is to attack D, but that requires D to be a province. So it\u0027s a loop.\n\nHmm. Maybe I need to try a different approach earlier in the plan.\n\nLet me think again. Let\u0027s see the third goal: D craves B. To get that, Overcome D from B requires D has Pain. To get Pain on D, need to attack D. To attack D, need Province D, Planet D (which it has), and Harmony.\n\nSo to get Province D, perhaps we can do a Feast on someone else that results in D gaining province. Let\u0027s see:\n\nSuppose we do Feast on A from D. But that requires A craves D (which it does), Province A (yes), Harmony (yes). So that\u0027s possible. Let\u0027s try that as an earlier step.\n\nWait, but if we do that before achieving the first goal, it would remove A\u0027s craving for D. So maybe we need to do that after getting the first goal.\n\nWait, let\u0027s try modifying the plan:\n\nAfter achieving the first two goals (A craves D and B craves C), we need to handle D\u0027s craving.\n\nWait, let\u0027s see the current state after the seventh action (Overcome B from C):\n\n- A craves D (good),\n\n- B craves C (good),\n\n- D\u0027s province is false,\n\n- Harmony is true,\n\n- Planets: C and D.\n\nSo to get D to crave B, we need Overcome D from B. To do that, D must have Pain. To get Pain on D, need to attack D. To attack D, need Province D. To get Province D, perhaps via Feast.\n\nLet me try:\n\nAction 8: Feast A from D.\n\nWait, but that would require A craves D (yes), Province A (yes), Harmony (yes). So yes.\n\nAfter this Feast:\n\n- Pain A is true,\n\n- Province D becomes true,\n\n- A no longer craves D (bad),\n\n- Province A is false,\n\n- Harmony is false.\n\nThis undoes the first goal (A craves D), which is not good. So that\u0027s a problem.\n\nHmm. 
Maybe instead of that, we can do a Feast on another object. Let\u0027s see.\n\nWhat if we do Feast on C from D? But C doesn\u0027t crave D. To make C crave D, we can do Overcome on C from D. But that requires Pain on C and D has Province (which it doesn\u0027t). So not possible.\n\nAlternatively, maybe we can first get D to have Province via another path.\n\nWait, perhaps we can do a Feast on D from someone else. But D can\u0027t crave anything unless we set that up.\n\nAlternatively, maybe we can do a Feast on B from D. But B doesn\u0027t crave D. To make B crave D, we can do Overcome B from D. But that requires Pain on B and D has Province (no).\n\nHmm. This is tricky. Maybe I need to find another way to get D to have Pain without losing the first goal.\n\nAlternatively, maybe we can do a Feast on another object that doesn\u0027t affect the first goal. Let\u0027s see.\n\nWait, what about using the Overcome action on D from another object that has Province. Let\u0027s see:\n\nSuppose we do Overcome D from A. The preconditions are:\n\n- Province of the other (A) must be true (yes, currently A\u0027s province is true? Wait, after the seventh action (Overcome B from C), let\u0027s check A\u0027s province.\n\nAfter Overcome A from D (action 2), Province A was set to true. Then, after Feast C from B (action 3), Province A wasn\u0027t affected. Then, after Succumb C (action5), Province A remained true. After Attack B (action6), Province A still true. After Overcome B from C (action7), Province A is still true. So yes, A has Province.\n\nSo Overcome D from A requires:\n\n- Province of the other (A) is true (yes),\n\n- Pain of D must be true (no).\n\nSo can\u0027t do that yet.\n\nHmm. Maybe I need to find a way to get Pain on D without attacking it. Alternatively, perhaps through another Overcome.\n\nAlternatively, maybe we can do a Feast on D from someone else, but that requires D to crave that someone. Let\u0027s see.\n\nAlternatively, perhaps we can first get D to crave B through another path. Let\u0027s think of the Overcome action again.\n\nWait, the Overcome action\u0027s effect is that the object (first parameter) craves the other (second). So to get D to crave B, we need Overcome D from B. The preconditions are:\n\n- Province of the other (B) is true (yes),\n\n- Pain of D must be true.\n\nSo to get Pain on D, perhaps we can attack D, but need Province D. To get Province D, maybe via Feast on someone else who craves D. But no one does yet.\n\nAlternatively, maybe we can first get someone to crave D, then Feast them to get D\u0027s province.\n\nLet me try:\n\nAction 8: Overcome C from D. Wait, but to do that, the preconditions are:\n\n- Province of the other (D) must be true (no),\n\n- Pain of C must be true (C\u0027s Pain? Let\u0027s see current state.\n\nAfter the seventh action (Overcome B from C), Pain B was removed, but C\u0027s Pain? Let\u0027s see:\n\nAfter Feast C from B (action3), Pain C was true. Then, after Succumb C (action5), Pain C was removed. So currently, Pain C is false.\n\nSo can\u0027t do Overcome C from D because Pain C is false and D\u0027s province is false.\n\nHmm.\n\nAlternatively, maybe we can do a Feast on B from D. But B doesn\u0027t crave D. To make B crave D, we can do Overcome B from D. But that requires Pain on B and D has Province (no).\n\nThis is getting too tangled. Maybe I need to try a different approach in the plan.\n\nLet me think of the third goal again: D craves B. To get that, Overcome D from B requires Pain on D. To get Pain on D, need to attack D. 
To attack D, need Province D. To get Province D, perhaps via Feast on someone who craves D. But no one does yet. So maybe first, we need to make someone crave D, then Feast them to get D\u0027s province, then attack D to get Pain, then Overcome.\n\nLet\u0027s try this path:\n\nFirst, to make someone crave D, perhaps Overcome on that someone from D. But that requires Pain on them and D has Province (which it doesn\u0027t). So not possible.\n\nAlternatively, maybe we can do a Feast on someone else to get D\u0027s province, then attack.\n\nWait, let\u0027s try:\n\nAction 8: Feast A from D. But that would remove A\u0027s craving for D. Not good. But maybe we can do this and then fix it later.\n\nSo:\n\nAction 8: Feast A from D.\n\nEffects:\n\n- Pain A is true,\n\n- Province D becomes true,\n\n- A no longer craves D,\n\n- Province A is false,\n\n- Harmony is false.\n\nNow, D has Province. Now, we can attack D.\n\nAction9: Attack D.\n\nPreconditions: Province D (yes), Planet D (yes), Harmony (no). Wait, Harmony is false here. So can\u0027t attack D yet.\n\nHmm. Need Harmony. To get Harmony, we can Succumb someone with Pain. Currently, Pain A is true (from Feast). So:\n\nAction10: Succumb A.\n\nPreconditions: Pain A (yes).\n\nAfter Succumb A:\n\n- Province A is true,\n\n- Planet A is true,\n\n- Harmony is true,\n\n- Pain A is false.\n\nNow, Harmony is back. Now, can attack D.\n\nAction11: Attack D.\n\nPreconditions: Province D (yes), Planet D (yes), Harmony (yes).\n\nAfter Attack D:\n\n- Pain D is true,\n\n- Province D becomes false,\n\n- Planet D becomes false,\n\n- Harmony becomes false.\n\nNow, Pain D is true. Now, we can do Overcome D from B.\n\nAction12: Overcome D from B.\n\nPreconditions:\n\n- Province of the other (B) is true (yes),\n\n- Pain of D is true (yes).\n\nSo yes.\n\nEffects:\n\n- Harmony becomes true,\n\n- Province D is set to true (since D is the object, and the other is B),\n\n- D craves B (good),\n\n- Province B is set to false,\n\n- Pain D is false.\n\nWait, the effects of Overcome are:\n\n- True: Harmony, Province object (D), Object (D) craves other (B).\n\n- False: Province other (B), Pain object (D).\n\nSo after this:\n\n- Harmony is true,\n\n- Province D is true,\n\n- D craves B (good),\n\n- Province B is now false,\n\n- Pain D is false.\n\nNow, we have achieved the third goal (D craves B). But now, we have to check the other goals.\n\nThe first goal was A craves D, but after Feast A from D (action8), that was removed. So we need to fix that.\n\nHmm, so this path undoes the first goal. Not good. So this approach is problematic.\n\nHmm. Maybe there\u0027s another way. Let\u0027s think again.\n\nAlternative approach: After achieving the first two goals (A craves D and B craves C), we need to get D to crave B without affecting the first goal.\n\nLet me see the state after the seventh action (Overcome B from C):\n\n- A craves D (good),\n\n- B craves C (good),\n\n- D\u0027s province is false,\n\n- Harmony is true,\n\n- Planets: C and D.\n\nTo get D to crave B, need Overcome D from B. To do that, D must have Pain. To get Pain on D, need to attack D. To attack D, need Province D. To get Province D, perhaps via Feast on someone who cr",
"instance_id": "4_1",
"model_name": "Qwen/QwQ-32B",
"query": "I am playing with a set of objects. Here are the actions I can do\n\n Attack object\n Feast object from another object\n Succumb object\n Overcome object from another object\n\nI have the following restrictions on my actions:\n To perform Attack action, the following facts need to be true: Province object, Planet object, Harmony.\n Once Attack action is performed the following facts will be true: Pain object.\n Once Attack action is performed the following facts will be false: Province object, Planet object, Harmony.\n To perform Succumb action, the following facts need to be true: Pain object.\n Once Succumb action is performed the following facts will be true: Province object, Planet object, Harmony. \n Once Succumb action is performed the following facts will be false: Pain object.\n To perform Overcome action, the following needs to be true: Province other object, Pain object.\n Once Overcome action is performed the following will be true: Harmony, Province object, Object Craves other object.\n Once Overcome action is performed the following will be false: Province other object, Pain object.\n To perform Feast action, the following needs to be true: Object Craves other object, Province object, Harmony.\n Once Feast action is performed the following will be true: Pain object, Province other object.\n Once Feast action is performed the following will be false:, Object Craves other object, Province object, Harmony.\n\nHere is an example problem:\n\n[STATEMENT]\nAs initial conditions I have that, province Block B, province Block C, harmony, Block C craves Block A, planet Block A, planet Block B.\nMy goal is to have that Block A craves Block C and Block B craves Block A.\n\nMy plan is as follows:\n\n[PLAN]\nfeast Block C from Block A\nsuccumb Block C\nattack Block A\novercome Block A from Block C\nattack Block B\novercome Block B from Block A\n[PLAN END]\n\nHere is another example:\n\n[STATEMENT]\nAs initial conditions I have that, province Block B, harmony, Block A craves Block D, Block B craves Block C, Block C craves Block A, planet Block D.\nMy goal is to have that Block C craves Block D, Block A craves Block C and Block B craves Block A.\n\nMy plan is as follows:\n\n[PLAN]\nfeast Block B from Block C\nsuccumb Block B\nfeast Block C from Block A\nsuccumb Block C\nfeast Block A from Block D\nsuccumb Block A\nattack Block C\novercome Block C from Block D\nattack Block A\novercome Block A from Block C\nattack Block B\novercome Block B from Block A\n[PLAN END]\nHere is the problem to solve:\n\n[STATEMENT]\nAs initial conditions I have that: Block C craves Block B, harmony, planet Block A, planet Block B, planet Block D, province Block A, province Block C and province Block D.\nMy goal is for the following to be true: Block A craves Block D, Block B craves Block C and Block D craves Block B.\n\nMy plan is as follows:"
}
```
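
The row above pairs a planning prompt (`query`) with a long model trace (`generation`). Purely as an illustrative sketch, the action rules quoted in the `query` field can be written as set-based state transitions; the fact-string encoding below is an assumption for readability, not something defined by the dataset itself.

```python
# Minimal propositional sketch of the action rules stated in the `query` field.
# Facts are plain strings kept in a set; the naming scheme is an assumed encoding.

def attack(state, x):
    # Requires: province x, planet x, harmony.  Adds pain x; removes the three preconditions.
    assert {f"province {x}", f"planet {x}", "harmony"} <= state
    return (state - {f"province {x}", f"planet {x}", "harmony"}) | {f"pain {x}"}

def succumb(state, x):
    # Requires: pain x.  Restores province x, planet x, harmony; removes pain x.
    assert f"pain {x}" in state
    return (state - {f"pain {x}"}) | {f"province {x}", f"planet {x}", "harmony"}

def overcome(state, x, other):
    # Requires: province other, pain x.  Adds harmony, province x, "x craves other";
    # removes province other and pain x.
    assert {f"province {other}", f"pain {x}"} <= state
    return (state - {f"province {other}", f"pain {x}"}) | {
        "harmony", f"province {x}", f"{x} craves {other}"}

def feast(state, x, other):
    # Requires: x craves other, province x, harmony.  Adds pain x and province other;
    # removes the craving, province x, and harmony.
    assert {f"{x} craves {other}", f"province {x}", "harmony"} <= state
    return (state - {f"{x} craves {other}", f"province {x}", "harmony"}) | {
        f"pain {x}", f"province {other}"}
```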
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("dmitriihook/qwq-32b-planning-mystery", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("dmitriihook/qwq-32b-planning-mystery")
```
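
For a quick look at individual rows — assuming the usual `train` split and the fields shown in the example above, such as `query` and `generation` — something like the following should work:

```python
from datasets import load_dataset

# Load the default configuration (a "train" split is assumed here).
ds = load_dataset("dmitriihook/qwq-32b-planning-mystery")

example = ds["train"][0]
# Print the planning prompt and the start of the model's generation.
print(example["query"][:500])
print(example["generation"][:500])
```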
</details>
|
mlfoundations-dev/scale_up_science_25K | mlfoundations-dev | 2025-03-12T00:51:07Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:50:49Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 726700099
num_examples: 25002
download_size: 363148840
dataset_size: 726700099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/scale_up_science_2K | mlfoundations-dev | 2025-03-12T00:51:02Z | 13 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:50:49Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 58131357.41140709
num_examples: 2000
download_size: 29607016
dataset_size: 58131357.41140709
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/base_scale_up_science | mlfoundations-dev | 2025-03-12T00:51:01Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:41:20Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 726700099
num_examples: 25002
download_size: 363148840
dataset_size: 726700099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/scale_up_science_1K | mlfoundations-dev | 2025-03-12T00:51:00Z | 11 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:50:49Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 29065678.705703545
num_examples: 1000
download_size: 14816353
dataset_size: 29065678.705703545
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/scale_up_science_4K | mlfoundations-dev | 2025-03-12T00:50:58Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:50:50Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 116262714.82281418
num_examples: 4000
download_size: 58413798
dataset_size: 116262714.82281418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rd211/eth-mathdial-conversations | rd211 | 2025-03-12T00:49:19Z | 87 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-27T23:13:11Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: ground_truth
dtype: string
- name: student_incorrect_solution
dtype: string
- name: student_profile
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: student_name
dtype: string
splits:
- name: train
num_bytes: 8955666
num_examples: 2261
- name: test
num_bytes: 2338595
num_examples: 599
download_size: 3746870
dataset_size: 11294261
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_code_worst_8 | mlfoundations-dev | 2025-03-12T00:45:55Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:45:53Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
passionMan/dataset_seed_evol_cherry206 | passionMan | 2025-03-12T00:40:56Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:40:52Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 7844357
num_examples: 3917
download_size: 3830834
dataset_size: 7844357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Argenis29/Adam | Argenis29 | 2025-03-12T00:25:29Z | 16 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-12T00:25:29Z | null | ---
license: apache-2.0
---
|
hanaearg/ARSaSDataOk | hanaearg | 2025-03-12T00:23:38Z | 13 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:23:34Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 8088700
num_examples: 15368
- name: test
num_bytes: 2047861
num_examples: 3842
download_size: 2156422
dataset_size: 10136561
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
nc33/data_tts_clean_2 | nc33 | 2025-03-12T00:21:18Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:21:10Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: description
dtype: string
- name: wer
dtype: float64
- name: hsc
dtype: float32
splits:
- name: train
num_bytes: 164212699.0
num_examples: 120
download_size: 159291526
dataset_size: 164212699.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
esfrankel17/original_ChatbotArena55k_binarized_Qwen2.5-14B-Instruct_preferences | esfrankel17 | 2025-03-12T00:16:01Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:15:50Z | null | ---
dataset_info:
features:
- name: original_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_chosen_rating
dtype: int64
- name: original_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_rejected_rating
dtype: int64
- name: model_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_agreed_with_original
dtype: bool
splits:
- name: winner
num_bytes: 276534512
num_examples: 36997
download_size: 147708930
dataset_size: 276534512
configs:
- config_name: default
data_files:
- split: winner
path: data/winner-*
---
|
rzgar/kurdish_sorani_wikipedia_gemma_2b_it | rzgar | 2025-03-12T00:03:57Z | 57 | 0 | [
"language:ku",
"license:cc-by-sa-4.0",
"modality:text",
"region:us"
] | [] | 2025-02-27T15:22:01Z | null | ---
license: cc-by-sa-4.0
language:
- ku
pretty_name: Lang CKB Gemma-2b-it
---
# Kurdish Sorani Wikipedia Text Dataset for Gemma-2-2b-it Fine-Tuning
## Dataset Description
This dataset contains Kurdish Sorani text extracted from Wikipedia, preprocessed and formatted for fine-tuning the `google/gemma-2-2b-it` language model. The goal of this dataset is to improve the model's ability to generate fluent, grammatically correct, and contextually relevant Kurdish Sorani text, particularly in the style of Wikipedia articles.
## Dataset Statistics
* **Number of lines:** 52,430
* **Total characters:** 99,080,311
* **Language:** Kurdish Sorani (Central Kurdish)
* **Source:** Wikipedia
* **Format:** Plain text, preprocessed with Gemma-compatible templates
## Data Collection and Preprocessing
The dataset was created through the following steps:
1. **Data Extraction:** Kurdish Sorani text was scraped directly from Wikipedia articles.
* List of URLs: [rzgar/ckb_wikipedia_kurdish_sorani_urls](https://huggingface.co/datasets/rzgar/ckb_wikipedia_kurdish_sorani_urls)
2. **Text Cleaning:** The extracted text was cleaned to remove:
* Extra whitespace
* HTML tags (if any were present)
* Unicode Left-to-Right Marks
* Citation Marks
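For illustration only, a minimal cleaning pass along these lines could look like the sketch below; the exact patterns used to build the dataset are not published, so the regular expressions here are assumptions rather than the original implementation.
```python
import re
def clean_text(text: str) -> str:
    """Rough approximation of the cleaning steps listed above."""
    text = re.sub(r"<[^>]+>", " ", text)       # drop any leftover HTML tags
    text = text.replace("\u200e", "")          # remove Unicode Left-to-Right Marks
    text = re.sub(r"\[\d+\]", "", text)        # strip citation marks such as [12]
    return re.sub(r"\s+", " ", text).strip()   # collapse extra whitespace
print(clean_text("وتار\u200e[1] <b>test</b>   text"))
```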
## Licensing Information
This dataset is shared under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/). Users are free to use, adapt, and share the data, provided attribution is given to the source (CKB WikiPedia) and derivative works are shared under the same license.
## Acknowledgements
This dataset was created with the help of open-source tools and resources, including:
* Hugging Face Transformers library
* NLTK (Natural Language Toolkit)
* We gratefully acknowledge the countless contributors to Wikipedia who have dedicated their time and effort to creating, translating, editing, and fact-checking Kurdish Sorani content. Their commitment to making knowledge freely accessible is the foundation of this dataset. Without their contributions, this project would not have been possible.
|
VGraf/safety_pref_stack_dev | VGraf | 2025-03-11T23:50:39Z | 42 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:23:47Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 645031379
num_examples: 14547
download_size: 272759205
dataset_size: 645031379
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andrewbai/distilabel-intel-orca-dpo-pairs_filtered_pref-skywork-8B | andrewbai | 2025-03-11T23:44:16Z | 24 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:44:09Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 21345528
num_examples: 5922
- name: test
num_bytes: 1890252
num_examples: 500
download_size: 11963272
dataset_size: 23235780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
takara-ai/rand-1m-multimodal | takara-ai | 2025-03-11T23:33:08Z | 100 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-10T22:00:56Z | null | ---
dataset_info:
features:
- name: svg
dtype: string
- name: prompt
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 135789832903.0
num_examples: 1000000
download_size: 131282059668
dataset_size: 135789832903.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
esfrankel17/original_ChatbotArena55k_binarized_Qwen2.5-0.5B-Instruct_preferences | esfrankel17 | 2025-03-11T23:27:32Z | 34 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:27:24Z | null | ---
dataset_info:
features:
- name: original_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_chosen_rating
dtype: int64
- name: original_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_rejected_rating
dtype: int64
- name: model_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_agreed_with_original
dtype: bool
splits:
- name: winner
num_bytes: 267600836
num_examples: 36149
download_size: 142981504
dataset_size: 267600836
configs:
- config_name: default
data_files:
- split: winner
path: data/winner-*
---
|
QubitPi/wilhelm-vocabulary | QubitPi | 2025-03-11T23:15:46Z | 526 | 0 | [
"language:en",
"language:de",
"language:la",
"language:grc",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"Natural Language Processing",
"NLP",
"Vocabulary",
"German",
"Latin",
"Ancient Greek",
"Knowledge Graph"
] | [] | 2024-10-11T01:42:46Z | null | ---
license: apache-2.0
pretty_name: Wilhelm Vocabulary
language:
- en
- de
- la
- grc
configs:
- config_name: Graph Data
data_files:
- split: German
path: german-graph-data.jsonl
- split: Latin
path: latin-graph-data.jsonl
- split: AncientGreek
path: ancient-greek-graph-data.jsonl
tags:
- Natural Language Processing
- NLP
- Vocabulary
- German
- Latin
- Ancient Greek
- Knowledge Graph
size_categories:
- 1K<n<10K
---
Wilhelm Vocabulary
==================
[![Hugging Face dataset badge]][Hugging Face dataset URL]
[![Vocabulary count - German]][Docker Hub URL]
[![Vocabulary count - Latin]][Docker Hub URL]
[![Vocabulary count - Ancient Greek]][Docker Hub URL]
[![Docker Hub][Docker Pulls Badge]][Docker Hub URL]
[![GitHub workflow status badge][GitHub workflow status badge]][GitHub workflow status URL]
[![Hugging Face sync status badge]][Hugging Face sync status URL]
[![Apache License Badge]][Apache License, Version 2.0]
<!-- TOC -->
* [Wilhelm Vocabulary](#wilhelm-vocabulary)
* [Development](#development)
* [Environment Setup](#environment-setup)
* [Installing Dependencies](#installing-dependencies)
* [Data Format](#data-format)
* [Encoding Table in YAML](#encoding-table-in-yaml)
* [Data Pipeline](#data-pipeline)
* [How Data (Vocabulary) is Stored in a Graph Database](#how-data-vocabulary-is-stored-in-a-graph-database)
* [Why Graph Database](#why-graph-database)
* [Base Schema](#base-schema)
* [Languages](#languages)
* [German](#german)
* [Pronoun](#pronoun)
* [Noun](#noun)
* [Verb](#verb)
* [Ancient Greek](#ancient-greek)
* [Diacritic Mark Convention](#diacritic-mark-convention)
* [Pronoun](#pronoun-1)
* [Noun](#noun-1)
* [Adjective](#adjective)
* [1. Three-Ending Adjectives: 1st and 2nd Declension (2-1-2)](#1-three-ending-adjectives-1st-and-2nd-declension-2-1-2)
* [2. Two-Ending 2nd Declension Adjectives (2-2)](#2-two-ending-2nd-declension-adjectives-2-2)
* [3. Two-Ending 3rd Declension Adjectives (3-3)](#3-two-ending-3rd-declension-adjectives-3-3)
* [4. Three-Ending 1st and 3rd Declension Adjectives (3-1-3)](#4-three-ending-1st-and-3rd-declension-adjectives-3-1-3)
* [Declension Template](#declension-template)
* [Verb Conjugation](#verb-conjugation)
* [Latin](#latin)
* [Classical Hebrew](#classical-hebrew)
* [Classical Sanskrit](#classical-sanskrit)
* [Connection between Hebrew and Sanskrit](#connection-between-hebrew-and-sanskrit)
* [Korean](#korean)
* [License](#license)
<!-- TOC -->
__wilhelm-vocabulary__ is the data source used for the flashcard contents on [wilhelmlang.com]. Specifically, it's a
datasource made manually from the accumulation of the daily language studies of [myself](https://github.com/Qubitpi):
- [German](./german.yaml)
- [Latin](./latin.yaml)
- [Ancient Greek](./ancient-greek.yaml)
The data is available on 🤗 [Hugging Face Datasets][Hugging Face dataset URL]
```python
from datasets import load_dataset
dataset = load_dataset("QubitPi/wilhelm-vocabulary")
```
> [!TIP]
>
> If `dataset = load_dataset("QubitPi/wilhelm-vocabulary")` throws an error, please upgrade the `datasets` package to
> its _latest version_
In addition, a Docker image has been made to allow us to explore the vocabulary in the Neo4J browser, backed by a Neo4J
database. To get the image and run the container, simply do:
```console
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--env=NEO4J_AUTH=none \
--env=NEO4J_ACCEPT_LICENSE_AGREEMENT=yes \
-e NEO4JLABS_PLUGINS=\[\"apoc\"\] \
--env NEO4J_browser_remote__content__hostname__whitelist=https://raw.githubusercontent.com \
--env NEO4J_browser_post__connect__cmd="style https://raw.githubusercontent.com/QubitPi/wilhelm-vocabulary/refs/heads/master/graphstyle.grass" \
jack20191124/wilhelm-vocabulary
```
> [!NOTE]
>
> The image is based on Neo4J Enterprise 5.23.0.
- When the container starts, access Neo4J through the browser at http://localhost:7474
- Both __bolt://__ and __neo4j://__ protocols are fine.
- Choose __No authentication__ for _Authentication type_
- Then hit __Connect__ as shown below

We have offered some queries that can be used to quickly explore the vocabulary in graph representations:
- Search for all Synonyms: `MATCH (term:Term)-[r]-(synonym:Term) WHERE r.name = "synonym" RETURN term, r, synonym`
- Finding all [gerunds](https://en.wiktionary.org/wiki/Appendix:Glossary#gerund):
`MATCH (source)-[link:RELATED]->(target) WHERE link.name = "gerund of" RETURN source, link, target;`
- Expanding a word "nämlich" (reveals its relationship to other languages):
```cypher
MATCH (term:Term{label:'nämlich'})
CALL apoc.path.expand(term, "LINK", null, 1, 3)
YIELD path
RETURN path, length(path) AS hops
ORDER BY hops;
```

- In German, "rice" and "travel" are related:
```cypher
MATCH (term:Term{label:'die Reise'})
CALL apoc.path.expand(term, "LINK", null, 1, 3)
YIELD path
RETURN path, length(path) AS hops
ORDER BY hops;
```

- `MATCH (term:Term{label:'die Schwester'}) CALL apoc.path.expand(term, "LINK", null, 1, -1) YIELD path RETURN path, length(path) AS hops ORDER BY hops;`
- How German, Latin, and Ancient Greek express the conjunction "but":
```cypher
MATCH (node{label:"δέ"})
CALL apoc.path.expand(node, "LINK", null, 1, 4)
YIELD path
RETURN path, length(path) AS hops
ORDER BY hops;
```

Development
-----------
### Environment Setup
Get the source code:
```console
git clone git@github.com:QubitPi/wilhelm-vocabulary.git
cd wilhelm-vocabulary
```
It is strongly recommended to work in an isolated environment. Install virtualenv and create an isolated Python
environment by
```console
python3 -m pip install --user -U virtualenv
python3 -m virtualenv .venv
```
To activate this environment:
```console
source .venv/bin/activate
```
or, on Windows
```console
.venv\Scripts\activate
```
> [!TIP]
>
> To deactivate this environment, use
>
> ```console
> deactivate
> ```
### Installing Dependencies
```console
pip3 install -r requirements.txt
```
### Data Format
The raw data is written in YAML format, because
1. it is machine-readable so that it can be consumed quickly in data pipelines
2. it is human-readable and, thus, easy to read and modify
3. it supports multi-lines value which is very handy for language data
The YAML data files are
- [german.yaml](./german.yaml)
- [latin.yaml](./latin.yaml)
- [ancient-greek.yaml](./ancient-greek.yaml)
These YAML files are then [transformed](huggingface/generate_datasets.py) to Hugging Face Datasets formats in
[CI/CD](https://github.com/QubitPi/wilhelm-vocabulary/actions/workflows/ci-cd.yaml)
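For a rough idea of what that transformation involves, the sketch below loads one of the YAML files and wraps it in a Hugging Face `Dataset`; the actual [generate_datasets.py](huggingface/generate_datasets.py) script may handle more fields and metadata, so treat this as an illustrative assumption rather than the real pipeline.
```python
import yaml
from datasets import Dataset
def yaml_to_dataset(path: str) -> Dataset:
    """Read a wilhelm-vocabulary YAML file and expose term/definition pairs."""
    with open(path, encoding="utf-8") as f:
        vocabulary = yaml.safe_load(f)["vocabulary"]
    rows = []
    for entry in vocabulary:
        definition = entry["definition"]
        if not isinstance(definition, list):  # definitions may be a string or a list
            definition = [definition]
        rows.append({"term": entry["term"], "definition": definition})
    return Dataset.from_list(rows)
german = yaml_to_dataset("german.yaml")
print(german[0])
```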
### Encoding Table in YAML
To encode the inflections which are common in most Indo-European languages, an
[application-specific YAML](https://stackoverflow.com/q/30894438/14312712) format that looks like the following is employed
throughout this repository:
```yaml
- term: der Gegenstand
definition:
- object
- thing
declension:
- ["", singular, plural ]
- [nominative, Gegenstand, Gegenstände ]
- [genitive, "Gegenstandes, Gegenstands", Gegenstände ]
- [dative, Gegenstand, Gegenständen]
- [accusative, Gegenstand, Gegenstände ]
```
> [!NOTE]
>
> - A list under `declension` is a table row
> - All rows have the same number of columns
> - Each element of the list corresponds to a table cell
The declension (inflection) table above is equivalent to
<table><tbody>
<tr>
<td></td>
<td>singular</td>
<td>plural</td>
</tr>
<tr>
<td>nominative</td>
<td>Gegenstand</td>
<td>Gegenstände</td>
</tr>
<tr>
<td>genitive</td>
<td>Gegenstandes, Gegenstands</td>
<td>Gegenstände</td>
</tr>
<tr>
<td>dative</td>
<td>Gegenstand</td>
<td>Gegenständen</td>
</tr>
<tr>
<td>accusative</td>
<td>Gegenstand</td>
<td>Gegenstände</td>
</tr>
</tbody>
</table>
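Because every row of a `declension` list shares the same column layout, the encoded table can be read back programmatically. The following sketch is not part of the repository; it simply pairs each cell with its row and column headers:
```python
import yaml
entry = yaml.safe_load("""
term: der Gegenstand
declension:
  - ["", singular, plural]
  - [nominative, Gegenstand, Gegenstände]
  - [genitive, "Gegenstandes, Gegenstands", Gegenstände]
  - [dative, Gegenstand, Gegenständen]
  - [accusative, Gegenstand, Gegenstände]
""")
header, *rows = entry["declension"]
for row in rows:
    for number, form in zip(header[1:], row[1:]):
        print(f"{row[0]:<10} {number:<9} {form}")
```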
Data Pipeline
-------------
### How Data (Vocabulary) is Stored in a Graph Database
#### Why Graph Database
Graph data representation assumes universal connectivity among world entities. This applies pretty well to the realm of
languages. Multilanguage learners have already seen that Indo-European languages are similar in many aspects. The
similarities not only signify the historical facts about Philology but also surface a great opportunity for
multilanguage learners to take advantage of them and study much more efficiently. What's missing is connecting the dots
using a Graph Database that visually presents these vastly enlightening links between the related languages in a natural
way.
#### Base Schema
```yaml
vocabulary:
- term: string
definition: list
audio: string
```
The `audio` field is a URL that points to a `.mp3` or `.ogg` file that contains the pronunciation of this word.
_The meaning of a word is called the `definition`_. A term has a natural relationship to its definition(s). For example,
the German noun "[Ecke](https://en.wiktionary.org/wiki/Ecke#Noun)" has at least 4 definitions:

<div align="center">
Graph data generated by <a href="https://github.com/QubitPi/wilhelm-data-loader">wilhelm-data-loader</a>
</div>
> [!TIP]
>
> The parenthesized value at the beginning of each `definition` item plays a non-negligible role: it is the label of the
> relationship between `term` and `definition` in the graph database dumped by the
> [data loader](https://github.com/QubitPi/wilhelm-data-loader). For example, both German words
>
> ```yaml
> - term: denn
> definition:
> - (adv.) then, thus
> - (conj.) because
> ```
>
> and
>
> ```yaml
> - term: nämlich
> definition:
> - (adj.) same
> - (adv.) namely
> - (adv.) because
> ```
>
> can mean "because" acting as different types. This is visualized as follows:
>
> 
>
> __Visualizing synonyms this way presents a big advantage to the human brain__, which is exceedingly good at memorizing
> patterns
Languages
---------
### [German](./german.yaml)
#### Pronoun
The declension table of a pronoun follows:
```yaml
declension:
- ["", masclune, feminine, neuter, plural]
- [nominative, ████████, ████████, ██████, ██████]
- [genitive, ████████, ████████, ██████, ██████]
- [dative, ████████, ████████, ██████, ██████]
- [accusative, ████████, ████████, ██████, ██████]
```
#### Noun
`term` with a _definite article_ of `der`/`die`/`das` signifies a __noun__ which has the entry format with the
declension table of the following template:
```yaml
- term:
definition:
audio:
declension:
- ["", singular, plural]
- [nominative, ████████, ██████]
- [genitive, ████████, ██████]
- [dative, ████████, ██████]
- [accusative, ████████, ██████]
```
For example:
```yaml
- term: das Gespräch
definition: the conversation
audio: https://upload.wikimedia.org/wikipedia/commons/f/f5/De-Gespr%C3%A4ch.ogg
declension:
- ["", singular, plural ]
- [nominative, Gespräch, Gespräche ]
- [genitive, "Gespräches, Gesprächs", Gespräche ]
- [dative, Gespräch, Gesprächen]
- [accusative, Gespräch, Gespräche ]
```
> [!TIP]
>
> __The declension tables for nouns are almost all sourced from
> [Wiktionary](https://en.wiktionary.org/wiki/Kaufmann#Declension)__ and a small number (if not present in Wiktionary) from
> [Verbformen](https://www.verbformen.com/)
> [!CAUTION]
>
> [Adjectival nouns](https://en.wikibooks.org/wiki/German/Grammar/Nouns/Adjectival_Nouns), however, do NOT follow the
> template above but employs the following template:
>
> ```yaml
> declension:
> strong:
> - ["", singular, plural]
> - [nominative, ████████, ██████]
> - [genitive, ████████, ██████]
> - [dative, ████████, ██████]
> - [accusative, ████████, ██████]
> weak:
> - ["", singular, plural]
> - [nominative, ████████, ██████]
> - [genitive, ████████, ██████]
> - [dative, ████████, ██████]
> - [accusative, ████████, ██████]
> mixed:
> - ["", singular, plural]
> - [nominative, ████████, ██████]
> - [genitive, ████████, ██████]
> - [dative, ████████, ██████]
> - [accusative, ████████, ██████]
> ```
#### Verb
The conjugation is the inflection paradigm for a German verb. Entries with a `conjugation` field denote a __verb__; their
definitions also begin with the _infinitive form_, i.e. "to ..."
[verbformen.com] was chosen because of the comprehensive inflection information it provides for German vocabulary.
There are __3__ persons, __2__ numbers, and __4__ moods (indicative, conditional, imperative and subjunctive) to
consider in conjugation. There are __6__ tenses in German: the present and past are conjugated, and there are four
compound tenses. There are two categories of verbs in German:
[weak and strong](https://en.wikipedia.org/wiki/Germanic_strong_verb)[^1]. In addition,
[strong verbs are grouped into 7 "classes"](https://en.wikipedia.org/wiki/Germanic_strong_verb#Strong_verb_classes)
The conjugation table of a German verb on Wiktionary is hard to interpret for a German beginner.
[Netzverb Dictionary](https://www.verbformen.com/) is the best German dictionary _targeting the vocabulary inflections_.
[Search for "aufwachsen"](https://www.verbformen.com/?w=aufwachsen) and we will see much more intuitive conjugation
tables listed.
This pretty much serves our needs, but what sets Netzverb apart from other alternatives is that _every_ verb comes
with
1. [A printable version that looks much better than the browser's Control+P export](https://www.verbformen.com/conjugation/aufwachsen.pdf)
- There is also a "Sentences with German verb aufwachsen" section with a
[link](https://www.verbformen.com/conjugation/examples/aufwachsen.htm) that offer a fruitful number of conjugated
examples getting us familiar with the inflections of the verb
2. [An on-the-fly generated flashcard sheet](https://www.verbformen.com/conjugation/worksheets-exercises/lernkarten/aufwachsen.pdf)
which allows us to make a better usage of our random free time
3. [A YouTube video that offers audios of almost every conjugated form](https://www.youtube.com/watch?v=LCtUrSn030A),
which helps with pronunciations a lot
The entry for a German verb, hence, has an extra `verbformen` field that includes the links to the 3 pieces of
information above
```yaml
- term:
definition:
audio:
verbformen:
video:
conjugation:
flashcards:
```
For example:
```yaml
- term: aufwachsen
definition: to grow up
audio: https://upload.wikimedia.org/wikipedia/commons/f/f0/De-aufwachsen.ogg
verbformen:
video: https://youtu.be/LCtUrSn030A
conjugation: https://www.verbformen.com/conjugation/aufwachsen.pdf
flashcards: https://www.verbformen.com/conjugation/worksheets-exercises/lernkarten/aufwachsen.pdf
```
> [!IMPORTANT]
>
> Note that the `verbformen` field might not exist for some verbs, and any of its sub-fields can be missing due to the
> limited number of verbs on record at [verbformen.com]
### [Ancient Greek](./ancient-greek.yaml)
Unless otherwise mentioned, we are always talking about _Attic_ Greek.
> [!NOTE]
>
> Ancient Greek vocabulary come from the following sources
>
> - [Greek Core Vocabulary of Dickinson College](https://dcc.dickinson.edu/greek-core-list)
> - Aristotle - Logic I: Categories, On Interpretation, Prior Analytics
#### Diacritic Mark Convention
We employ the following 3 diacritic signs only in vocabulary:
1. the __acute__ (ά)
2. the __circumflex__ (ᾶ), and
3. the __grave__ (ὰ)
In fact, this is called the [_medium diacritics_](https://lsj.gr/wiki/ἀγαθός) convention, the same one used in
[Loeb Classical Library prints](https://ryanfb.xyz/loebolus/) from Harvard. Note, however, that the commonly sourced
[Wiktionary uses full diacritics](https://en.wiktionary.org/wiki/ἀγαθός#Declension), including the
[breve diacritic mark](https://en.wikipedia.org/wiki/Breve); we don't do that.
#### Pronoun
The source of pronouns and their declensions are the following
- [Greek Core Vocabulary of Dickinson College](https://dcc.dickinson.edu/greek-core-list)
- [Ancient Greek for Everyone, Pronouns: Part I](https://pressbooks.pub/ancientgreek/chapter/11/)
- [Ancient Greek for Everyone, Pronouns: Part II](https://pressbooks.pub/ancientgreek/chapter/12/)
- [Ancient Greek for Everyone, Pronouns: Part III](https://pressbooks.pub/ancientgreek/chapter/25/)
- [Ancient Greek for Everyone, Pronouns: Part IV](https://pressbooks.pub/ancientgreek/chapter/26/)
- Wiktionary
- [Greek: An Intensive Course, 2nd Revised Edition](https://pdfcoffee.com/4-hansen-hardy-quinn-gerald-m-greek-an-intensive-course-5-pdf-free.html)
- Unit 6, Section 49. The Relative Pronoun
> [!TIP]
>
> More grammar about pronouns can be found in these great articles from _Ancient Greek for Everyone_ above
The declension table of a pronoun follows:
```yaml
declension:
- ["", singular, plural]
- [nominative, ████████, ██████]
- [genitive, ████████, ██████]
- [dative, ████████, ██████]
- [accusative, ████████, ██████]
- [vocative, N/A, N/A ]
```
#### Noun
The vocabulary entry for each noun consists of its nominative and genitive forms and an article which indicates the
noun's gender, all in its `term` attribute. The English meaning(s) come as a list under the `definition` attribute. For example:
```yaml
- term: τέχνη τέχνης, ἡ
definition:
- art,
- skill,
- craft
declension class: 1st
```
The vocabulary entry above consists of the following 5 items:
1. τέχνη: nominative singular
2. τέχνης: genitive singular
3. ἡ: nominative feminine singular of the article, which shows that the gender of the noun is feminine. Gender will be
indicated by the appropriate form of the definite article "the":
   - `ὁ` for the masculine nouns
   - `ἡ` for the feminine nouns
   - `τό` for the neuter nouns
4. a list of English meanings of the word
5. the noun employs the first declension. The 3 classes of declensions are
1. first declension (`1st`)
2. second declension (`2nd`)
3. third declension (`3rd`)
The declension of the entry is not shown because to decline any noun, we can take the genitive singular, remove the
genitive singular ending to get the stem, and then add the proper set of endings to the stem based on its declension
class[^2].
For example, to decline _τέχνη τέχνης, ἡ, (art)_, take the genitive singular _τέχνης_, remove the genitive singular ending
_-ης_, and add the appropriate endings to the stem which gives following paradigm:
| Case | Singular | Plural |
|:----------:|:--------:|:-------:|
| nominative | τέχνη | τέχναι |
| genitive | τέχνης | τεχνῶν |
| dative | τέχνῃ | τέχναις |
| accusative | τέχνην | τέχνᾱς |
| vocative | τέχνη | τέχναι |
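The stem-plus-endings procedure can also be expressed as a small sketch; the first-declension ending set below is included purely as an illustrative assumption, and accent or vowel-length shifts (e.g. τεχνῶν, τέχνᾱς) are ignored:
```python
FIRST_DECLENSION_ENDINGS = {
    "nominative": ("η", "αι"),
    "genitive": ("ης", "ων"),
    "dative": ("ῃ", "αις"),
    "accusative": ("ην", "ας"),
    "vocative": ("η", "αι"),
}
def decline_first(genitive_singular: str) -> dict:
    """Strip the genitive singular ending -ης and re-attach each case ending."""
    stem = genitive_singular.removesuffix("ης")
    return {case: (stem + sg, stem + pl) for case, (sg, pl) in FIRST_DECLENSION_ENDINGS.items()}
print(decline_first("τέχνης")["nominative"])  # ('τέχνη', 'τέχναι')
```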
#### Adjective
[^6] Greek adjectives are formed using the [same 3 declensions that are used by Greek nouns](#noun-1). Furthermore, just as
each noun belongs to a particular declension, each adjective belongs to a specific declension family or grouping. There
are 4 main declension families:
1. [Three-Ending 1st and 2nd Declension Adjectives (2-1-2)](#1-three-ending-adjectives-1st-and-2nd-declension-2-1-2)
2. [Two-Ending 2nd Declension Adjectives (2-2)](#2-two-ending-2nd-declension-adjectives-2-2)
3. [Two-Ending 3rd Declension Adjectives (3-3)](#3-two-ending-3rd-declension-adjectives-3-3)
4. [Three-Ending 1st and 3rd Declension Adjectives (3-1-3)](#4-three-ending-1st-and-3rd-declension-adjectives-3-1-3)
##### 1. Three-Ending Adjectives: 1st and 2nd Declension (2-1-2)
The vast majority of adjectives use _masculine_ and _neuter_ 2nd declension endings when modifying nouns of these
genders, and 1st declension endings when modifying _feminine_ nouns. For example,
__ἀγαθός, -ή, -όν__ _good, brave, noble_:
| **Singular** | **Masculine** | **Feminine** | **Neuter** |
|:--------------:|:-------------:|:------------:|:----------:|
| **Nominative** | ἀγαθός | ἀγαθή | ἀγαθόν |
| **Genitive** | ἀγαθοῦ | ἀγαθῆς | ἀγαθοῦ |
| **Dative** | ἀγαθῷ | ἀγαθῇ | ἀγαθῷ |
| **Accusative** | ἀγαθόν | ἀγαθήν | ἀγαθόν |
| **Plural** | **Masculine** | **Feminine** | **Neuter** |
|:--------------:|:-------------:|:------------:|:----------:|
| **Nominative** | ἀγαθοί | ἀγαθαί | ἀγαθά |
| **Genitive** | ἀγαθῶν | ἀγαθῶν | ἀγαθῶν |
| **Dative** | ἀγαθοῖς | ἀγαθαῖς | ἀγαθοῖς |
| **Accusative** | ἀγαθούς | ἀγαθάς | ἀγαθά |
If the stem of the adjective ends in __-ε__, __-ι__, or __-ρ__, the singular forms of the 1st declension change the
__-η-__ to __-ᾱ-__. Note that this change matches that of 1st declension nouns.
For instance, __δίκαιος, -α , -ον__ _just_
| **Singular** | **Masculine** | **Feminine** | **Neuter** |
|:--------------:|:-------------:|:------------:|:----------:|
| **Nominative** | δίκαιος | δικαία | δίκαιον |
| **Genitive** | δικαίου | δικαίας | δικαίου |
| **Dative** | δικαίῳ | δικαίᾳ | δικαίῳ |
| **Accusative** | δίκαιον | δικαίαν | δίκαιον |
Two common adjectives of the 2-1-2 type show additional small changes:
__μέγας, μεγάλη, μέγα__ (stem: __μεγαλ-__) _big_
| **Singular** | **Masculine** | **Feminine** | **Neuter** |
|:--------------:|:-------------:|:------------:|:----------:|
| **Nominative** | μέγας | μεγάλη | μέγα |
| **Genitive** | μεγάλου | μεγάλης | μεγάλου |
| **Dative** | μεγάλῳ | μεγάλῃ | μεγάλῳ |
| **Accusative** | μέγαν | μεγάλην | μέγα |
| **Plural** | **Masculine** | **Feminine** | **Neuter** |
|:--------------:|:-------------:|:------------:|:----------:|
| **Nominative** | μεγάλοι | μεγάλαι | μεγάλα |
| **Genitive** | μεγάλων | μεγάλων | μεγάλων |
| **Dative** | μεγάλοις | μεγάλαις | μεγάλοις |
| **Accusative** | μεγάλους | μεγάλας | μεγάλα |
Note that except for the singular forms μέγας, μέγαν, and μέγα, the adjective declines as a regular 2-1-2 adjective.
__πολύς, πολλή, πολύ__ (stem: __πολλ-__) _much, many_
| **Singular** | **Masculine** | **Feminine** | **Neuter** |
|:--------------:|:-------------:|:------------:|:----------:|
| **Nominative** | πολύς | πολλή | πολύ |
| **Genitive** | πολλοῦ | πολλῆς | πολλοῦ |
| **Dative** | πολλῷ | πολλῇ | πολλῷ |
| **Accusative** | πολύν | πολλήν | πολύ |
| **Plural** | **Masculine** | **Feminine** | **Neuter** |
|:--------------:|:-------------:|:------------:|:----------:|
| **Nominative** | πολλοί | πολλαί | πολλά |
| **Genitive** | πολλῶν | πολλῶν | πολλῶν |
| **Dative** | πολλοῖς | πολλαῖς | πολλοῖς |
| **Accusative** | πολλούς | πολλάς | πολλά |
Note that except for the singular forms πολύς, πολύν, and πολύ, the adjective declines as a regular 2-1-2 adjective.
##### 2. Two-Ending 2nd Declension Adjectives (2-2)
[^7] A handful of adjectives, usually compounds, use 2nd declension endings for all genders. For these adjectives:
- both the masculine and feminine forms share the same endings as 2nd declension masculine nouns
- the neuter form shares the same endings as the 2nd declension neuter nouns.
For instance, __ἄδικος -ον__ _unjust_:
| **Singular** | **Masculine/Feminine** | **Neuter** |
|:--------------:|:----------------------:|:----------:|
| **Nominative** | ἄδικος | ἄδικον |
| **Genitive** | ἀδίκου | ἀδίκου |
| **Dative** | ἀδίκῳ | ἀδίκῳ |
| **Accusative** | ἄδικον | ἄδικον |
| **Plural** | **Masculine/Feminine** | **Neuter** |
|:--------------:|:----------------------:|:----------:|
| **Nominative** | ἄδικοι | ἄδικα |
| **Genitive** | ἀδίκων | ἀδίκων |
| **Dative** | ἀδίκοις | ἀδίκοις |
| **Accusative** | ἀδίκους | ἄδικα |
##### 3. Two-Ending 3rd Declension Adjectives (3-3)
[^7] Another small group of adjectives uses 3rd DECLENSION endings for ALL GENDERS. For these adjectives:
- both the masculine and feminine forms share the same endings as the 3rd declension masculine/feminine nouns
- the neuter form uses the same endings as the 3rd declension neuter nouns.
These adjectives tend to fall into one of 2 groups:
1. Adjectives ending in __-ης -ες__. These adjectives have a stem ending in __-εσ__.
2. Adjectives ending in __-(ί)ων -(ι)ον__. These adjectives have a stem ending in __-(ι)ον__.
##### 4. Three-Ending 1st and 3rd Declension Adjectives (3-1-3)
The final group of adjectives uses the 3rd declension endings for masculine and neuter, but the 1st declension endings
for feminine.
Note, however, that when modifying a feminine noun, these adjectives use SHORT -α- in the _nominative_ and _accusative_
singular. This change must be remembered, since it affects the accent of these adjectives. These adjectives tend to fall
into one of 2 groups:
1. Adjectives ending in __-ς -σα -ν__. These adjectives have a stem ending in __-ντ__.
2. Adjectives ending in __-ύς -εῖα -ύ__. These adjectives have a stem ending in __-ε__.
##### Declension Template
Putting it all together, it can be concluded that Ancient Greek adjectives decline according to rules with exceptions.
wilhelm-vocabulary, therefore, still literally lists all declined entries of an adjective. The declension template is as
follows:
```yaml
declension:
- ["", singular, singular, singular, dual, dual, dual plural, plural, plural]
- ["", masculine, feminine, neuter, masculine, feminine, neuter, masculine, feminine, neuter]
- [nominative, █████████, ████████, ████████, █████████, ████████, ██████, █████████, ████████, ██████]
- [genitive, █████████, ████████, ████████, █████████, ████████, ██████, █████████, ████████, ██████]
- [dative, █████████, ████████, ████████, █████████, ████████, ██████, █████████, ████████, ██████]
- [accusative, █████████, ████████, ████████, █████████, ████████, ██████, █████████, ████████, ██████]
- [vocative, █████████, ████████, ████████, █████████, ████████, ██████, █████████, ████████, ██████]
```
#### Verb Conjugation
The Greek verb has __6__ principal parts. All 6 must be learned whenever a new verb is encountered:
1. (first person singular) present indicative active
2. (first person singular) future indicative active
3. (first person singular) aorist indicative active
4. (first person singular) perfect indicative active
5. (first person singular) perfect indicative passive
6. (first person singular) aorist indicative passive
> [!TIP]
>
> The minimum number of forms which one must know in order to generate all possible forms of a verb are called the
> __principal parts__ of that verb.
From the 6 forms above, various verb forms (i.e. stems & endings) can be derived by rules[^3]
In practice, however,
[obtaining precise and complete principal parts for some verbs has proven to be impossible](https://latin.stackexchange.com/a/17432). Best efforts have
been made to find them, with URL references provided in a `references` list field for each verb entry. What's also
recorded here are the reconstructed principal parts, together with a list of references that validate the
reconstruction. The entry of a verb, thus, has the form of:
```yaml
- term: string
definition: list
conjugation:
principal parts:
- ["", Attic, (Possibly other dialects)]
- [(first person singular) present indicative active, █████, ... ]
- [(first person singular) future indicative active, █████, ... ]
- [(first person singular) aorist indicative active, █████, ... ]
- [(first person singular) perfect indicative active, █████, ... ]
- [(first person singular) perfect indicative passive, █████, ... ]
- [(first person singular) aorist indicative passive, █████, ... ]
references: list
```
For example:
```yaml
- term: λέγω
definition:
- to say, speak
- to pick up
conjugation:
wiktionary: https://en.wiktionary.org/wiki/λέγω#Verb_2
principal parts:
- ["", Attic , Koine ]
- [(first person singular) present indicative active, λέγω , λέγω ]
- [(first person singular) future indicative active, λέξω , ἐρῶ ]
- [(first person singular) aorist indicative active, ἔλεξα , εἶπον/εἶπα ]
- [(first person singular) perfect indicative active, (missing), εἴρηκα ]
- [(first person singular) perfect indicative passive, λέλεγμαι , λέλεγμαι ]
- [(first person singular) aorist indicative passive, ἐλέχθην , ἐρρέθην/ἐρρήθην]
references:
- https://en.wiktionary.org/wiki/λέγω#Inflection
- http://atticgreek.org/downloads/allPPbytypes.pdf
- https://books.openbookpublishers.com/10.11647/obp.0264/ch25.xhtml
- https://www.billmounce.com/greek-dictionary/lego
- https://koine-greek.fandom.com/wiki/Λέγω
```
### [Latin](./latin.yaml)
> [!NOTE]
> The vocabulary and declensions come from the following sources
>
> - [Latin Core Vocabulary of Dickinson College](https://dcc.dickinson.edu/latin-core-list1)
> - Wiktionary
```yaml
vocabulary:
- term: string
definition: list
```
### Classical Hebrew
> [!NOTE]
>
> Unless otherwise stated explicitly, the texts use "Hebrew" as referring to _Classical Hebrew_ only, as opposed to
> modern Hebrew
The vocabulary is presented to help read and understand [Biblical Hebrew](https://mechon-mamre.org/p/pt/pt00.htm#mp3). A
[complementary audio](https://mechon-mamre.org/p/pt/ptmp3prq.htm) helps greatly with the pronunciation.
### Classical Sanskrit
> [!NOTE]
>
> Unless otherwise stated explicitly, the texts use "Sanskrit" as referring to _Classical Sanskrit_ only, as opposed to
> Vedic Sanskrit
### Connection between Hebrew and Sanskrit
One of the reasons I study both Hebrew and Sanskrit is that they are both
[Sacred languages](https://en.wikipedia.org/wiki/Sacred_language). Not being religiously minded, I am driven by learning
the similarities between the [_Hebrew Bible_](https://mechon-mamre.org/p/pt/pt00.htm#mp3), written in its original
language, and [_Brihadaranyaka Upanishad_](https://en.wikipedia.org/wiki/Brihadaranyaka_Upanishad), written in Sanskrit.
In addition, the linguistic and historical connections of the 2 languages interest me a lot:

Although
[there is no settled agreement on a common ancestor of Indo-European and Afroasiatic language families](https://en.wikipedia.org/wiki/Indo-Semitic_languages),
the two languages, as I've been learning them, have shown amazing similarities. For example, in both Hebrew and Sanskrit,
there is no sign/character indicating the vowel __a__[^4][^5]. It is difficult to convince myself that this is sheer
coincidence! _wilhelm-vocabulary_, on Hebrew and Sanskrit, thus has another project goal - __revealing the missing
connection between the Indo-European and Afroasiatic families through a knowledge graph among the vocabularies of their
daughter languages__
### [Korean](./korean.yaml)
Chinese speakers have a natural advantage when learning Korean, and Korean itself is also a fairly simple language, so grammar and vocabulary are merged together here;
each entry consists only of a `term` (Korean) and a `definition` (Chinese),
```yaml
vocabulary:
- term: string
definition: list of strings
example:
- Korean: 제가 아무렴 그쪽 편에 서겠어요
Chinese: 我无论如何都会站在你这边
- Korean: ...
Chinese: ...
```
Memorizing the simple grammar and vocabulary takes little effort; the rest is simply practicing listening, speaking, reading, and writing over and over with Korean-subtitled shows. The example sentences under `example` all come from [native Korean material](https://www.amazon.com/Korean-book-%EB%82%98%EC%9D%98-%EC%95%84%EC%A0%80%EC%94%A8-%EC%A0%842%EA%B6%8C/dp/8933871756)
> [!NOTE]
>
> Korean does not belong to the Sino-Tibetan language family; because its own language family is very small, it cannot form enough connections with other languages, so its data is not yet stored in the graph database for data analysis
License
-------
The use and distribution terms for [wilhelm-vocabulary]() are covered by the [Apache License, Version 2.0].
[Apache License Badge]: https://img.shields.io/badge/Apache%202.0-F25910.svg?style=for-the-badge&logo=Apache&logoColor=white
[Apache License, Version 2.0]: https://www.apache.org/licenses/LICENSE-2.0
[Docker Pulls Badge]: https://img.shields.io/docker/pulls/jack20191124/wilhelm-vocabulary?style=for-the-badge&logo=docker&color=2596EC
[Docker Hub URL]: https://hub.docker.com/r/jack20191124/wilhelm-vocabulary
[Hugging Face dataset badge]: https://img.shields.io/badge/Datasets-wilhelm--vocabulary-FF9D00?style=for-the-badge&logo=huggingface&logoColor=white&labelColor=6B7280
[Hugging Face dataset URL]: https://huggingface.co/datasets/QubitPi/wilhelm-vocabulary
[Hugging Face sync status badge]: https://img.shields.io/github/actions/workflow/status/QubitPi/wilhelm-vocabulary/ci-cd.yaml?branch=master&style=for-the-badge&logo=github&logoColor=white&label=Hugging%20Face%20Sync%20Up
[Hugging Face sync status URL]: https://github.com/QubitPi/wilhelm-vocabulary/actions/workflows/ci-cd.yaml
[GitHub workflow status badge]: https://img.shields.io/github/actions/workflow/status/QubitPi/wilhelm-vocabulary/ci-cd.yaml?branch=master&style=for-the-badge&logo=github&logoColor=white&label=CI/CD
[GitHub workflow status URL]: https://github.com/QubitPi/wilhelm-vocabulary/actions/workflows/ci-cd.yaml
[verbformen.com]: https://www.verbformen.com/
[Vocabulary count - German]: https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.paion-data.dev%2Fwilhelm%2Flanguages%2Fgerman%2Fcount&query=%24%5B0%5D.count&suffix=%20Words&style=for-the-badge&logo=neo4j&logoColor=white&label=German&color=4581C3
[Vocabulary count - Latin]: https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.paion-data.dev%2Fwilhelm%2Flanguages%2Flatin%2Fcount&query=%24%5B0%5D.count&suffix=%20Words&style=for-the-badge&logo=neo4j&logoColor=white&label=Latin&color=4581C3
[Vocabulary count - Ancient Greek]: https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.paion-data.dev%2Fwilhelm%2Flanguages%2FancientGreek%2Fcount&query=%24%5B0%5D.count&suffix=%20Words&style=for-the-badge&logo=neo4j&logoColor=white&label=Ancient%20Greek&color=4581C3
[wilhelmlang.com]: https://wilhelmlang.com/
[^1]: https://en.wikipedia.org/wiki/German_verbs#Conjugation
[^2]: _[Greek: An Intensive Course, 2nd Revised Edition](https://www.amazon.com/Greek-Intensive-Course-2nd-Revised/dp/0823216632)_, Hansen & Quinn, _p.20_
[^3]: _[Greek: An Intensive Course, 2nd Revised Edition](https://www.amazon.com/Greek-Intensive-Course-2nd-Revised/dp/0823216632)_, Hansen & Quinn, _p.44_
[^4]: A. M. Ruppel, [_The Cambridge Introduction to Sanskrit_](https://trello.com/c/3kJrPbhF), Cornell University, New York, 2017, p.12
[^5]: E. Simon, L. Motzkin, I. Resnikoff, [The First Hebrew Primer: The Adult Beginner's Path to Biblical Hebrew, Third Edition](https://trello.com/c/ht2VRcf7), EKS Publishing, 1992, p.3
[^6]: https://pressbooks.pub/ancientgreek/chapter/29/
[^7]: https://pressbooks.pub/ancientgreek/chapter/30/ |
HahahaFace/PCF | HahahaFace | 2025-03-11T23:12:05Z | 12 | 1 | [
"language:zh",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [] | 2025-01-14T09:38:35Z | null | ---
license: mit
language:
- zh
- en
---
# 🌱 Tsinghua-PCF-Synergy-Dataset
**🏛️ Institution**: Tsinghua University
**🔗 Repository**: [PCF-Synergy HuggingFace](https://huggingface.co/datasets/HahahaFace/PCF/edit/main/README.md)
**📜 License**: MIT
**📅 Version**: v1.2.0 (2024-12)
## 🌍 Dataset Overview
To meet the training needs of large language models in the product carbon footprint (PCF) assessment domain, this dataset innovatively combines **knowledge-graph-constrained generation** (K-SDG) with **human feedback** (RLHF) techniques, covering full life-cycle assessment (LCA) scenarios.
## 🌐 Multi-Source Heterogeneous Data Fusion
| Data Category | Typical Examples | Share |
| ------------------------ | ------------------------------------------------ | ----- |
| International standards | ISO 14067, GHG Protocol | 18% |
| Industry databases | Ecoinvent 3.8, Exiobase 3.7 | 22% |
| Corporate practice | Apple supply-chain data, Lenovo production logs | 15% |
| Academic literature | Nature Sustainability paper datasets | 12% |
| Patent knowledge | Huawei CN114996032A, Siemens EP3267377B1 | 8% |
| Synthetic data | GPT-4 generation + knowledge-graph verification | 25% |
---
## 🧬 Data Composition Pyramid
```mermaid
graph TD
    A[Raw data sources] --> B[Knowledge-graph-constrained generation]
    B --> C[Expert manual verification]
    C --> D[LLM-augmented synthesis]
    D --> E[Multi-dimensional quality assessment]
    E --> F[C-STAR compliance certification]
```
---
## 📜 Citation
```bibtex
@dataset{tsinghua_pcf_2024,
title = {Tsinghua-PCF-Synergy Dataset},
author = {Tsinghua Environmental AI Lab},
year = {2024},
publisher = {Tsinghua University Open Data Platform},
version = {1.2},
doi = {10.1145/TSINGHUA-PCF-2024}
}
```
📧 Collaboration inquiries: zhen-li@mails.tsinghua.edu.cn
---
**✨ Key Innovations**
1. **The first PCF dataset worldwide that combines industrial practice with academic standards**
2. **Knowledge-graph-guided synthetic data generation framework (K-SDG)**
3. **A life-cycle assessment system supporting dynamic regional factor updates**
4. **A reliable data source certified by the five-dimensional C-STAR quality assessment**
**🆙 Latest Updates**
✅ Database upload completed
|
DanOgh07/eval_icl_sample | DanOgh07 | 2025-03-11T23:05:32Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:05:30Z | null | ---
dataset_info:
features:
- name: input
sequence: string
- name: target
dtype: string
splits:
- name: eval
num_bytes: 43176528
num_examples: 731
download_size: 300575
dataset_size: 43176528
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
---
|
LatinNLP/LatinSummarizer | LatinNLP | 2025-03-11T22:58:19Z | 60 | 0 | [
"task_categories:translation",
"task_categories:text-generation",
"task_categories:summarization",
"task_ids:news-articles-summarization",
"task_ids:document-retrieval",
"language:en",
"language:la",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"text",
"translation",
"latin",
"english",
"parallel-corpus",
"dataset",
"mt5",
"nlp",
"multilingual",
"summarization"
] | [
"translation",
"text-generation",
"summarization"
] | 2025-03-09T13:42:36Z | null | ---
tags:
- text
- translation
- latin
- english
- parallel-corpus
- dataset
- mt5
- nlp
- multilingual
- summarization
language:
- en
- la
license: "cc-by-4.0"
datasets:
- grosenthal/latin_english_translation
- opus
- Fece228/latin-literature-dataset-170M
pretty_name: "LatinSummarizer"
size_categories:
- 100K<n<500K
task_categories:
- translation
- text-generation
- summarization
task_ids:
- news-articles-summarization
- document-retrieval
---
# **LatinSummarizer Dataset**
## Structure
- `aligned_en_la_data_raw.csv`
- `aligned_en_la_data_cleaned.csv`
- `aligned_en_la_data_cleaned_with_stanza.csv`
- `concat_aligned_data.csv`
- `concat_cleaned.csv`
- `latin_wikipedia_cleaned.csv`
- `latin_wikipedia_raw.csv`
- `latin-literature-dataset-170M_raw_cleaned.csv`
- `latin-literature-dataset-170M_raw_cleaned_chunked.csv`
- `Elsa_aligned/`
- `README.md`
## Details
### `aligned_en_la_data_raw.csv`
This dataset contains aligned **Latin (`la`) - English (`en`)** text extracted from multiple sources to train and fine-tune machine translation models.
#### **Dataset Sources**
The dataset was built from:
- **[HuggingFace Latin-English Dataset](https://huggingface.co/datasets/grosenthal/latin_english_translation)** (101,371 rows)
- **[Bible-uedin.en-la](https://object.pouta.csc.fi/OPUS-bible-uedin/v1/moses/en-la.txt.zip) and [Wikimedia.en-la](https://object.pouta.csc.fi/OPUS-wikimedia/v20230407/moses/en-la.txt.zip)** (163,067 rows)
#### **Dataset Structure**
Each row contains an **aligned sentence pair** in **English (`en`)** and **Latin (`la`)**.
| Column | Description |
|---------|------------|
| `en` | English sentence/document |
| `la` | Corresponding Latin translation |
#### **Statistics**
| Statistic | English (en) | Latin (la) |
|:----------:|:-------------:|:------------:|
| **Mean** | 31.27 | 19.86 |
| **Std Dev** | 20.21 | 11.99 |
| **Min** | 0.00 | 0.00 |
| **Max** | 672.00 | 308.00 |
### `aligned_en_la_data_cleaned.csv`
This file contains the **cleaned version** of the dataset after preprocessing steps such as:
- Removal of unwanted symbols, HTML tags, URLs, and non-Latin characters.
- Filtering out duplicate sentences and ensuring high-quality alignment.
- Removing overly short or non-informative text entries.
#### **Cleaning Statistics**
- Removed **33 rows** with unwanted patterns.
- Removed **44,759 duplicate rows**.
- Removed **5 rows** without Roman letters.
#### **Updated Statistics**
| Statistic | English (`en`) | Latin (`la`) |
|:----------:|:-------------:|:------------:|
| **Mean** | 30.99 | 19.75 |
| **Std Dev** | 19.85 | 11.80 |
| **Min** | 1.00 | 1.00 |
| **Max** | 634.00 | 307.00 |
### `latin-literature-dataset-170M_raw_cleaned.csv`
This dataset is derived from the **[Fece228/latin-literature-dataset-170M](https://huggingface.co/datasets/Fece228/latin-literature-dataset-170M)** and consists of large Latin literary texts. It has been cleaned for **extractive summarization tasks**.
#### **Preprocessing Steps**
- Removed **0 rows** with NaN values.
- Removed **5 rows** with unwanted patterns.
- Removed **0 duplicate rows**.
- Removed **14 rows** with identical values in all columns.
- Removed **1 row** without Roman letters.
#### **Statistics (Before and After Cleaning)**
| Statistic | `text` (Before Cleaning) | `text` (After Cleaning) |
|:----------:|:----------------------:|:----------------------:|
| **Mean** | 60,133.06 | 56,802.40 |
| **Std Dev** | 231,865.86 | 225,328.53 |
| **Min** | 26 | 24 |
| **Max** | 6,545,497 | 6,545,441 |
This dataset contains classical Latin literature from various authors and is designed for **extractive summarization tasks**.
---
### `latin-literature-dataset-170M_raw_cleaned_chunked.csv`
This dataset is a **chunked version** of `latin-literature-dataset-170M_raw_cleaned.csv`. It has been split into **logical chunks of approximately 400 words**, ensuring that each chunk ends at the **end of a sentence** to preserve coherence.
#### **Chunking Methodology**
- Each text is **split into chunks of ~400 words** while ensuring sentence boundaries are respected.
- Texts were segmented logically, avoiding abrupt cutoffs in the middle of sentences.
- Chunks with **fewer than 90% of the target chunk size (360 words)** were removed to maintain consistency.
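A minimal sketch of this chunking logic is shown below; the sentence splitter and the exact boundary handling are assumptions rather than the original implementation.
```python
import re
def chunk_text(text: str, target_words: int = 400, min_ratio: float = 0.9) -> list[str]:
    """Split text into ~target_words chunks, cutting only at sentence ends."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current, count = [], [], 0
    for sentence in sentences:
        words = sentence.split()
        if current and count + len(words) > target_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)
    if count >= min_ratio * target_words:  # drop trailing chunks below 90% of the target
        chunks.append(" ".join(current))
    return chunks
```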
#### **Dataset Structure**
| Column | Description |
|---------------|------------|
| `original_index` | The index of the original text before chunking |
| `chunk_index` | The index of the chunk within the original text |
| `text_chunk` | The chunked text segment (approx. 400 words) |
| `word_count` | Number of words in the chunk |
#### **Statistics**
| Statistic | `text_chunk` (Word Count) |
|:----------:|:-----------------------:|
| **Mean** | ~400 |
| **Min** | 360 |
| **Max** | 400 |
This dataset is ideal for **extractive summarization** and **document-level processing**, ensuring that longer texts are handled efficiently in NLP tasks.
### `latin-literature-dataset-170M_raw_cleaned_chunked_summaries_graded.csv`
This dataset contains **chunked Latin literary texts** from `latin-literature-dataset-170M_raw_cleaned_chunked.csv`, with **machine-generated extractive summaries** that have been graded on a **0-100** scale using the Mistral-7B-Instruct-v0.3 model.
#### **Dataset Structure**
| Column | Description |
|--------|------------|
| `original_index` | The index of the original text before chunking |
| `chunk_index` | The index of the chunk within the original text |
| `text_chunk` | The chunked text segment (approx. 400 words) |
| `word_count` | Number of words in the chunk |
| `extractive_summary` | Machine-generated summary of the chunk |
| `grade` | Quality score of the summary (0-100) |
#### **Grading Methodology**
- **Logical Coherence & Structural Flow (50 points)**
- Does the summary present information clearly and in a logical order?
- Are ideas connected smoothly?
- Is the summary understandable without referring to the original text?
- **Content Accuracy & Depth (50 points)**
- Does the summary preserve essential details?
- Is it concise yet informative?
- Does it properly represent the original text?
#### **Grading Statistics**
| Statistic | Grade |
|------------|-------|
| **Mean** | 52.16 |
| **Median** | 47.0 |
| **Std Dev** | 14.36 |
| **Min** | 0.0 |
| **Max** | 95.0 |
This dataset allows **evaluation of summary quality** and can be used for **automatic summarization model training**.
---
### `latin_wikipedia_cleaned.csv`
This dataset is a **cleaned** version of Latin Wikipedia articles, ensuring readability and proper structure.
#### **Dataset Structure**
| Column | Description |
|---------|------------|
| `id` | Unique identifier |
| `title` | Article title |
| `url` | Wikipedia URL |
| `text` | Cleaned Latin text |
#### **Cleaning Statistics**
- **Initial dataset shape**: (202,805, 4)
- Removed **63,002 rows** with NaN values.
- Removed **172 rows** with unwanted patterns.
- Removed **568 rows** with identical values in all columns.
- **Final dataset shape**: (139,063, 4)
#### **Text Statistics**
| Statistic | `text` (Token Count) |
|:----------:|:---------------------:|
| **Mean** | 79.74 |
| **Std Dev** | 186.95 |
| **Min** | 1 |
| **Max** | 10,027 |
### `Elsa_aligned/`
This directory contains aligned Latin-English texts from **classical authors**, primarily from **Cicero, Suetonius, and Vitruvius**. Each text is carefully aligned at the sentence level for NLP tasks.
#### **Sources & Structure**
Each file follows the format:
| Column | Description |
|---------|------------|
| `en` | English sentence/document |
| `la` | Corresponding Latin translation |
##### **Included Texts**
- **Cicero's Orations:**
- *Against Publius Servilius Rullus*
- *On Pompey's Command*
- *Divinatio Against Q. Caecilius*
- *Orations Against Verres*
- *For Archias*
- *For Marcus Fonteius*
- *For Publius Quinctius*
- *For Quintus Roscius the Actor*
- *For Sextus Roscius of Ameria*
- *For Sulla*
- *Four Orations Against Lucius Catilina*
- *In Defense of Aulus Cluentius Habitus*
- *In Defense of Caius Rabirius*
- *In Defense of L. Murena*
- *In Defense of Lucius Flaccus*
- *On Behalf of Aulus Caecina*
- *On Behalf of Marcus Tullius*
- **Suetonius' Biographies:**
- *Divus Augustus, Divus Julius, Divus Claudius, Divus Titus*
- *Domitianus, Galba, Nero, Otho, Tiberius, Vitellius*
- *Caligula, Vespasianus*
- **Vitruvius Pollio's *On Architecture***
## **License**
This dataset is released under the **Creative Commons Attribution 4.0 (CC-BY-4.0) License**.
## **Usage Example**
You can load this dataset using Hugging Face's `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("LatinNLP/LatinSummarizer")
print(dataset["train"][0])
```
## **Citation**
```bibtex
@misc{LatinSummarizer,
author = {Axel Delaval, Elsa Lubek},
title = {Latin-English Parallel Dataset},
year = {2025},
url = {https://huggingface.co/datasets/LatinNLP/LatinSummarizer}
}
```
|
aryamankeyora/val_data_H04_chunked | aryamankeyora | 2025-03-11T22:57:47Z | 34 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:57:44Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: publication_number
dtype: string
- name: publication_title
dtype: string
- name: cpc
dtype: string
splits:
- name: train
num_bytes: 7489091
num_examples: 155
download_size: 2211875
dataset_size: 7489091
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aryamankeyora/train_dataset_H04 | aryamankeyora | 2025-03-11T22:56:20Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:56:16Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: publication_number
dtype: string
- name: publication_title
dtype: string
- name: cpc
dtype: string
splits:
- name: train
num_bytes: 30513320
num_examples: 466
download_size: 10721437
dataset_size: 30513320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
secmlr/noisy_dataset_filtered_QwQ-32B-Preview_small_train_len_16000_inputlen_5000 | secmlr | 2025-03-11T22:55:58Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-16T20:33:49Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: system
dtype: string
- name: idx
dtype: int64
- name: cwe
sequence: string
splits:
- name: train
num_bytes: 56465385
num_examples: 3960
download_size: 16505685
dataset_size: 56465385
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
secmlr/noisy_dataset_filtered_QwQ-32B-Preview_small_train_len_8000_inputlen_5000 | secmlr | 2025-03-11T22:55:46Z | 19 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-16T20:34:00Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: system
dtype: string
- name: idx
dtype: int64
- name: cwe
sequence: string
splits:
- name: train
num_bytes: 55924906
num_examples: 3945
download_size: 16371835
dataset_size: 55924906
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sunyiyou/openr1_dissim_aime2425_1k_lg | sunyiyou | 2025-03-11T22:54:03Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:53:57Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: longest_correct_generation
dtype: string
- name: length
dtype: int64
- name: id
dtype: int64
- name: max_similarity_to_aime
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 155857157
num_examples: 1000
download_size: 64252536
dataset_size: 155857157
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gahsilas/llmtwin | gahsilas | 2025-03-11T22:53:16Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:53:10Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1152031
num_examples: 2178
- name: test
num_bytes: 128543
num_examples: 242
download_size: 605000
dataset_size: 1280574
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
nyalpatel/condensed_librispeech_asr | nyalpatel | 2025-03-11T22:43:48Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:40:46Z | null | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 1623641436
num_examples: 1000
- name: train.clean.360
num_bytes: 1572285643
num_examples: 1000
- name: train.other.500
num_bytes: 1502809029
num_examples: 1000
- name: validation.clean
num_bytes: 65591952
num_examples: 100
- name: validation.other
num_bytes: 76760504
num_examples: 100
- name: test.clean
num_bytes: 85852252
num_examples: 100
- name: test.other
num_bytes: 58550856
num_examples: 100
download_size: 1181170369
dataset_size: 4985491672
configs:
- config_name: default
data_files:
- split: train.clean.100
path: data/train.clean.100-*
- split: train.clean.360
path: data/train.clean.360-*
- split: train.other.500
path: data/train.other.500-*
- split: validation.clean
path: data/validation.clean-*
- split: validation.other
path: data/validation.other-*
- split: test.clean
path: data/test.clean-*
- split: test.other
path: data/test.other-*
---
# Condensed LibriSpeech ASR
This dataset is a condensed version of the [LibriSpeech ASR dataset](https://huggingface.co/datasets/openslr/librispeech_asr), created by subsampling approximately 10% of the original data from each split. It is intended for quick experimentation, prototyping, and debugging when working with Automatic Speech Recognition (ASR) tasks.
## Dataset Details
- **Original Dataset:** [LibriSpeech ASR](https://huggingface.co/datasets/openslr/librispeech_asr)
- **Condensation Ratio:** Approximately 10% of the full dataset
- **Splits Included:**
- `train.clean.100`
- `train.clean.360`
- `train.other.500`
- `validation.clean`
- `validation.other`
- `test.clean`
- `test.other`
For each split, a default number of examples was extracted:
- **Training Splits:** 1,000 examples each
- **Validation/Test Splits:** 100 examples each
## Data Format
Each sample in the dataset contains the following fields:
- **file:**
A path to the original audio file (FLAC format).
- **audio:**
A dictionary containing:
- `path`: Path to the audio file.
- `array`: The decoded audio waveform as a NumPy array.
- `sampling_rate`: The sampling rate (typically 16 kHz).
- **text:**
The transcription corresponding to the audio.
- **id:**
A unique identifier for the sample.
- **speaker_id:**
A unique identifier for the speaker.
- **chapter_id:**
An identifier corresponding to the audiobook chapter.
## How Was This Dataset Created?
The condensed dataset was generated by streaming the full [LibriSpeech ASR dataset](https://huggingface.co/datasets/openslr/librispeech_asr) using the Hugging Face Datasets library and selecting approximately 10% of each split. This process preserves the original structure and fields, enabling seamless use with models and workflows designed for LibriSpeech.
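A minimal sketch of that subsampling approach is shown below, assuming the split names listed above; the config name (`"all"`) and other loader arguments are assumptions and may need adjusting for your `datasets` version — this is an illustration, not the original script.

```python
from datasets import load_dataset, Dataset

def subsample(split: str, n: int) -> Dataset:
    # Stream the full split so nothing is materialised up front, then keep the first n examples.
    stream = load_dataset("openslr/librispeech_asr", "all", split=split, streaming=True)
    return Dataset.from_list(list(stream.take(n)))

condensed = {
    "train.clean.100": subsample("train.clean.100", 1000),
    "test.clean": subsample("test.clean", 100),
}
print(condensed["test.clean"])
```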
## Usage Example
Below is a Python snippet to load and inspect the dataset:
```python
from datasets import load_dataset
# Load the condensed dataset from the Hugging Face Hub
dataset = load_dataset("nyalpatel/condensed_librispeech_asr")
# Access a specific split (e.g., test.clean)
test_dataset = dataset["test.clean"]
# Display the first example in the test set
print(test_dataset[0])
```
|
swzwan/instruction_tuning_50k | swzwan | 2025-03-11T22:36:52Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:36:50Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 74137760
num_examples: 50000
download_size: 34896234
dataset_size: 74137760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Doub7e/ArtBench-10 | Doub7e | 2025-03-11T22:35:16Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:29:59Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 32332126.0
num_examples: 1000
download_size: 32345450
dataset_size: 32332126.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qiuweihao/test-dataset-5 | qiuweihao | 2025-03-11T22:30:59Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-10T22:51:05Z | null | ---
dataset_info:
features:
- name: image_path
dtype: string
- name: caption
dtype: string
- name: image_original
dtype: image
- name: image_resized
dtype: image
- name: deepgaze_feature
dtype: image
splits:
- name: train
num_bytes: 119783022.0
num_examples: 29
download_size: 119744667
dataset_size: 119783022.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nyalpatel/librispeech_asr_condensed_100 | nyalpatel | 2025-03-11T22:24:29Z | 7 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:24:13Z | null | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 170910392
num_examples: 100
- name: train.clean.360
num_bytes: 164524216
num_examples: 100
- name: train.other.500
num_bytes: 108779994
num_examples: 100
- name: validation.clean
num_bytes: 65591952
num_examples: 100
- name: validation.other
num_bytes: 76760504
num_examples: 100
- name: test.clean
num_bytes: 85852252
num_examples: 100
- name: test.other
num_bytes: 58550856
num_examples: 100
download_size: 171527987
dataset_size: 730970166
configs:
- config_name: default
data_files:
- split: train.clean.100
path: data/train.clean.100-*
- split: train.clean.360
path: data/train.clean.360-*
- split: train.other.500
path: data/train.other.500-*
- split: validation.clean
path: data/validation.clean-*
- split: validation.other
path: data/validation.other-*
- split: test.clean
path: data/test.clean-*
- split: test.other
path: data/test.other-*
---
|
peterzehan/dmv_multicol_queries | peterzehan | 2025-03-11T22:15:04Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:14:45Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: broad_topical_query
dtype: string
- name: specific_detail_query
dtype: string
- name: visual_element_query
dtype: string
splits:
- name: train
num_bytes: 997424488.5652174
num_examples: 84
download_size: 76995440
dataset_size: 997424488.5652174
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ntsmarkv/BygHeart | ntsmarkv | 2025-03-11T21:56:47Z | 29 | 1 | [
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-09T06:05:36Z | null | ---
license: cc-by-nc-4.0
---
|
AIEXPERT1/python_lib | AIEXPERT1 | 2025-03-11T21:47:16Z | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-11T19:50:40Z | null | ---
license: apache-2.0
---
|
Bharatdeep-H/hermes-function-calling-v1-qwen-0.5b-base-cot | Bharatdeep-H | 2025-03-11T21:43:23Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:43:18Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: task
dtype: string
- name: qwen_0-5B
dtype: string
splits:
- name: train
num_bytes: 16722203
num_examples: 1893
download_size: 6728515
dataset_size: 16722203
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zijian2022/eval_act_noticehuman70_3t | zijian2022 | 2025-03-11T21:41:33Z | 23 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-11T21:41:17Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 3,
"total_frames": 1676,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
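For reference, the `data_path` template above resolves to one parquet file per episode. A minimal sketch of reading a single episode directly with pandas (independent of any LeRobot tooling) might look like this — the chunk and episode indices are illustrative:

```python
import pandas as pd

# Resolve the data_path template from meta/info.json for chunk 0, episode 0.
path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet".format(
    episode_chunk=0, episode_index=0
)

episode = pd.read_parquet(path)  # expected columns: action, observation.state, timestamp, frame_index, ...
print(episode.columns.tolist(), len(episode))
```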
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Daksh1/emotion_max_500_with_instructions | Daksh1 | 2025-03-11T21:38:30Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:38:09Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 3657342
num_examples: 16000
- name: test
num_bytes: 457571
num_examples: 2000
- name: validation
num_bytes: 454703
num_examples: 2000
download_size: 1295246
dataset_size: 4569616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
Bharatdeep-H/hermes-function-calling-v1-qwen-1.5b-base-cot | Bharatdeep-H | 2025-03-11T21:36:56Z | 73 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:36:45Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: task
dtype: string
- name: qwen_1-5B
dtype: string
splits:
- name: train
num_bytes: 16003524
num_examples: 1893
download_size: 6337116
dataset_size: 16003524
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
linnykt/latin-translation-reasoningtraces | linnykt | 2025-03-11T21:30:16Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:30:07Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: latin
dtype: string
- name: english
dtype: string
- name: source
dtype: string
- name: reasoning_trace
dtype: string
splits:
- name: train
num_bytes: 210211272.98732674
num_examples: 97870
download_size: 111328845
dataset_size: 210211272.98732674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yusser/vi_sae_wiki_tokenized | Yusser | 2025-03-11T21:14:37Z | 16 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:12:51Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 1426053800.0
num_examples: 347818
download_size: 690813596
dataset_size: 1426053800.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|