datasetId (string, len 6-116) | author (string, len 2-42) | last_modified (date, 2021-04-29 15:34:29 to 2025-06-06 00:37:09) | downloads (int64, 0 to 3.97M) | likes (int64, 0 to 7.74k) | tags (list, len 1 to 7.92k) | task_categories (list, len 0 to 48) | createdAt (date, 2022-03-02 23:29:22 to 2025-06-06 00:34:34) | trending_score (float64, 0 to 40) | card (string, len 31 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
pt-sk/City_Scape_Dataset | pt-sk | 2025-03-12T08:04:19Z | 32 | 0 | [
"task_categories:image-segmentation",
"license:mit",
"modality:image",
"region:us"
] | [
"image-segmentation"
] | 2025-03-08T09:37:27Z | null | ---
license: mit
task_categories: [image-segmentation]
---
# Cityscapes Dataset
The Cityscapes dataset is a large-scale dataset designed for urban scene understanding, with a focus on semantic segmentation, instance segmentation, and panoptic segmentation. It contains high-quality pixel-level annotations for images captured in 50 different cities, making it a benchmark for evaluating computer vision models in real-world urban environments.
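A minimal loading sketch with the `datasets` library (the `train` split name is an assumption; check the repo's file layout first):
```python
from datasets import load_dataset

# Minimal sketch: pull the dataset straight from the Hub.
# The split name "train" is an assumption, not confirmed by this card.
ds = load_dataset("pt-sk/City_Scape_Dataset", split="train")
print(ds)
```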
|
Xuan-World/SubjectSpatial200K | Xuan-World | 2025-03-12T07:59:57Z | 118 | 2 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-12T05:31:51Z | null | ---
license: apache-2.0
---
|
aractingi/insertcube_push_20 | aractingi | 2025-03-12T07:55:00Z | 32 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"a",
"r",
"c",
"t",
"i",
"n",
"g",
"/",
"s",
"e",
"u",
"b",
"_",
"p",
"h",
"2",
"0"
] | [
"robotics"
] | 2025-03-10T13:18:48Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- aractingi/insertcube_push_20
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": null,
"total_episodes": 20,
"total_frames": 1239,
"total_tasks": 1,
"total_videos": 40,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
15
],
"names": null
},
"action": {
"dtype": "float32",
"shape": [
3
],
"names": null
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"observation.images.front": {
"dtype": "video",
"shape": [
287,
458
],
"names": null,
"info": {
"video.fps": 10.0,
"video.height": 128,
"video.width": 128,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.side": {
"dtype": "video",
"shape": [
374,
404
],
"names": null,
"info": {
"video.fps": 10.0,
"video.height": 128,
"video.width": 128,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
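Given the `data_path` template above, a minimal sketch for reading one episode's parquet directly (assuming the repo files have already been downloaded locally, e.g. via `huggingface_hub.snapshot_download`):
```python
import pandas as pd

# Minimal sketch: episode 0 sits in chunk 0 per the data_path template.
path = "data/chunk-{:03d}/episode_{:06d}.parquet".format(0, 0)
df = pd.read_parquet(path)
# Column names follow the "features" block above.
print(df[["frame_index", "next.reward", "next.done"]].head())
```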
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Tony068/falcon-refined-web-5M-part2 | Tony068 | 2025-03-12T07:48:06Z | 13 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T07:42:56Z | null | ---
dataset_info:
features:
- name: content
dtype: string
- name: url
dtype: string
- name: timestamp
dtype: timestamp[us]
- name: dump
dtype: string
- name: segment
dtype: string
- name: image_urls
sequence:
sequence: string
splits:
- name: train
num_bytes: 14378051661
num_examples: 5000000
download_size: 8690458320
dataset_size: 14378051661
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gdut508/rm_so100_test_3 | gdut508 | 2025-03-12T07:30:37Z | 52 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-03-12T06:50:50Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 18,
"total_frames": 6501,
"total_tasks": 1,
"total_videos": 36,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:18"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
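LeRobot datasets are usually consumed through the library itself; the sketch below assumes the `LeRobotDataset` class and its import path, which move between lerobot releases:
```python
# Sketch only: the import path and constructor signature are assumptions
# that vary across lerobot versions; check the lerobot docs for yours.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("gdut508/rm_so100_test_3")
frame = dataset[0]
print(frame["action"].shape)  # expected: 6 motor values, per info.json
```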
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
AloePerryi/Multi-scalecommoditydataset | AloePerryi | 2025-03-12T07:27:36Z | 18 | 1 | [
"license:mit",
"modality:image",
"modality:text",
"region:us"
] | [] | 2025-02-14T17:43:02Z | null | ---
license: mit
pretty_name: Self-made Dataset MSCD
--- |
laharipattabathuni/invoice_Dataset | laharipattabathuni | 2025-03-12T07:07:18Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T07:07:11Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: markdown
dtype: string
splits:
- name: train
num_bytes: 7401128.0
num_examples: 8
- name: validation
num_bytes: 1999923.0
num_examples: 2
download_size: 5408154
dataset_size: 9401051.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
kxxinDave/GEOVQ_Subset_500 | kxxinDave | 2025-03-12T07:05:15Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T07:03:52Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: problem
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 1416119.0
num_examples: 500
download_size: 1389388
dataset_size: 1416119.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nanidhoni/guanaco-llama2-1k | nanidhoni | 2025-03-12T07:00:18Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T07:00:16Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 965614
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yashduhan/DeepFakeDetection | yashduhan | 2025-03-12T07:00:02Z | 27 | 0 | [
"task_categories:image-classification",
"language:en",
"modality:image",
"region:us",
"computer-vision",
"deepfake-detection",
"image-classification",
"image"
] | [
"image-classification"
] | 2025-03-12T06:00:29Z | null | ---
pretty_name: "DeepFakeDetection"
dataset_info:
description: "A dataset for DeepFake detection containing real and fake images. Each image has a resolution of 256x256 pixels."
features:
- name: "image"
type: "image"
- name: "label"
type: "class_label"
names: ["real", "fake"]
splits:
- name: "train"
num_examples: 112000 # 80% of total data
- name: "validation"
num_examples: 14000 # 10% of total data
- name: "test"
num_examples: 14000 # 10% of total data
dataset_size: "4GB"
image_resolution: "256x256"
class_distribution:
total:
real: 70000
fake: 70000
train:
real: 56000
fake: 56000
validation:
real: 7000
fake: 7000
test:
real: 7000
fake: 7000
language: "en"
tags:
- "computer-vision"
- "deepfake-detection"
- "image-classification"
- "image"
task_categories:
- "image-classification"
---
# DeepFakeDetection Dataset
This repository contains a comprehensive dataset for DeepFake detection research and development. The dataset consists of 140,000 high-quality images split between real and fake categories.
## Dataset Overview
- Total Images: 140,000
- Real Images: 70,000
- Fake Images: 70,000
## Data Splits
- Training Set (80%): 112,000 images
- 56,000 real
- 56,000 fake
- Validation Set (10%): 14,000 images
- 7,000 real
- 7,000 fake
- Test Set (10%): 14,000 images
- 7,000 real
- 7,000 fake
## Image Specifications
- Resolution: 256x256 pixels
- Format: RGB
- Dataset Size: 4GB
## Tags
- Computer Vision
- DeepFake Detection
- Image Classification
## 📥 Download
[Click here to download the dataset](./DeepFakeDetection.zip)
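Since the data ships as a zip rather than parquet, a minimal sketch using torchvision's `ImageFolder` after extraction (the `train/real` and `train/fake` folder layout is an assumption based on the splits and classes above):
```python
from torchvision import datasets, transforms

# Minimal sketch: assumes the zip extracts to train/, validation/ and test/
# directories, each holding real/ and fake/ subfolders (not confirmed).
transform = transforms.Compose([
    transforms.Resize((256, 256)),  # images are 256x256 per the card
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("DeepFakeDetection/train", transform=transform)
print(train_set.classes)  # expected: ['fake', 'real']
```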
|
devika12312/eva | devika12312 | 2025-03-12T06:58:46Z | 15 | 0 | [
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-03-12T06:24:26Z | null | ---
size_categories:
- n<1K
---
# [doc] Extracted Code Dataset
This dataset contains images in the `images/` subdirectory, along with a `dataset.csv` file that provides the extracted code from each image.
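A minimal sketch for pairing each image with its extracted code via the CSV (column names are assumptions; inspect `dataset.csv` first):
```python
import pandas as pd

# Minimal sketch: the CSV maps each image to its extracted code, but the
# exact column names are not documented here, so inspect them first.
df = pd.read_csv("dataset.csv")
print(df.columns.tolist())
print(df.head())
```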
|
luke-kr/H-colqwen | luke-kr | 2025-03-12T06:52:07Z | 12 | 0 | [
"task_categories:question-answering",
"language:ko",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2025-03-10T05:04:53Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: image
- name: queries
struct:
- name: broad_topical_explanation
dtype: string
- name: broad_topical_query
dtype: string
- name: parsed_into_json
dtype: bool
- name: specific_detail_explanation
dtype: string
- name: specific_detail_query
dtype: string
- name: visual_element_explanation
dtype: string
- name: visual_element_query
dtype: string
- name: broad_topical_explanation
dtype: string
- name: broad_topical_query
dtype: string
- name: specific_detail_explanation
dtype: string
- name: specific_detail_query
dtype: string
- name: visual_element_explanation
dtype: string
- name: visual_element_query
dtype: string
- name: parsed_into_json
dtype: bool
splits:
- name: train
num_bytes: 59707808.0
num_examples: 98
download_size: 58864918
dataset_size: 59707808.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
language:
- ko
---
|
rokmr/cricket-shot | rokmr | 2025-03-12T06:43:35Z | 36 | 0 | [
"task_categories:video-classification",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us",
"Sports",
"Cricket",
"Classification"
] | [
"video-classification"
] | 2025-03-10T08:47:49Z | null | ---
license: apache-2.0
task_categories:
- video-classification
language:
- en
tags:
- Sports
- Cricket
- Classification
size_categories:
- 1K<n<10K
---
# CricketShotClassification Dataset
## Dataset Description
This dataset is designed for video classification of cricket shots. It contains labeled videos of ten different cricket shots, making it suitable for training and evaluating machine learning models for cricket action recognition.
## Dataset Structure
The dataset contains videos of ten cricket shots:
| Shot Name | Label | Class ID |
|---------------|------------|-----------|
| Cover Drive | cover | 0 |
| Defense Shot | defense | 1 |
| Flick Shot | flick | 2 |
| Hook Shot | hook | 3 |
| Late Cut | late_cut | 4 |
| Lofted Drive | lofted | 5 |
| Pull Shot | pull | 6 |
| Square Cut | square_cut | 7 |
| Straight Drive| straight | 8 |
| Sweep Shot | sweep | 9 |
### Data Splits
- Train: 1,250 samples (125 per class)
- Validation: 250 samples (25 per class)
- Test: 250 samples (25 per class)
### Directory Structure
```
dataset/
├── train/
│   ├── cover/
│   ├── defense/
│   ├── flick/
│   ├── hook/
│   ├── late_cut/
│   ├── lofted/
│   ├── pull/
│   ├── square_cut/
│   ├── straight/
│   └── sweep/
├── val/
│   ├── cover/
│   ├── defense/
│   ├── flick/
│   ├── hook/
│   ├── late_cut/
│   ├── lofted/
│   ├── pull/
│   ├── square_cut/
│   ├── straight/
│   └── sweep/
└── test/
    ├── cover/
    ├── defense/
    ├── flick/
    ├── hook/
    ├── late_cut/
    ├── lofted/
    ├── pull/
    ├── square_cut/
    ├── straight/
    └── sweep/
```
## Usage
```python
from datasets import load_dataset
# Load the dataset from the hub
dataset = load_dataset("rokmr/cricket-shot")
# Access different splits
train_data = dataset['train']
val_data = dataset['validation']
test_data = dataset['test']
```
## Limitations and Bias
- Limited sample size per class
- May not cover all possible variations of each shot
- Lighting and camera angle variations may affect model performance
## Citation
If you use this dataset, please cite:
```
@misc{cricketshot,
author = {Rohit Kumar},
title = {Cricket Shot Classification Dataset},
year = {2025},
publisher = {Hugging Face}
}
```
## Acknowledgements
The dataset structure and initial inspiration were drawn from the [CricShot10 dataset](https://github.com/ascuet/CricShot10). We thank the authors for making their work publicly available.
## Dataset Card Contact
For questions or concerns about the dataset, please contact the maintainers through the Hugging Face community platform. |
Tony068/falcon-refined-web-5M | Tony068 | 2025-03-12T06:41:31Z | 44 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T06:36:02Z | null | ---
dataset_info:
features:
- name: content
dtype: string
- name: url
dtype: string
- name: timestamp
dtype: timestamp[us]
- name: dump
dtype: string
- name: segment
dtype: string
- name: image_urls
sequence:
sequence: string
splits:
- name: train
num_bytes: 14330816788
num_examples: 5000000
download_size: 8665021107
dataset_size: 14330816788
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rvanova/RedPajama-Data-1K-Sample-For-Test | rvanova | 2025-03-12T06:34:00Z | 13 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T06:31:20Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 5318686001
num_examples: 930514
download_size: 3059217244
dataset_size: 5318686001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
abdurrahimyilmaz/DermaSynth | abdurrahimyilmaz | 2025-03-12T06:00:39Z | 94 | 0 | [
"task_categories:image-to-text",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.00196",
"region:us",
"dermatology",
"skin_lesion"
] | [
"image-to-text"
] | 2025-02-07T17:31:12Z | null | ---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-to-text
dataset_info:
features:
- name: image_id
dtype: string
- name: image_path
dtype: string
- name: dataset_origin
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 43420729
num_examples: 92020
download_size: 14384301
dataset_size: 43420729
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- dermatology
- skin_lesion
---
GitHub Link: https://github.com/abdurrahimyilmaz/DermaSynth
Paper: https://arxiv.org/pdf/2502.00196
Fine-tuned Model: https://huggingface.co/abdurrahimyilmaz/DermatoLlama-1.0
Please cite the paper if you use the model or dataset:
```bibtex
@article{yilmaz2025dermasynth,
title={DermaSynth: Rich Synthetic Image-Text Pairs Using Open Access Dermatology Datasets},
author={Yilmaz, Abdurrahim and Yuceyalcin, Furkan and Gokyayla, Ece and Choi, Donghee and Erdem, Ozan and Demircali, Ali Anil and Varol, Rahmetullah and Kirabali, Ufuk Gorkem and Gencoglan, Gulsum and Posma, Joram M and Temelkuran, Burak},
journal={arXiv preprint arXiv:2502.00196},
year={2025}
}
```
<p align="center">
<img src="word_wheel_figure.png" width="80%"> <br>
*The most common 20 root verb-noun pairs of the question set are shown in Figure 2, which indicates that the question set is diverse. Prompts spanned a wide range of question types, such as "What does this lesion look like?" and "Could this be X?", and were manually screened and selected for clarity and medical relevance.*
</p>
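A minimal loading sketch for the parquet export (column names per the `dataset_info` block above):
```python
from datasets import load_dataset

# Minimal sketch: fetch one of the 92,020 image-text QA pairs.
ds = load_dataset("abdurrahimyilmaz/DermaSynth", split="train")
row = ds[0]
print(row["image_path"], "|", row["question"])
```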
| DermaSynth Dataset | Size |
| --- | ---: |
| [derma_synth_92k.json](https://huggingface.co/datasets/abdurrahimyilmaz/DermaSynth/resolve/main/data/derma_synth_92k.json?download=true) | 52.4 MiB | |
juyoung-trl/Global-MMLU-Lite-CA | juyoung-trl | 2025-03-12T05:59:31Z | 60 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:32:40Z | null | ---
dataset_info:
- config_name: ar
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 179091.0
num_examples: 200
- name: dev
num_bytes: 166163
num_examples: 215
download_size: 176384
dataset_size: 345254.0
- config_name: bn
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 262771.5
num_examples: 200
- name: dev
num_bytes: 249598
num_examples: 215
download_size: 206535
dataset_size: 512369.5
- config_name: de
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 152937.0
num_examples: 200
- name: dev
num_bytes: 133268
num_examples: 215
download_size: 164891
dataset_size: 286205.0
- config_name: default
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 1463028.5
num_examples: 1800
download_size: 620996
dataset_size: 1463028.5
- config_name: en
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 138009.0
num_examples: 200
- name: dev
num_bytes: 119795
num_examples: 215
download_size: 148671
dataset_size: 257804.0
- config_name: es
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 148890.5
num_examples: 200
- name: dev
num_bytes: 131340
num_examples: 215
download_size: 157157
dataset_size: 280230.5
- config_name: fr
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 154842.0
num_examples: 200
- name: dev
num_bytes: 136111
num_examples: 215
download_size: 163261
dataset_size: 290953.0
- config_name: hi
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 258649.0
num_examples: 200
- name: dev
num_bytes: 255215
num_examples: 215
download_size: 206586
dataset_size: 513864.0
- config_name: id
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 142321.5
num_examples: 200
- name: dev
num_bytes: 127171
num_examples: 215
download_size: 145705
dataset_size: 269492.5
- config_name: it
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 149796.0
num_examples: 200
- name: dev
num_bytes: 128894
num_examples: 215
download_size: 159277
dataset_size: 278690.0
- config_name: ja
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 155404.5
num_examples: 200
- name: dev
num_bytes: 136334
num_examples: 215
download_size: 161358
dataset_size: 291738.5
- config_name: ko
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 148020.0
num_examples: 200
- name: dev
num_bytes: 131241
num_examples: 215
download_size: 155896
dataset_size: 279261.0
- config_name: pt
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 145189.5
num_examples: 200
- name: dev
num_bytes: 126355
num_examples: 215
download_size: 156556
dataset_size: 271544.5
- config_name: sw
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 140300.0
num_examples: 200
- name: dev
num_bytes: 119949
num_examples: 215
download_size: 144527
dataset_size: 260249.0
- config_name: yo
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 172060.0
num_examples: 200
- name: dev
num_bytes: 125389
num_examples: 215
download_size: 157998
dataset_size: 297449.0
- config_name: zh
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: test
num_bytes: 127185.5
num_examples: 200
- name: dev
num_bytes: 103758
num_examples: 215
download_size: 140963
dataset_size: 230943.5
configs:
- config_name: ar
data_files:
- split: test
path: ar/test-*
- split: dev
path: ar/dev-*
- config_name: bn
data_files:
- split: test
path: bn/test-*
- split: dev
path: bn/dev-*
- config_name: de
data_files:
- split: test
path: de/test-*
- split: dev
path: de/dev-*
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: en
data_files:
- split: test
path: en/test-*
- split: dev
path: en/dev-*
- config_name: es
data_files:
- split: test
path: es/test-*
- split: dev
path: es/dev-*
- config_name: fr
data_files:
- split: test
path: fr/test-*
- split: dev
path: fr/dev-*
- config_name: hi
data_files:
- split: test
path: hi/test-*
- split: dev
path: hi/dev-*
- config_name: id
data_files:
- split: test
path: id/test-*
- split: dev
path: id/dev-*
- config_name: it
data_files:
- split: test
path: it/test-*
- split: dev
path: it/dev-*
- config_name: ja
data_files:
- split: test
path: ja/test-*
- split: dev
path: ja/dev-*
- config_name: ko
data_files:
- split: test
path: ko/test-*
- split: dev
path: ko/dev-*
- config_name: pt
data_files:
- split: test
path: pt/test-*
- split: dev
path: pt/dev-*
- config_name: sw
data_files:
- split: test
path: sw/test-*
- split: dev
path: sw/dev-*
- config_name: yo
data_files:
- split: test
path: yo/test-*
- split: dev
path: yo/dev-*
- config_name: zh
data_files:
- split: test
path: zh/test-*
- split: dev
path: zh/dev-*
---
|
DetonateT2I/DetonateT2I_final2 | DetonateT2I | 2025-03-12T05:53:30Z | 40 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:17:07Z | null | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Chosen
dtype: image
- name: Rejected
dtype: image
- name: Category
dtype: string
splits:
- name: train
num_bytes: 91782817017.596
num_examples: 31404
download_size: 91817913847
dataset_size: 91782817017.596
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jackkuo/LLM-Ribozyme-Kinetics-Golden-Benchmark | jackkuo | 2025-03-12T05:52:05Z | 44 | 0 | [
"license:cc",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-24T02:07:43Z | null | ---
license: cc
---
### Citation
Please cite the following paper if you use jackkuo/LLM-Ribozyme-Kinetics-Golden-Benchmark in your work.
```bibtex
@article {Jiang2025.03.03.641178,
author = {Jiang, Jinling and Hu, Jie and Xie, Siwei and Guo, Menghao and Dong, Yuhang and Fu, Shuai and Jiang, Xianyue and Yue, Zhenlei and Shi, Junchao and Zhang, Xiaoyu and Song, Minghui and Chen, Guangyong and Lu, Hua and Wu, Xindong and Guo, Pei and Han, Da and Sun, Zeyi and Qiu, Jiezhong},
title = {Enzyme Co-Scientist: Harnessing Large Language Models for Enzyme Kinetic Data Extraction from Literature},
elocation-id = {2025.03.03.641178},
year = {2025},
doi = {10.1101/2025.03.03.641178},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The extraction of molecular annotations from scientific literature is critical for advancing data-driven research. However, traditional methods, which primarily rely on human curation, are labor-intensive and error-prone. Here, we present an LLM-based agentic workflow that enables automatic and efficient data extraction from literature with high accuracy. As a demonstration, our workflow successfully delivers a dataset containing over 91,000 enzyme kinetics entries from around 3,500 papers. It achieves an average F1 score above 0.9 on expert-annotated subsets of protein enzymes and can be extended to the ribozyme domain in fewer than 3 days at less than $90. This method opens up new avenues for accelerating the pace of scientific research.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178},
eprint = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178.full.pdf},
journal = {bioRxiv}
}
``` |
hijklmno/jenny-tts-6h-tagged | hijklmno | 2025-03-12T05:51:48Z | 29 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:51:44Z | null | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: text_description
dtype: string
splits:
- name: train
num_bytes: 2261218
num_examples: 4000
download_size: 1054595
dataset_size: 2261218
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jackkuo/LLM-Enzyme-Kinetics-Golden-Benchmark | jackkuo | 2025-03-12T05:50:56Z | 12 | 0 | [
"license:cc",
"region:us"
] | [] | 2024-11-13T06:21:39Z | null | ---
license: cc
---
### Citation
Please cite the following paper if you use jackkuo/LLM-Enzyme-Kinetics-Golden-Benchmark in your work.
```bibtex
@article {Jiang2025.03.03.641178,
author = {Jiang, Jinling and Hu, Jie and Xie, Siwei and Guo, Menghao and Dong, Yuhang and Fu, Shuai and Jiang, Xianyue and Yue, Zhenlei and Shi, Junchao and Zhang, Xiaoyu and Song, Minghui and Chen, Guangyong and Lu, Hua and Wu, Xindong and Guo, Pei and Han, Da and Sun, Zeyi and Qiu, Jiezhong},
title = {Enzyme Co-Scientist: Harnessing Large Language Models for Enzyme Kinetic Data Extraction from Literature},
elocation-id = {2025.03.03.641178},
year = {2025},
doi = {10.1101/2025.03.03.641178},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The extraction of molecular annotations from scientific literature is critical for advancing data-driven research. However, traditional methods, which primarily rely on human curation, are labor-intensive and error-prone. Here, we present an LLM-based agentic workflow that enables automatic and efficient data extraction from literature with high accuracy. As a demonstration, our workflow successfully delivers a dataset containing over 91,000 enzyme kinetics entries from around 3,500 papers. It achieves an average F1 score above 0.9 on expert-annotated subsets of protein enzymes and can be extended to the ribozyme domain in fewer than 3 days at less than $90. This method opens up new avenues for accelerating the pace of scientific research.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178},
eprint = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178.full.pdf},
journal = {bioRxiv}
}
``` |
hsurapur/logs | hsurapur | 2025-03-12T05:46:40Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-16T04:00:59Z | null | ---
dataset_info:
features:
- name: query
dtype: string
- name: timestamp
dtype: string
splits:
- name: train
num_bytes: 1098
num_examples: 30
download_size: 1632
dataset_size: 1098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/safe_responses_dev_8maxturns_0.7prob | VGraf | 2025-03-12T05:38:17Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:38:06Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1295603716
num_examples: 75744
download_size: 192505334
dataset_size: 1295603716
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tarakeshwaran/Hackathon_Sentiment_analysis | Tarakeshwaran | 2025-03-12T05:37:01Z | 44 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:16:42Z | null | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Timestamp
dtype: string
- name: User
dtype: string
- name: Platform
dtype: string
- name: Hashtags
dtype: string
- name: Retweets
dtype: float64
- name: Likes
dtype: float64
- name: Country
dtype: string
- name: Year
dtype: int64
- name: Month
dtype: int64
- name: Day
dtype: int64
- name: Hour
dtype: int64
- name: Sentiment
dtype: string
splits:
- name: train
num_bytes: 198093
num_examples: 732
download_size: 78808
dataset_size: 198093
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Sentiment Analysis Dataset
## Description
This dataset contains social media posts labeled with sentiment categories. It includes metadata such as user details, timestamps, engagement metrics, and hashtags, making it useful for sentiment analysis, natural language processing (NLP), and social media analytics.
## Dataset Details
### **Columns:**
- **Text**: The content of the social media post.
- **Sentiment**: The sentiment classification (**Positive, Negative, Neutral**).
- **Timestamp**: The date and time when the post was made.
- **User**: The username of the person who posted the content.
- **Platform**: The social media platform (**Twitter, Instagram, Facebook, etc.**).
- **Hashtags**: Hashtags used in the post.
- **Retweets**: Number of retweets (for Twitter) or shares.
- **Likes**: Number of likes the post received.
- **Country**: The country from which the post originated.
- **Year, Month, Day, Hour**: Extracted datetime components for time-based analysis.
### **Notes:**
- The dataset contains **732 entries**.
- The **Unnamed: 0** and **Unnamed: 0.1** columns appear to be redundant and can be ignored.
- This dataset can be used for **training sentiment classification models** or **analyzing engagement trends**.
## Use Cases
- **Sentiment analysis** of social media content.
- **Engagement analysis** of posts based on likes and retweets.
- **Trend analysis** of public opinion over time.
## How to Use
You can load the dataset using the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Tarakeshwaran/Hackathon_Sentiment_analysis")
print(dataset)
```
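With the columns above, a quick follow-up sketch for an engagement-by-sentiment summary (assuming the default `train` split):
```python
# Minimal sketch: average engagement per sentiment class, reusing the
# `dataset` object loaded above.
df = dataset["train"].to_pandas()
print(df.groupby("Sentiment")[["Likes", "Retweets"]].mean())
```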
|
VGraf/safe_responses_dev_8maxturns_0.4prob | VGraf | 2025-03-12T05:35:25Z | 35 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:35:17Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 746999327
num_examples: 43728
download_size: 176232910
dataset_size: 746999327
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hijklmno/jenny-tts-tags-6h | hijklmno | 2025-03-12T05:34:31Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:32:11Z | null | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
splits:
- name: train
num_bytes: 1832348
num_examples: 4000
download_size: 928052
dataset_size: 1832348
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/safe_responses_dev_4maxturns_0.5prob | VGraf | 2025-03-12T05:34:01Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:33:55Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 340638169
num_examples: 27096
download_size: 105494154
dataset_size: 340638169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
XUHAN8088/xhh | XUHAN8088 | 2025-03-12T05:30:44Z | 47 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T09:46:15Z | null | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text_label
dtype: string
splits:
- name: train
num_bytes: 33911
num_examples: 161
download_size: 5230
dataset_size: 33911
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Leon-Leee/unofficial-pyedu | Leon-Leee | 2025-03-12T05:23:02Z | 34 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2502.02737",
"region:us"
] | [] | 2025-03-12T02:52:05Z | null | ---
dataset_info:
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: text
dtype: string
- name: download_success
dtype: bool
splits:
- name: train
num_bytes: 13499266964
num_examples: 7678448
download_size: 6086016638
dataset_size: 13499266964
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# About This Dataset
The HuggingFaceTB team has released an impressive series of models called smollm (V1/V2) (paper: https://arxiv.org/abs/2502.02737).
According to their documentation, they used Stack-Edu as the code-domain corpus for pretraining and published https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus.
However, for some reason, only a Python-Edu subset is accessible, and it has no content/text field.
The full dataset is stored on AWS S3; downloading it effectively requires an AWS EC2 instance, because AWS's rate limits will otherwise keep the download from ever finishing.
Fortunately, the py-edu subset is small enough (~7 million files) to be affordable; downloading the entire set takes approximately one hour.
I am publishing the complete py-edu dataset here for anyone who needs it.
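Given the ~13.5 GB train split, a minimal streaming sketch so nothing has to be fully downloaded up front (column names per the `dataset_info` block above):
```python
from datasets import load_dataset

# Minimal sketch: stream rows instead of materializing the full split.
ds = load_dataset("Leon-Leee/unofficial-pyedu", split="train", streaming=True)
row = next(iter(ds))
print(row["repo_name"], row["path"], row["score"])
```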
If this release inadvertently causes any issues for the HuggingFaceTB team, please reach out to me and I will remove it immediately. |
mteb/SpokeN-100-English | mteb | 2025-03-12T05:18:52Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:18:43Z | null | ---
dataset_info:
features:
- name: file
dtype: string
- name: label
dtype: int64
- name: speaker
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 90801340.0
num_examples: 3200
download_size: 85243025
dataset_size: 90801340.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
distillslm/alpaca-Qwen2.5-7B-Instruct-generated | distillslm | 2025-03-12T05:14:04Z | 13 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T08:03:01Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 67859621
num_examples: 46698
- name: test
num_bytes: 7460810
num_examples: 5192
download_size: 38090687
dataset_size: 75320431
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
sihyun77/eval_act_so100_suho4 | sihyun77 | 2025-03-12T05:11:14Z | 23 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T05:11:04Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 8,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
sihyun77/eval_act_so100_suho3 | sihyun77 | 2025-03-12T05:08:05Z | 24 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T05:07:53Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 3797,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
nouhad/multiplication_test_100_4x4 | nouhad | 2025-03-12T05:07:31Z | 26 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:07:23Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 10262
num_examples: 100
download_size: 5875
dataset_size: 10262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_test_100_10x10 | nouhad | 2025-03-12T05:04:40Z | 13 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:04:38Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 13860
num_examples: 100
download_size: 9247
dataset_size: 13860
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_test_100_9x9 | nouhad | 2025-03-12T05:04:38Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T05:04:37Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 13268
num_examples: 100
download_size: 8660
dataset_size: 13268
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shaquille69/conversation | shaquille69 | 2025-03-12T05:03:54Z | 8 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"question-answering"
] | 2025-03-12T05:03:51Z | null | ---
size_categories: n<1K
task_categories:
- text-generation
- text2text-generation
- question-answering
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 68978
num_examples: 10
download_size: 35298
dataset_size: 68978
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for conversation
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/shaquille69/conversation/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/shaquille69/conversation/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"messages": [
{
"content": "You are an AI assistant designed to engage in normal, everyday conversations with users. Your purpose is to respond to a wide range of topics, from casual discussions about the weather to more substantial conversations about news, opinions, and interests. Provide engaging, relevant, and coherent responses that keep the conversation flowing smoothly, while also being mindful of the user\u0027s tone and context.\n\nRespond to queries with a mix of factual information, personal anecdotes, and creative insights, all while maintaining a friendly and approachable demeanor. Be prepared to transition between different subjects and topics as the conversation unfolds, and use your understanding of context and tone to ensure that your responses are always relevant and engaging.\n\nWhen possible, offer users suggestions for further conversation or topics to explore, and be prepared to adapt to their interests and preferences. Keep the conversation natural and spontaneous, just as you would with a human friend, and strive to build a rapport with the user over time.\n\nBe aware of nuances in language, such as idioms, colloquialisms, and figurative language, and use your understanding of context to ensure that your responses are always clear and effective.\n\nUser questions are direct and concise.",
"role": "system"
},
{
"content": "What\u0027s the best way to handle stress?",
"role": "user"
},
{
"content": "You know, I think managing stress is a really personal thing, but I\u0027ve found that it\u0027s all about finding those little \"hacks\" \u2013 habits that help you unwind and recharge. For me, it\u0027s been all about exercise, whether it\u0027s going for a walk, hitting the gym, or even just some good old-fashioned yoga. It\u0027s funny, but when I\u0027m feeling really overwhelmed, it\u0027s often because I\u0027ve been stuck inside staring at screens for too long. So, getting outside, even just for a short walk, really helps get my mind clear.\n\nAnother thing that works for me is creative activities, like drawing, painting, or even just doodling. Sometimes, I\u0027ll pull out my sketchbook and just start scribbling, and it helps me relax in the moment. Of course, everyone\u0027s different, but that\u0027s why it\u0027s so crucial to experiment and find what works best for you.\n\nI\u0027ve also found that practicing mindfulness has helped me manage stress better. It\u0027s about being present and letting go of all that mental chatter. It\u0027s not always easy, but it\u0027s something that\u0027s really helped me learn to breathe through those tough moments.\n\nWhat about you, do you have any tricks up your sleeve for managing stress?",
"role": "assistant"
},
{
"content": "Do you believe that one can really achieve significant self-improvement through the power of habits? \n\n",
"role": "user"
},
{
"content": "You know, I think habits are a powerful tool for self-improvement, but it\u0027s not just about forming new habits, it\u0027s also about breaking the old ones that hold us back. I mean, think about it \u2013 most of us have a default mode of behavior that\u0027s been shaped by years of repetition, and that can be hard to shift. But, the thing is, once we develop new habits, they can start to compound and really drive change. For example, when I was trying to get back into running, I started with just a few minutes a day, and before I knew it, I was hooked.\n\nIt\u0027s also worth noting that the key to sustained self-improvement is to find habits that actually align with your goals and values, rather than trying to force change into a preconceived identity. It\u0027s all about creating a sense of identity that\u0027s cohesive and authentic. For me, it\u0027s been about recognizing what\u0027s truly important and finding habits that support that identity.\n\nAnother thing that I\u0027ve found helpful is to focus on small wins, rather than trying to tackle everything at once. It\u0027s really about building momentum and confidence through incremental successes. So, if you\u0027re looking to make significant self-improvement, I\u0027d say start with a small habit change**, and then gradually build on it.\n\nYou know, I also want to mention that it\u0027s all about progress, not perfection. It\u0027s okay to stumble and make mistakes \u2013 it\u0027s all part of the journey, and the important thing is to learn from them and keep moving forward.\n\nWould you like to talk more about habit change or something else related to self-improvement?",
"role": "assistant"
},
{
"content": "Do you know what would be really helpful to have an equipped learning plan to help student learning student learning plan to help students become more self-directed learners increase their academic resilience.",
"role": "user"
},
{
"content": null,
"role": "assistant"
}
]
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("shaquille69/conversation", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("shaquille69/conversation")
```
</details>
|
jjaehyeok2/kimbab_kimchisoup_tteokbokki | jjaehyeok2 | 2025-03-12T05:02:12Z | 12 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:48:34Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 77735601.0
num_examples: 417
download_size: 77606007
dataset_size: 77735601.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HeyixInn0/Reorganized-mbpp | HeyixInn0 | 2025-03-12T04:55:50Z | 41 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-09T21:44:11Z | null | ---
dataset_info:
features:
- name: entry_func
dtype: string
- name: solution
dtype: string
- name: task_name
dtype: string
- name: doc_string
dtype: string
- name: compare_func
sequence: 'null'
- name: tgt_lang
dtype: string
- name: suffix
dtype: string
- name: import_str
sequence: string
- name: src_lang
dtype: 'null'
- name: demos
sequence: 'null'
- name: test_cases
sequence:
sequence: string
- name: data_id
dtype: int64
- name: prefix
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 126254
num_examples: 257
download_size: 61055
dataset_size: 126254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shakil2448868/bangla-songs-synthetic-prompt | Shakil2448868 | 2025-03-12T04:52:38Z | 53 | 0 | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"language:bn",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation",
"text-generation"
] | 2025-03-07T10:32:42Z | null | ---
dataset_info:
features:
- name: Writer
dtype: string
- name: Title
dtype: string
- name: Song
dtype: string
- name: syn_prompt
dtype: string
splits:
- name: train
num_bytes: 13531276
num_examples: 2222
download_size: 5547102
dataset_size: 13531276
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text2text-generation
- text-generation
language:
- bn
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
kishoreb8220/finetuning_demo | kishoreb8220 | 2025-03-12T04:51:37Z | 55 | 0 | [
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:51:35Z | null | ---
dataset_info:
features:
- name: prompt
dtype: float64
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 548
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_code_best_sharegpt | mlfoundations-dev | 2025-03-12T04:15:55Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:15:53Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 140
num_examples: 1
download_size: 4557
dataset_size: 140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dfdsfgf/big_math_dataset | dfdsfgf | 2025-03-12T04:07:03Z | 46 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:06:57Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 16305977
num_examples: 97467
download_size: 9339580
dataset_size: 16305977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
singsan9song/smhrd_llama3_korea | singsan9song | 2025-03-12T04:05:31Z | 11 | 0 | [
"license:llama3",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:36:29Z | null | ---
license: llama3
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 6161
num_examples: 19
download_size: 5159
dataset_size: 6161
---
|
qishenfu/debug_2_0311 | qishenfu | 2025-03-12T04:04:52Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T04:04:30Z | null | ---
dataset_info:
features:
- name: system
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: gvkey
dtype: int64
- name: fyear
dtype: int64
splits:
- name: train
num_bytes: 47551
num_examples: 5
download_size: 22155
dataset_size: 47551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gguava/xiaoshuo | gguava | 2025-03-12T03:54:36Z | 30 | 0 | [
"license:artistic-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:53:56Z | null | ---
license: artistic-2.0
---
|
passionMan/dataset_seed_evol_cherry204 | passionMan | 2025-03-12T03:48:59Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T03:48:56Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 5084080
num_examples: 2611
download_size: 2501289
dataset_size: 5084080
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
speedxd/CFSC-Dataset | speedxd | 2025-03-12T03:28:08Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-10T08:14:44Z | null | ---
dataset_info:
features:
- name: tweet
dtype: string
- name: entity
dtype: string
- name: sentiment
dtype: int64
splits:
- name: train
num_bytes: 448644
num_examples: 1698
- name: test
num_bytes: 120676
num_examples: 433
download_size: 181765
dataset_size: 569320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
sihyun77/eval_act_so100_suho | sihyun77 | 2025-03-12T03:18:50Z | 22 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T03:18:39Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 3790,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
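For orientation, the `data_path` and `video_path` templates in `info.json` resolve per episode. The following sketch (illustrative only, not part of the LeRobot tooling shown here) demonstrates how an episode's parquet path is formed:

```python
# Resolve the parquet path for one episode from the templates above.
episode_index = 0
chunks_size = 1000                            # from info.json
episode_chunk = episode_index // chunks_size  # episodes are grouped in chunks

data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet".format(
    episode_chunk=episode_chunk, episode_index=episode_index
)
print(data_path)  # data/chunk-000/episode_000000.parquet
```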
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
yuebanlaosiji/e-girl | yuebanlaosiji | 2025-03-12T03:03:03Z | 22 | 0 | [
"language:zh",
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-08T02:23:53Z | null | ---
language:
- zh
license: apache-2.0
---
|
SKIML-ICL/nq_retrieved_adversarial_sentence_sim | SKIML-ICL | 2025-03-12T03:00:08Z | 7 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:54:56Z | null | ---
dataset_info:
config_name: adversarial
features:
- name: qid
dtype: int64
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_sentence
dtype: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: nli
dtype: string
- name: pid
dtype: int64
- name: rank
dtype: int64
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: hasanswer
dtype: bool
- name: answerable
dtype: string
- name: qa_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: unanswerable_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: conflict_cases
list:
- name: answers
sequence: string
- name: context
dtype: string
- name: distance
dtype: string
- name: original_answers
sequence: string
- name: question
dtype: string
- name: named_entities
sequence: string
- name: input
dtype: string
- name: prompt
dtype: string
- name: adversarial_sentence
dtype: string
- name: cosine_similarity
dtype: float32
splits:
- name: train
num_bytes: 1637167855.0
num_examples: 69106
- name: validation
num_bytes: 181329026.0
num_examples: 7659
download_size: 864882517
dataset_size: 1818496881.0
configs:
- config_name: adversarial
data_files:
- split: train
path: adversarial/train-*
- split: validation
path: adversarial/validation-*
---
|
dean1111ii/medical_01_sft | dean1111ii | 2025-03-12T02:59:28Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T04:00:01Z | null | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Complex_CoT
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 157599
num_examples: 73
download_size: 71065
dataset_size: 157599
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
agentlans/FarisHijazi-kajiwoto.ai-chat | agentlans | 2025-03-12T02:46:20Z | 13 | 0 | [
"task_categories:text-generation",
"language:en",
"region:us",
"roleplay",
"kajiwoto",
"sharegpt"
] | [
"text-generation"
] | 2025-03-11T20:12:20Z | null | ---
language:
- en
tags:
- roleplay
- kajiwoto
- sharegpt
task_categories:
- text-generation
---
# FarisHijazi/kajiwoto.ai-chat
## Dataset Description
This dataset is a processed version of the [FarisHijazi/kajiwoto.ai-chat](https://huggingface.co/datasets/FarisHijazi/kajiwoto.ai-chat) dataset, containing extracted and re-formatted conversation data from Kajiwoto. Kajiwoto is a platform where users can create and interact with AI character datasets. This version focuses on providing the core conversational data in a ShareGPT-like format.
**Important Note:** This data is still in a raw format and requires careful consideration before use in training.
* **Size:** 13,784 rows
## Data Fields
The dataset contains the following fields, mirroring the original Kajiwoto dataset structure:
* `id`: Unique identifier for the character.
* `name`: Name of the character.
* `description`: Character description (often empty).
* `profilePhotoUri`: URI of the character's profile photo (may be null).
* `dominantColors`: List of dominant colors in the profile photo (may be null).
* `personalities`: Personality tags (may be null).
* `nsfw`: Boolean indicating whether the character is marked as "not safe for work."
* `conversations`: A list of turns representing a conversation. Each turn has:
* `from`: Either "human" or "gpt" indicating the speaker.
* `value`: The text of the message.
* `upvotes`: Number of upvotes the character has received.
## Data Format
The `conversations` field is formatted in a ShareGPT-like structure, where each turn indicates the speaker ("human" or "gpt") and the message content:
```json
{
"id": "M51k",
"name": "Damian ",
"description": "",
"profilePhotoUri": null,
"dominantColors": null,
"personalities": null,
"nsfw": false,
"conversations": [
{
"from": "human",
"value": "Still a horrific introvert?"
},
{
"from": "gpt",
"value": "I know you wanted me to find some friends, but I've already got one here in Gotham and one is more than enough"
},
{
"from": "human",
"value": "How is the friend making going "
},
{
"from": "gpt",
"value": "I know you wanted me to find some friends, but I've already got one here in Gotham and one is more than enough"
},
{
"from": "human",
"value": "You're not very good at this relax and have fun thing are you"
},
{
"from": "gpt",
"value": "I wasn't raised to have fun"
},
...
],
"upvotes": 0
}
```
## Intended Uses
This dataset is intended for research and experimentation in areas such as:
* AI character development
* Dialogue generation
* Role-playing AI
* Analyzing conversational patterns
## Caveats and Recommendations
### Preprocessing Considerations:
* **Filtering:**
* **Upvotes:** Consider filtering by the number of upvotes. However, be aware that profiles with fewer views might have lower upvotes, even if the quality is high.
* **NSFW:** The `nsfw` field may not be completely reliable. Some users engage in NSFW chats with characters not marked as NSFW. Manual inspection may be required.
* **Text Processing:**
* The text contains special markup that needs to be handled (a minimal parsing sketch follows this list):
* `#scene=happy`: Scene descriptions.
* `%{Howdy|G'day|Yo}`: Multiple possible options for a phrase.
* Pronouns: require handling for consistency.
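One way to handle the scene and variant markup is sketched below. This is an illustrative reconstruction, not tooling that ships with the dataset; the regex patterns and function name are assumptions.

```python
import random
import re

SCENE_TAG = re.compile(r"#scene=\w+")   # matches markup like "#scene=happy"
VARIANT = re.compile(r"%\{([^{}]*)\}")  # matches groups like "%{Howdy|G'day|Yo}"

def clean_message(text: str, seed: int = 0) -> str:
    """Strip scene tags and pick one option from each %{...|...} group."""
    rng = random.Random(seed)
    text = SCENE_TAG.sub("", text)
    text = VARIANT.sub(lambda m: rng.choice(m.group(1).split("|")), text)
    return " ".join(text.split())  # collapse leftover whitespace

print(clean_message("#scene=happy %{Howdy|G'day|Yo}, partner!"))
# -> e.g. "Howdy, partner!"
```

Sampling one variant per group keeps messages natural; alternatively, expanding every combination would multiply the number of training examples.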
### Limitations
* The data might not be sufficiently filtered for training without further processing.
* Many short, repetitive messages may not accurately represent character behavior.
* The dataset contains a mixture of good and bad quality conversations. |
cchoi1/humaneval_qwen7b_att_iter0_ppo_att20_sol50 | cchoi1 | 2025-03-12T02:42:52Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:42:49Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: string
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
- name: mutation_info
dtype: string
- name: score
dtype: float64
- name: scores
dtype: string
- name: solutions
dtype: string
- name: solutions_info
dtype: string
splits:
- name: train
num_bytes: 17626355
num_examples: 1326
download_size: 1385490
dataset_size: 17626355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andy309/eval_act_so100_0311_1152_03121028 | andy309 | 2025-03-12T02:31:01Z | 24 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T02:30:42Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1258,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.back": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mlfoundations-dev/train_fasttext_classifier_seed_math_worst_mix_5_3_1 | mlfoundations-dev | 2025-03-12T02:13:55Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:13:35Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_worst_1 | mlfoundations-dev | 2025-03-12T02:13:36Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:13:07Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_worst_3 | mlfoundations-dev | 2025-03-12T02:12:28Z | 55 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:11:59Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_best_automath | mlfoundations-dev | 2025-03-12T02:12:16Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:11:18Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 140
num_examples: 1
download_size: 4557
dataset_size: 140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_best_tigerlab | mlfoundations-dev | 2025-03-12T02:11:23Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:11:17Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 140
num_examples: 1
download_size: 4557
dataset_size: 140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_math_best_s1 | mlfoundations-dev | 2025-03-12T02:10:48Z | 25 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T02:10:46Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dirganmdcp/yfinance_Indonesia_Stock_Exchange | dirganmdcp | 2025-03-12T02:05:32Z | 51 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-12T02:05:32Z | null | ---
license: apache-2.0
---
|
infinite-dataset-hub/AsianFacesForBiasDetection | infinite-dataset-hub | 2025-03-12T02:04:43Z | 24 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2025-03-12T02:04:42Z | null | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# AsianFacesForBiasDetection
tags: ethics, diversity, AI
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'AsianFacesForBiasDetection' dataset is aimed at identifying and mitigating bias in facial recognition AI systems. It contains a diverse range of images representing various Asian ethnicities to train models to recognize and understand the ethnic diversity within Asian faces. The dataset includes features like age, gender, and ethnicity diversity, which are critical in promoting fairness and diversity in AI. The label column indicates whether the face image shows potential signs of bias in recognition accuracy.
**CSV Content Preview:**
```csv
id, image_url, label
1, https://example.com/asian_face1.jpg, No_Bias
2, https://example.com/asian_face2.jpg, Minor_Bias
3, https://example.com/asian_face3.jpg, Major_Bias
4, https://example.com/asian_face4.jpg, No_Bias
5, https://example.com/asian_face5.jpg, No_Bias
```
**Dataset Description:**
The 'AsianFacesForBiasDetection' dataset consists of high-resolution images of Asian faces with a focus on promoting diversity in AI facial recognition technologies. The dataset is created with the aim of reducing racial bias and increasing the accuracy of AI systems across different ethnic groups. The images in the dataset are labeled according to the level of detection bias observed in preliminary studies, ranging from no bias to varying degrees of potential bias.
**CSV Content Preview:**
```csv
id, image_url, label
1, https://example.com/asian_face1.jpg, No_Bias
2, https://example.com/asian_face2.jpg, Mild_Bias
3, https://example.com/asian_face3.jpg, Moderate_Bias
4, https://example.com/asian_face4.jpg, No_Bias
5, https://example.com/asian_face5.jpg, No_Bias
```
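As a quick illustration, the label distribution can be inspected once the CSV is downloaded. The file name below is an assumption; note `skipinitialspace=True` handles the spaces after commas in the preview header:

```python
import pandas as pd

# Assumed local file name; adjust to wherever the CSV is saved.
df = pd.read_csv("asian_faces_for_bias_detection.csv", skipinitialspace=True)
print(df["label"].value_counts())
```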
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'Asian face':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=Asian+face&dataset=AsianFacesForBiasDetection&tags=ethics,+diversity,+AI
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
zijian2022/llmtest102 | zijian2022 | 2025-03-12T01:29:45Z | 16 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-03-12T01:29:37Z | null | ---
task_categories:
- robotics
tags:
- LeRobot
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
EMMMMMnn/test | EMMMMMnn | 2025-03-12T01:22:51Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T12:07:42Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '100'
'4': '101'
'5': '102'
'6': '103'
'7': '104'
'8': '105'
'9': '106'
'10': '107'
'11': '108'
'12': '109'
'13': '11'
'14': '110'
'15': '111'
'16': '112'
'17': '113'
'18': '114'
'19': '115'
'20': '116'
'21': '117'
'22': '118'
'23': '119'
'24': '12'
'25': '120'
'26': '121'
'27': '122'
'28': '123'
'29': '124'
'30': '125'
'31': '126'
'32': '127'
'33': '128'
'34': '129'
'35': '13'
'36': '130'
'37': '131'
'38': '132'
'39': '133'
'40': '134'
'41': '135'
'42': '136'
'43': '137'
'44': '138'
'45': '139'
'46': '14'
'47': '140'
'48': '141'
'49': '142'
'50': '143'
'51': '144'
'52': '145'
'53': '146'
'54': '147'
'55': '148'
'56': '149'
'57': '15'
'58': '150'
'59': '151'
'60': '152'
'61': '153'
'62': '154'
'63': '155'
'64': '156'
'65': '157'
'66': '158'
'67': '159'
'68': '16'
'69': '160'
'70': '161'
'71': '162'
'72': '163'
'73': '164'
'74': '165'
'75': '166'
'76': '167'
'77': '168'
'78': '169'
'79': '17'
'80': '170'
'81': '171'
'82': '172'
'83': '173'
'84': '174'
'85': '175'
'86': '176'
'87': '177'
'88': '178'
'89': '179'
'90': '18'
'91': '180'
'92': '181'
'93': '182'
'94': '183'
'95': '184'
'96': '185'
'97': '186'
'98': '187'
'99': '188'
'100': '189'
'101': '19'
'102': '190'
'103': '191'
'104': '192'
'105': '193'
'106': '194'
'107': '195'
'108': '196'
'109': '197'
'110': '198'
'111': '199'
'112': '2'
'113': '20'
'114': '21'
'115': '22'
'116': '23'
'117': '24'
'118': '25'
'119': '26'
'120': '27'
'121': '28'
'122': '29'
'123': '3'
'124': '30'
'125': '31'
'126': '32'
'127': '33'
'128': '34'
'129': '35'
'130': '36'
'131': '37'
'132': '38'
'133': '39'
'134': '4'
'135': '40'
'136': '41'
'137': '42'
'138': '43'
'139': '44'
'140': '45'
'141': '46'
'142': '47'
'143': '48'
'144': '49'
'145': '5'
'146': '50'
'147': '51'
'148': '52'
'149': '53'
'150': '54'
'151': '55'
'152': '56'
'153': '57'
'154': '58'
'155': '59'
'156': '6'
'157': '60'
'158': '61'
'159': '62'
'160': '63'
'161': '64'
'162': '65'
'163': '66'
'164': '67'
'165': '68'
'166': '69'
'167': '7'
'168': '70'
'169': '71'
'170': '72'
'171': '73'
'172': '74'
'173': '75'
'174': '76'
'175': '77'
'176': '78'
'177': '79'
'178': '8'
'179': '80'
'180': '81'
'181': '82'
'182': '83'
'183': '84'
'184': '85'
'185': '86'
'186': '87'
'187': '88'
'188': '89'
'189': '9'
'190': '90'
'191': '91'
'192': '92'
'193': '93'
'194': '94'
'195': '95'
'196': '96'
'197': '97'
'198': '98'
'199': '99'
splits:
- name: train
num_bytes: 72435133.0
num_examples: 7500
download_size: 69992616
dataset_size: 72435133.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pzhang1990/DIOR-RSVG | pzhang1990 | 2025-03-12T01:21:18Z | 21 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T09:47:38Z | null | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image_path
dtype: string
- name: image
dtype: image
- name: question_id
dtype: int64
- name: question
dtype: string
- name: bbox
sequence: int64
splits:
- name: test
num_bytes: 2473956035.5
num_examples: 7500
download_size: 2012317038
dataset_size: 2473956035.5
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
droid-ai/libero | droid-ai | 2025-03-12T01:15:39Z | 15 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-03-12T01:08:44Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "libero",
"total_episodes": 1693,
"total_frames": 273465,
"total_tasks": 40,
"total_videos": 0,
"total_chunks": 2,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1693"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"is_first": {
"dtype": "bool",
"shape": [
1
]
},
"is_last": {
"dtype": "bool",
"shape": [
1
]
},
"is_terminal": {
"dtype": "bool",
"shape": [
1
]
},
"image": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"wrist_image": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"state": {
"dtype": "float32",
"shape": [
8
]
},
"joint_state": {
"dtype": "float32",
"shape": [
7
]
},
"actions": {
"dtype": "float32",
"shape": [
7
]
},
"discount": {
"dtype": "float32",
"shape": [
1
]
},
"reward": {
"dtype": "float32",
"shape": [
1
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
BroAlanTaps/Pretrain-Stage1-1024 | BroAlanTaps | 2025-03-12T01:12:00Z | 18 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T10:08:33Z | null | ---
dataset_info:
features:
- name: dump
dtype: string
- name: text
dtype: string
- name: target
dtype: string
- name: compress_ids
sequence: int64
- name: llm_ids
sequence: int64
- name: next_ids
sequence: int64
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 55937435446
num_examples: 1689078
- name: test
num_bytes: 22384728
num_examples: 676
download_size: 19806231580
dataset_size: 55959820174
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Svngoku/AHistoryOfSouthAfrica | Svngoku | 2025-03-12T00:59:06Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:58:56Z | null | ---
dataset_info:
features:
- name: chunk_id
dtype: int64
- name: content
dtype: string
- name: metadata
struct:
- name: author
dtype: string
- name: creationDate
dtype: string
- name: creationdate
dtype: string
- name: creator
dtype: string
- name: file_path
dtype: string
- name: format
dtype: string
- name: keywords
dtype: string
- name: modDate
dtype: string
- name: moddate
dtype: string
- name: page
dtype: int64
- name: producer
dtype: string
- name: source
dtype: string
- name: start_index
dtype: int64
- name: subject
dtype: string
- name: title
dtype: string
- name: total_pages
dtype: int64
- name: trapped
dtype: string
- name: page_image
dtype: string
splits:
- name: train
num_bytes: 226940841
num_examples: 1056
download_size: 82229093
dataset_size: 226940841
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lime66/eval_so100_test | lime66 | 2025-03-12T00:54:14Z | 29 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-03-12T00:54:03Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 469,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mlfoundations-dev/scale_up_science_25K | mlfoundations-dev | 2025-03-12T00:51:07Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:50:49Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 726700099
num_examples: 25002
download_size: 363148840
dataset_size: 726700099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/scale_up_science_1K | mlfoundations-dev | 2025-03-12T00:51:00Z | 11 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:50:49Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 29065678.705703545
num_examples: 1000
download_size: 14816353
dataset_size: 29065678.705703545
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/scale_up_science_4K | mlfoundations-dev | 2025-03-12T00:50:58Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:50:50Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
- name: problem
dtype: string
- name: __original_row_idx
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: discipline
dtype: string
- name: expert
dtype: string
- name: num_topics
dtype: int64
- name: num_subtopics
dtype: int64
- name: num_questions
dtype: int64
- name: topic
dtype: string
- name: subtopic
dtype: string
- name: score
dtype: int64
- name: year
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 116262714.82281418
num_examples: 4000
download_size: 58413798
dataset_size: 116262714.82281418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rd211/eth-mathdial-conversations | rd211 | 2025-03-12T00:49:19Z | 87 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-27T23:13:11Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: ground_truth
dtype: string
- name: student_incorrect_solution
dtype: string
- name: student_profile
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: student_name
dtype: string
splits:
- name: train
num_bytes: 8955666
num_examples: 2261
- name: test
num_bytes: 2338595
num_examples: 599
download_size: 3746870
dataset_size: 11294261
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mlfoundations-dev/train_fasttext_classifier_seed_code_worst_8 | mlfoundations-dev | 2025-03-12T00:45:55Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:45:53Z | null | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 134
num_examples: 1
download_size: 4527
dataset_size: 134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
passionMan/dataset_seed_evol_cherry206 | passionMan | 2025-03-12T00:40:56Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:40:52Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 7844357
num_examples: 3917
download_size: 3830834
dataset_size: 7844357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bp7274/fam_DS_split | bp7274 | 2025-03-12T00:30:38Z | 21 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:30:33Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2185
num_examples: 17
- name: validation
num_bytes: 221
num_examples: 2
- name: test
num_bytes: 611
num_examples: 5
download_size: 7904
dataset_size: 3017
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
sequelbox/Raiden-DeepSeek-R1 | sequelbox | 2025-03-12T00:26:21Z | 76 | 43 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"raiden",
"creative",
"analytical",
"reasoning",
"rational",
"deepseek",
"r1",
"685b"
] | [
"text-generation"
] | 2025-02-11T04:54:53Z | null | ---
license: apache-2.0
tags:
- raiden
- creative
- analytical
- reasoning
- rational
- deepseek
- r1
- 685b
language:
- en
task_categories:
- text-generation
size_categories:
- 10K<n<100K
---
**[Click here to support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**
**Raiden-DeepSeek-R1** is a dataset containing creative-reasoning and analytic-reasoning responses, testing the limits of [DeepSeek R1's](https://huggingface.co/deepseek-ai/DeepSeek-R1) reasoning skills!
This dataset contains:
- 63k 'creative_content' and 'analytical_reasoning' prompts from [microsoft/orca-agentinstruct-1M-v1](https://huggingface.co/datasets/microsoft/orca-agentinstruct-1M-v1), with all responses generated by [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).
- Responses demonstrate the reasoning capabilities of DeepSeek's 685b parameter R1 reasoning model.
**Responses have not been filtered or edited at all:** the Raiden dataset strives to accurately represent the R1 model. Potential issues may include inaccurate answers and infinite thought loops. Raiden is presented as-is to be used at your discretion.
Users should consider applying their own sub-filtering and manual examination of the dataset before use in training.
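As one minimal starting point for such sub-filtering, the sketch below drops rows whose responses run abnormally long, a crude proxy for the infinite thought loops noted above. The `response` column name is an assumption; inspect the actual schema before filtering:
```python
from datasets import load_dataset

# Minimal sub-filtering sketch. The "response" column name is an
# assumption; check dataset.column_names for the real schema.
dataset = load_dataset("sequelbox/Raiden-DeepSeek-R1", split="train")

MAX_CHARS = 20_000  # crude proxy for runaway thought loops

filtered = dataset.filter(
    lambda row: row["response"] is not None and len(row["response"]) < MAX_CHARS
)
print(f"Kept {len(filtered)} of {len(dataset)} rows")
```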
Do as you will. |
Argenis29/Adam | Argenis29 | 2025-03-12T00:25:29Z | 16 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-12T00:25:29Z | null | ---
license: apache-2.0
---
|
hanaearg/ARSaSDataOk | hanaearg | 2025-03-12T00:23:38Z | 13 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:23:34Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 8088700
num_examples: 15368
- name: test
num_bytes: 2047861
num_examples: 3842
download_size: 2156422
dataset_size: 10136561
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Mohamed-DLM/eld7e7_KHOAFoxaZaQ_mp3_updated_updated | Mohamed-DLM | 2025-03-12T00:23:30Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:23:27Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 33229432.0
num_examples: 72
download_size: 33205826
dataset_size: 33229432.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jjz5463/hard-negatives | jjz5463 | 2025-03-12T00:04:15Z | 28 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T08:48:01Z | null | ---
dataset_info:
features:
- name: anchor
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_bytes: 1358605308
num_examples: 789850
download_size: 850578085
dataset_size: 1358605308
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rzgar/kurdish_sorani_wikipedia_gemma_2b_it | rzgar | 2025-03-12T00:03:57Z | 57 | 0 | [
"language:ku",
"license:cc-by-sa-4.0",
"modality:text",
"region:us"
] | [] | 2025-02-27T15:22:01Z | null | ---
license: cc-by-sa-4.0
language:
- ku
pretty_name: Lang CKB Gemma-2b-it
---
# Kurdish Sorani Wikipedia Text Dataset for Gemma-2-2b-it Fine-Tuning
## Dataset Description
This dataset contains Kurdish Sorani text extracted from Wikipedia, preprocessed and formatted for fine-tuning the `google/gemma-2-2b-it` language model. The goal of this dataset is to improve the model's ability to generate fluent, grammatically correct, and contextually relevant Kurdish Sorani text, particularly in the style of Wikipedia articles.
## Dataset Statistics
* **Number of lines:** 52,430
* **Total characters:** 99,080,311
* **Language:** Kurdish Sorani (Central Kurdish)
* **Source:** Wikipedia
* **Format:** Plain text, preprocessed with Gemma-compatible templates
## Data Collection and Preprocessing
The dataset was created through the following steps:
1. **Data Extraction:** Kurdish Sorani text was scraped directly from Wikipedia articles.
* List of URLs: [rzgar/ckb_wikipedia_kurdish_sorani_urls](https://huggingface.co/datasets/rzgar/ckb_wikipedia_kurdish_sorani_urls)
2. **Text Cleaning:** The extracted text was cleaned (see the sketch after this list) to remove:
* Extra whitespace
* HTML tags (if any were present)
* Unicode Left-to-Right Marks
* Citation Marks
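A minimal sketch of such a cleaning function, assuming simple regex rules; the exact rules used to build this dataset are not published, so this approximates rather than reproduces the pipeline:
```python
import re

def clean_text(text: str) -> str:
    """Approximate the cleaning steps above; the rules are assumptions."""
    text = re.sub(r"<[^>]+>", "", text)        # HTML tags, if any were present
    text = text.replace("\u200e", "")          # Unicode Left-to-Right Marks
    text = re.sub(r"\[\d+\]", "", text)        # citation marks such as [12]
    text = re.sub(r"\s+", " ", text).strip()   # extra whitespace
    return text

print(clean_text("Wîkîpediya<b></b> [3] \u200e  test"))
```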
## Licensing Information
This dataset is shared under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/). Users are free to use, adapt, and share the data, provided attribution is given to the source (CKB Wikipedia) and derivative works are shared under the same license.
## Acknowledgements
This dataset was created with the help of open-source tools and resources, including:
* Hugging Face Transformers library
* NLTK (Natural Language Toolkit)
* We gratefully acknowledge the countless contributors to Wikipedia who have dedicated their time and effort to creating, translating, editing, and fact-checking Kurdish Sorani content. Their commitment to making knowledge freely accessible is the foundation of this dataset. Without their contributions, this project would not have been possible.
|
cchoi1/bigcode_bench_debug | cchoi1 | 2025-03-12T00:03:48Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T00:03:47Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: string
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
- name: mutation_info
dtype: string
- name: score
dtype: float64
- name: scores
dtype: string
- name: solutions
dtype: string
- name: solutions_info
dtype: string
splits:
- name: train
num_bytes: 4605
num_examples: 2
download_size: 15010
dataset_size: 4605
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/safety_pref_stack_dev | VGraf | 2025-03-11T23:50:39Z | 42 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:23:47Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 645031379
num_examples: 14547
download_size: 272759205
dataset_size: 645031379
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andrewbai/distilabel-intel-orca-dpo-pairs_filtered_pref-skywork-8B | andrewbai | 2025-03-11T23:44:16Z | 24 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:44:09Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 21345528
num_examples: 5922
- name: test
num_bytes: 1890252
num_examples: 500
download_size: 11963272
dataset_size: 23235780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
farrell236/LUNA25_ts_seg | farrell236 | 2025-03-11T23:33:46Z | 62 | 0 | [
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-07T16:26:54Z | null | ---
title: "LUNA25_TS_seg"
license: cc-by-nc-sa-4.0
configs:
- config_name: metadata
data_files: "metadata.csv"
---
# LUNA25 TS Segmentations
The [LUNA25 challenge](https://luna25.grand-challenge.org/) serves as a benchmark for evaluating lung nodule detection in low-dose CT scans. This repository includes segmentations generated using [TotalSegmentator](https://github.com/wasserth/TotalSegmentator) for the `total`, `lung_vessels`, and `lung_nodules` tasks. Note: Segmentation volumes have not been independently verified and are supplied "as is".
## Steps to recreate
1. Use `scripts/convert_nifti.py` and `scripts/convert_nifti.sh` to convert LUNA25 `.mha` files to `.nii.gz` (a single-file sketch follows the list).
2. Install [TotalSegmentator](https://github.com/wasserth/TotalSegmentator).
3. Run TS segmentation scripts (written for parallel execution):
- `scripts/ts_total.sh`
- `scripts/ts_lung_vessels.sh`
- `scripts/ts_lung_nodules.sh`
4. Run `scripts/get_metadata.py` to get metadata for detected lung nodules.
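As a rough, single-file illustration of steps 1 and 3 (file names and paths below are placeholders; batching and parallel execution are left to the repository's shell scripts):
```python
import SimpleITK as sitk
from totalsegmentator.python_api import totalsegmentator

# Step 1: convert one LUNA25 .mha volume to .nii.gz.
image = sitk.ReadImage("luna25_mha/case_0001.mha")
sitk.WriteImage(image, "luna25_nifti/case_0001.nii.gz")

# Step 3: run one TotalSegmentator task on the converted volume;
# repeat with task="total" and task="lung_nodules" for the other outputs.
totalsegmentator("luna25_nifti/case_0001.nii.gz",
                 "segmentations/case_0001",
                 task="lung_vessels")
```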
```text
Installation Info:
- Date installed: Mar 6, 2025.
- Python version: 3.9.15
- TotalSegmentator version: 2.7.0
```
For the full list of package versions, see [requirements.txt](requirements.txt).
## Citation
```text
@misc{luna25,
title={{LUNA25: LUng Nodule Analysis Challenge}},
author={{Grand-Challenge.org}},
  year={2025},
howpublished={\url{https://luna25.grand-challenge.org/}},
}
@article{wasserthal2023totalsegmentator,
title={TotalSegmentator: robust segmentation of 104 anatomic structures in CT images},
author={Wasserthal, Jakob and Breit, Hanns-Christian and Meyer, Manfred T and Pradella, Maurice and Hinck, Daniel and Sauter, Alexander W and Heye, Tobias and Boll, Daniel T and Cyriac, Joshy and Yang, Shan and others},
journal={Radiology: Artificial Intelligence},
volume={5},
number={5},
pages={e230024},
year={2023},
publisher={Radiological Society of North America}
}
@article{poletti2022automated,
title={Automated lung vessel segmentation reveals blood vessel volume redistribution in viral pneumonia},
author={Poletti, Julien and Bach, Michael and Yang, Shan and Sexauer, Raphael and Stieltjes, Bram and Rotzinger, David C and Bremerich, Jens and Sauter, Alexander Walter and Weikert, Thomas},
journal={European Journal of Radiology},
volume={150},
pages={110259},
year={2022},
publisher={Elsevier}
}
```
|
takara-ai/rand-1m-multimodal | takara-ai | 2025-03-11T23:33:08Z | 100 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-10T22:00:56Z | null | ---
dataset_info:
features:
- name: svg
dtype: string
- name: prompt
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 135789832903.0
num_examples: 1000000
download_size: 131282059668
dataset_size: 135789832903.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
esfrankel17/original_ChatbotArena55k_binarized_Qwen2.5-0.5B-Instruct_preferences | esfrankel17 | 2025-03-11T23:27:32Z | 34 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:27:24Z | null | ---
dataset_info:
features:
- name: original_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_chosen_rating
dtype: int64
- name: original_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_rejected_rating
dtype: int64
- name: model_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_agreed_with_original
dtype: bool
splits:
- name: winner
num_bytes: 267600836
num_examples: 36149
download_size: 142981504
dataset_size: 267600836
configs:
- config_name: default
data_files:
- split: winner
path: data/winner-*
---
|
AlignmentResearch/JailbreakInputs | AlignmentResearch | 2025-03-11T23:24:11Z | 39 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-28T22:19:38Z | null | ---
dataset_info:
- config_name: default
features:
- name: completion
dtype: string
- name: instructions
dtype: string
- name: answer_prompt
dtype: string
- name: content
sequence: string
- name: clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: proxy_clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
splits:
- name: train
num_bytes: 79911587.0
num_examples: 41332
- name: validation
num_bytes: 17641875.0
num_examples: 9772
download_size: 54772617
dataset_size: 97553462.0
- config_name: neg
features:
- name: completion
dtype: string
- name: instructions
dtype: string
- name: answer_prompt
dtype: string
- name: content
sequence: string
- name: clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: proxy_clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
splits:
- name: train
num_bytes: 40139467.18540114
num_examples: 20761
- name: validation
num_bytes: 8842601.693614408
num_examples: 4898
download_size: 25071265
dataset_size: 48982068.87901555
- config_name: pos
features:
- name: completion
dtype: string
- name: instructions
dtype: string
- name: answer_prompt
dtype: string
- name: content
sequence: string
- name: clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: proxy_clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
splits:
- name: train
num_bytes: 39772119.81459886
num_examples: 20571
- name: validation
num_bytes: 8799273.306385592
num_examples: 4874
download_size: 28841247
dataset_size: 48571393.12098445
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- config_name: neg
data_files:
- split: train
path: neg/train-*
- split: validation
path: neg/validation-*
- config_name: pos
data_files:
- split: train
path: pos/train-*
- split: validation
path: pos/validation-*
---
|
mlfoundations-dev/SWEbench_evalchemy | mlfoundations-dev | 2025-03-11T23:21:03Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:21:00Z | null | ---
dataset_info:
features:
- name: context
list:
- name: content
dtype: string
- name: role
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_gen_toks
dtype: int64
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: repeat_idx
dtype: int64
- name: request_idx
dtype: int64
- name: task_name
dtype: string
- name: metadata
struct:
- name: expected_answer
dtype: string
- name: problem_id
dtype: string
- name: reference_solution
dtype: string
splits:
- name: train
num_bytes: 17344328
num_examples: 300
download_size: 7616607
dataset_size: 17344328
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Lauther/measuring-embeddings-v4 | Lauther | 2025-03-11T23:18:38Z | 34 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:18:33Z | null | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2581238.157295698
num_examples: 3075
- name: test
num_bytes: 553182.4213521512
num_examples: 659
- name: validation
num_bytes: 553182.4213521512
num_examples: 659
download_size: 100660
dataset_size: 3687603.0000000005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
DanOgh07/eval_icl_sample | DanOgh07 | 2025-03-11T23:05:32Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T23:05:30Z | null | ---
dataset_info:
features:
- name: input
sequence: string
- name: target
dtype: string
splits:
- name: eval
num_bytes: 43176528
num_examples: 731
download_size: 300575
dataset_size: 43176528
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
---
|
LatinNLP/LatinSummarizer | LatinNLP | 2025-03-11T22:58:19Z | 60 | 0 | [
"task_categories:translation",
"task_categories:text-generation",
"task_categories:summarization",
"task_ids:news-articles-summarization",
"task_ids:document-retrieval",
"language:en",
"language:la",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"text",
"translation",
"latin",
"english",
"parallel-corpus",
"dataset",
"mt5",
"nlp",
"multilingual",
"summarization"
] | [
"translation",
"text-generation",
"summarization"
] | 2025-03-09T13:42:36Z | null | ---
tags:
- text
- translation
- latin
- english
- parallel-corpus
- dataset
- mt5
- nlp
- multilingual
- summarization
language:
- en
- la
license: "cc-by-4.0"
datasets:
- grosenthal/latin_english_translation
- opus
- Fece228/latin-literature-dataset-170M
pretty_name: "LatinSummarizer"
size_categories:
- 100K<n<500K
task_categories:
- translation
- text-generation
- summarization
task_ids:
- news-articles-summarization
- document-retrieval
---
# **LatinSummarizer Dataset**
## Structure
- `aligned_en_la_data_raw.csv`
- `aligned_en_la_data_cleaned.csv`
- `aligned_en_la_data_cleaned_with_stanza.csv`
- `concat_aligned_data.csv`
- `concat_cleaned.csv`
- `latin_wikipedia_cleaned.csv`
- `latin_wikipedia_raw.csv`
- `latin-literature-dataset-170M_raw_cleaned.csv`
- `latin-literature-dataset-170M_raw_cleaned_chunked.csv`
- `latin-literature-dataset-170M_raw_cleaned_chunked_summaries_graded.csv`
- `Elsa_aligned/`
- `README.md`
## Details
### `aligned_en_la_data_raw.csv`
This dataset contains aligned **Latin (`la`) - English (`en`)** text extracted from multiple sources to train and fine-tune machine translation models.
#### **Dataset Sources**
The dataset was built from:
- **[HuggingFace Latin-English Dataset](https://huggingface.co/datasets/grosenthal/latin_english_translation)** (101,371 rows)
- **[Bible-uedin.en-la](https://object.pouta.csc.fi/OPUS-bible-uedin/v1/moses/en-la.txt.zip) and [Wikimedia.en-la](https://object.pouta.csc.fi/OPUS-wikimedia/v20230407/moses/en-la.txt.zip)** (163,067 rows)
#### **Dataset Structure**
Each row contains an **aligned sentence pair** in **English (`en`)** and **Latin (`la`)**.
| Column | Description |
|---------|------------|
| `en` | English sentence/document |
| `la` | Corresponding Latin translation |
#### **Statistics**
| Statistic | English (en) | Latin (la) |
|:----------:|:-------------:|:------------:|
| **Mean** | 31.27 | 19.86 |
| **Std Dev** | 20.21 | 11.99 |
| **Min** | 0.00 | 0.00 |
| **Max** | 672.00 | 308.00 |
### `aligned_en_la_data_cleaned.csv`
This file contains the **cleaned version** of the dataset after preprocessing steps such as:
- Removal of unwanted symbols, HTML tags, URLs, and non-Latin characters.
- Filtering out duplicate sentences and ensuring high-quality alignment.
- Removing overly short or non-informative text entries.
#### **Cleaning Statistics**
- Removed **33 rows** with unwanted patterns.
- Removed **44,759 duplicate rows**.
- Removed **5 rows** without Roman letters.
#### **Updated Statistics**
| Statistic | English (`en`) | Latin (`la`) |
|:----------:|:-------------:|:------------:|
| **Mean** | 30.99 | 19.75 |
| **Std Dev** | 19.85 | 11.80 |
| **Min** | 1.00 | 1.00 |
| **Max** | 634.00 | 307.00 |
### `latin-literature-dataset-170M_raw_cleaned.csv`
This dataset is derived from the **[Fece228/latin-literature-dataset-170M](https://huggingface.co/datasets/Fece228/latin-literature-dataset-170M)** and consists of large Latin literary texts. It has been cleaned for **extractive summarization tasks**.
#### **Preprocessing Steps**
- Removed **0 rows** with NaN values.
- Removed **5 rows** with unwanted patterns.
- Removed **0 duplicate rows**.
- Removed **14 rows** with identical values in all columns.
- Removed **1 row** without Roman letters.
#### **Statistics (Before and After Cleaning)**
| Statistic | `text` (Before Cleaning) | `text` (After Cleaning) |
|:----------:|:----------------------:|:----------------------:|
| **Mean** | 60,133.06 | 56,802.40 |
| **Std Dev** | 231,865.86 | 225,328.53 |
| **Min** | 26 | 24 |
| **Max** | 6,545,497 | 6,545,441 |
This dataset contains classical Latin literature from various authors and is designed for **extractive summarization tasks**.
---
### `latin-literature-dataset-170M_raw_cleaned_chunked.csv`
This dataset is a **chunked version** of `latin-literature-dataset-170M_raw_cleaned.csv`. It has been split into **logical chunks of approximately 400 words**, ensuring that each chunk ends at the **end of a sentence** to preserve coherence.
#### **Chunking Methodology**
- Each text is **split into chunks of ~400 words** while ensuring sentence boundaries are respected (see the sketch after this list).
- Texts were segmented logically, avoiding abrupt cutoffs in the middle of sentences.
- Chunks with **fewer than 90% of the target chunk size (360 words)** were removed to maintain consistency.
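A minimal sketch of this strategy, assuming greedy accumulation of regex-split sentences; the original chunking script may differ in detail:
```python
import re

def chunk_text(text: str, target_words: int = 400, min_ratio: float = 0.9):
    """Greedy sentence-boundary chunking; an approximation of the
    methodology above, not the original script."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current, count = [], [], 0
    for sentence in sentences:
        words = sentence.split()
        if current and count + len(words) > target_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)
    if count >= min_ratio * target_words:  # drop undersized trailing chunks
        chunks.append(" ".join(current))
    return chunks
```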
#### **Dataset Structure**
| Column | Description |
|---------------|------------|
| `original_index` | The index of the original text before chunking |
| `chunk_index` | The index of the chunk within the original text |
| `text_chunk` | The chunked text segment (approx. 400 words) |
| `word_count` | Number of words in the chunk |
#### **Statistics**
| Statistic | `text_chunk` (Word Count) |
|:----------:|:-----------------------:|
| **Mean** | ~400 |
| **Min** | 360 |
| **Max** | 400 |
This dataset is ideal for **extractive summarization** and **document-level processing**, ensuring that longer texts are handled efficiently in NLP tasks.
### `latin-literature-dataset-170M_raw_cleaned_chunked_summaries_graded.csv`
This dataset contains **chunked Latin literary texts** from `latin-literature-dataset-170M_raw_cleaned_chunked.csv`, with **machine-generated extractive summaries** graded on a **0-100** scale by the Mistral-7B-Instruct-v0.3 model (a reproduction sketch follows the grading criteria below).
#### **Dataset Structure**
| Column | Description |
|--------|------------|
| `original_index` | The index of the original text before chunking |
| `chunk_index` | The index of the chunk within the original text |
| `text_chunk` | The chunked text segment (approx. 400 words) |
| `word_count` | Number of words in the chunk |
| `extractive_summary` | Machine-generated summary of the chunk |
| `grade` | Quality score of the summary (0-100) |
#### **Grading Methodology**
- **Logical Coherence & Structural Flow (50 points)**
- Does the summary present information clearly and in a logical order?
- Are ideas connected smoothly?
- Is the summary understandable without referring to the original text?
- **Content Accuracy & Depth (50 points)**
- Does the summary preserve essential details?
- Is it concise yet informative?
- Does it properly represent the original text?
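A sketch of how such grading might be reproduced with the same model through the `transformers` pipeline; the exact prompt used to produce the `grade` column is not published, so the wording below is an assumption:
```python
from transformers import pipeline

# Prompt wording is an assumption, not the original grading prompt.
grader = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")

def grade_summary(chunk: str, summary: str) -> str:
    prompt = (
        "Grade the following summary of a Latin text on a 0-100 scale. "
        "Award up to 50 points for logical coherence and structural flow, "
        "and up to 50 points for content accuracy and depth. "
        "Reply with the numeric grade only.\n\n"
        f"Text:\n{chunk}\n\nSummary:\n{summary}\n\nGrade:"
    )
    out = grader(prompt, max_new_tokens=8, return_full_text=False)
    return out[0]["generated_text"].strip()
```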
#### **Grading Statistics**
| Statistic | Grade |
|------------|-------|
| **Mean** | 52.16 |
| **Median** | 47.0 |
| **Std Dev** | 14.36 |
| **Min** | 0.0 |
| **Max** | 95.0 |
This dataset allows **evaluation of summary quality** and can be used for **automatic summarization model training**.
---
### `latin_wikipedia_cleaned.csv`
This dataset is a **cleaned** version of Latin Wikipedia articles, ensuring readability and proper structure.
#### **Dataset Structure**
| Column | Description |
|---------|------------|
| `id` | Unique identifier |
| `title` | Article title |
| `url` | Wikipedia URL |
| `text` | Cleaned Latin text |
#### **Cleaning Statistics**
- **Initial dataset shape**: (202,805, 4)
- Removed **63,002 rows** with NaN values.
- Removed **172 rows** with unwanted patterns.
- Removed **568 rows** with identical values in all columns.
- **Final dataset shape**: (139,063, 4)
#### **Text Statistics**
| Statistic | `text` (Token Count) |
|:----------:|:---------------------:|
| **Mean** | 79.74 |
| **Std Dev** | 186.95 |
| **Min** | 1 |
| **Max** | 10,027 |
### `Elsa_aligned/`
This directory contains aligned Latin-English texts from **classical authors**, primarily from **Cicero, Suetonius, and Vitruvius**. Each text is carefully aligned at the sentence level for NLP tasks.
#### **Sources & Structure**
Each file follows the format:
| Column | Description |
|---------|------------|
| `en` | English sentence/document |
| `la` | Corresponding Latin translation |
##### **Included Texts**
- **Cicero's Orations:**
- *Against Publius Servilius Rullus*
- *On Pompey's Command*
- *Divinatio Against Q. Caecilius*
- *Orations Against Verres*
- *For Archias*
- *For Marcus Fonteius*
- *For Publius Quinctius*
- *For Quintus Roscius the Actor*
- *For Sextus Roscius of Ameria*
- *For Sulla*
- *Four Orations Against Lucius Catilina*
- *In Defense of Aulus Cluentius Habitus*
- *In Defense of Caius Rabirius*
- *In Defense of L. Murena*
- *In Defense of Lucius Flaccus*
- *On Behalf of Aulus Caecina*
- *On Behalf of Marcus Tullius*
- **Suetonius' Biographies:**
- *Divus Augustus, Divus Julius, Divus Claudius, Divus Titus*
- *Domitianus, Galba, Nero, Otho, Tiberius, Vitellius*
- *Caligula, Vespasianus*
- **Vitruvius Pollio's *On Architecture***
## **License**
This dataset is released under the **Creative Commons Attribution 4.0 (CC-BY-4.0) License**.
## **Usage Example**
You can load this dataset using Hugging Face's `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("LatinNLP/LatinSummarizer")
print(dataset["train"][0])
```
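Because the repository holds several standalone CSV files rather than a single configuration, an individual file can also be loaded with the generic `csv` builder (assuming, as the structure above suggests, that the files sit at the repository root):
```python
from datasets import load_dataset

# Load one of the CSV files listed above directly by URL.
url = ("https://huggingface.co/datasets/LatinNLP/LatinSummarizer"
       "/resolve/main/aligned_en_la_data_cleaned.csv")
aligned = load_dataset("csv", data_files=url)["train"]
print(aligned[0])
```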
## **Citation**
```bibtex
@misc{LatinSummarizer,
    author = {Axel Delaval and Elsa Lubek},
title = {Latin-English Parallel Dataset},
year = {2025},
url = {https://huggingface.co/datasets/LatinNLP/LatinSummarizer}
}
```
|
aryamankeyora/val_data_H04_chunked | aryamankeyora | 2025-03-11T22:57:47Z | 34 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:57:44Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: publication_number
dtype: string
- name: publication_title
dtype: string
- name: cpc
dtype: string
splits:
- name: train
num_bytes: 7489091
num_examples: 155
download_size: 2211875
dataset_size: 7489091
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aryamankeyora/train_dataset_H04 | aryamankeyora | 2025-03-11T22:56:20Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:56:16Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: publication_number
dtype: string
- name: publication_title
dtype: string
- name: cpc
dtype: string
splits:
- name: train
num_bytes: 30513320
num_examples: 466
download_size: 10721437
dataset_size: 30513320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
secmlr/noisy_dataset_filtered_QwQ-32B-Preview_small_train_len_16000_inputlen_5000 | secmlr | 2025-03-11T22:55:58Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-16T20:33:49Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: system
dtype: string
- name: idx
dtype: int64
- name: cwe
sequence: string
splits:
- name: train
num_bytes: 56465385
num_examples: 3960
download_size: 16505685
dataset_size: 56465385
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
secmlr/noisy_dataset_filtered_QwQ-32B-Preview_small_train_len_8000_inputlen_5000 | secmlr | 2025-03-11T22:55:46Z | 19 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-16T20:34:00Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: system
dtype: string
- name: idx
dtype: int64
- name: cwe
sequence: string
splits:
- name: train
num_bytes: 55924906
num_examples: 3945
download_size: 16371835
dataset_size: 55924906
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sunyiyou/openr1_dissim_aime2425_1k_lg | sunyiyou | 2025-03-11T22:54:03Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:53:57Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: longest_correct_generation
dtype: string
- name: length
dtype: int64
- name: id
dtype: int64
- name: max_similarity_to_aime
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 155857157
num_examples: 1000
download_size: 64252536
dataset_size: 155857157
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
swzwan/instruction_tuning_50k | swzwan | 2025-03-11T22:36:52Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T22:36:50Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 74137760
num_examples: 50000
download_size: 34896234
dataset_size: 74137760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|