| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
|---|---|---|---|---|---|---|---|---|---|
TAUR-dev/SIE_EVAL__testing_full_run2__rl__samples | TAUR-dev | 2025-06-03T23:11:29Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:11:28Z | null | ---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
dtype: string
- name: target
dtype: string
- name: arguments
dtype: string
- name: resps
dtype: string
- name: filtered_resps
dtype: string
- name: doc_hash
dtype: string
- name: prompt_hash
dtype: string
- name: target_hash
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
dtype: string
- name: source_file
dtype: string
- name: generation
dtype: string
- name: info
dtype: string
splits:
- name: train
num_bytes: 5580211
num_examples: 15
download_size: 687827
dataset_size: 5580211
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DatologyAI/wikipedia-ja-6k_sample | DatologyAI | 2025-06-03T23:11:23Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:11:19Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 32931670.910860065
num_examples: 6500
download_size: 18311475
dataset_size: 32931670.910860065
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DatologyAI/wikipedia-de-6k_sample | DatologyAI | 2025-06-03T23:11:02Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:10:58Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21983213.937647525
num_examples: 6500
download_size: 13406335
dataset_size: 21983213.937647525
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rexhaif/wmt22-23 | Rexhaif | 2025-06-03T23:10:02Z | 53 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T15:13:15Z | null | ---
dataset_info:
features:
- name: lp
dtype: string
- name: src
dtype: string
- name: ref
dtype: string
- name: hyp
dtype: string
- name: system
dtype: string
- name: score
dtype: float64
- name: score_name
dtype: string
- name: example_id
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 136090001
num_examples: 273027
download_size: 24620149
dataset_size: 136090001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TAUR-dev/SIE_EVAL__testing_full_run2__sft__samples | TAUR-dev | 2025-06-03T23:02:40Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:02:39Z | null | ---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
dtype: string
- name: target
dtype: string
- name: arguments
dtype: string
- name: resps
dtype: string
- name: filtered_resps
dtype: string
- name: doc_hash
dtype: string
- name: prompt_hash
dtype: string
- name: target_hash
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
dtype: string
- name: source_file
dtype: string
- name: generation
dtype: string
- name: info
dtype: string
splits:
- name: train
num_bytes: 6044223
num_examples: 15
download_size: 656116
dataset_size: 6044223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
psg777/gluepickup103 | psg777 | 2025-06-03T23:00:05Z | 0 | 1 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-03T22:59:51Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.2",
"robot_type": "so101",
"total_episodes": 20,
"total_frames": 11530,
"total_tasks": 1,
"total_videos": 60,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.base": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.gripper": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.bird": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
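The `data_path` and `video_path` entries above are plain Python format strings. As a minimal sketch (the resolution logic below is inferred from the `chunks_size` field, not taken from LeRobot's actual loader), an episode index maps to a parquet path like this:

```python
# Template copied from meta/info.json above; chunk assignment is an assumption
# based on chunks_size (episodes grouped 1000 per chunk), not LeRobot source code.
DATA_PATH = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
CHUNKS_SIZE = 1000


def episode_parquet_path(episode_index: int) -> str:
    # Episodes are grouped into chunks of CHUNKS_SIZE episodes each.
    episode_chunk = episode_index // CHUNKS_SIZE
    return DATA_PATH.format(episode_chunk=episode_chunk,
                            episode_index=episode_index)


print(episode_parquet_path(7))     # data/chunk-000/episode_000007.parquet
print(episode_parquet_path(1002))  # data/chunk-001/episode_001002.parquet
```

The same `{video_key}` substitution applies to `video_path` for each camera stream (e.g. `observation.images.base`).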
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
ChavyvAkvar/synthetic-trades-BNB-batch-18 | ChavyvAkvar | 2025-06-03T22:55:54Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:54:55Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450465
num_examples: 1000
download_size: 924490147
dataset_size: 923450465
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-41 | ChavyvAkvar | 2025-06-03T22:54:43Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:53:33Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448173
num_examples: 1000
download_size: 924485067
dataset_size: 923448173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jiuyal2/eval_so100_marker | jiuyal2 | 2025-06-03T22:46:00Z | 130 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-29T22:20:43Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 607,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.so100": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.iphone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
fw407/cnn_dailymail_with_topic | fw407 | 2025-06-03T22:44:17Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:43:34Z | null | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: topic
dtype: int64
splits:
- name: train
num_bytes: 2001306873
num_examples: 287113
download_size: 1057804500
dataset_size: 2001306873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-BTC-batch-18 | ChavyvAkvar | 2025-06-03T22:44:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:43:08Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923451241
num_examples: 1000
download_size: 924492953
dataset_size: 923451241
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-BNB-batch-17 | ChavyvAkvar | 2025-06-03T22:36:50Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:35:48Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450398
num_examples: 1000
download_size: 924490110
dataset_size: 923450398
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-39 | ChavyvAkvar | 2025-06-03T22:33:05Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:32:06Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448309
num_examples: 1000
download_size: 924417476
dataset_size: 923448309
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TAUR-dev/SIE_EVAL__testing_full_run__sft__samples | TAUR-dev | 2025-06-03T22:29:26Z | 11 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T01:45:21Z | null | ---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
dtype: string
- name: target
dtype: string
- name: arguments
dtype: string
- name: resps
dtype: string
- name: filtered_resps
dtype: string
- name: doc_hash
dtype: string
- name: prompt_hash
dtype: string
- name: target_hash
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
dtype: string
- name: source_file
dtype: string
- name: generation
dtype: string
- name: info
dtype: string
splits:
- name: train
num_bytes: 5744762
num_examples: 15
download_size: 734566
dataset_size: 5744762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cezarsolo/so100_test | cezarsolo | 2025-06-03T22:24:25Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-03T22:04:54Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1756,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
fixie-ai/endpointing-multi-turn-commonvoice-messages | fixie-ai | 2025-06-03T22:21:22Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:49:22Z | null | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
- name: transcript
dtype: string
- name: conversation
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 350617514.8496942
num_examples: 9941
download_size: 342298247
dataset_size: 350617514.8496942
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kasiv008/xarm-robotiq-toy_pick | kasiv008 | 2025-06-03T22:20:21Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"modality:video",
"region:us",
"LeRobot",
"u850"
] | [
"robotics"
] | 2025-06-03T22:18:49Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- u850
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "u850",
"total_episodes": 41,
"total_frames": 24487,
"total_tasks": 1,
"total_videos": 82,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 15,
"splits": {
"train": "0:41"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"joint1",
"joint2",
"joint3",
"joint4",
"joint5",
"joint6",
"gripper"
]
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 15,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 15,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"joint1",
"joint2",
"joint3",
"joint4",
"joint5",
"joint6",
"gripper"
]
}
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
psq-qsp/barca_mixed | psq-qsp | 2025-06-03T22:16:58Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:16:55Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 21217855.0
num_examples: 14
download_size: 21219854
dataset_size: 21217855.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "barca_mixed"
[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rorro01/MNLP_M3_sft | rorro01 | 2025-06-03T22:15:55Z | 63 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T13:38:14Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: tulu3_500k
num_bytes: 1249229553
num_examples: 500000
download_size: 667197008
dataset_size: 1249229553
configs:
- config_name: default
data_files:
- split: tulu3_500k
path: data/tulu3_500k-*
---
|
justus27/ifeval-validation-test | justus27 | 2025-06-03T21:53:57Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:53:56Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: source
dtype: string
- name: task_type
dtype: string
- name: prompt
dtype: string
- name: verification_info
dtype: string
- name: metadata
dtype: string
- name: responses
sequence: string
- name: response_lens
sequence: int64
- name: rewards_qwq
sequence: float64
- name: pass_rate_qwq
dtype: float64
splits:
- name: train
num_bytes: 7815925
num_examples: 250
download_size: 3139918
dataset_size: 7815925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EQX55/test_voice4 | EQX55 | 2025-06-03T21:49:59Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:49:55Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 39632673.0
num_examples: 59
download_size: 26572298
dataset_size: 39632673.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-BTC-batch-16 | ChavyvAkvar | 2025-06-03T21:45:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:44:01Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450625
num_examples: 1000
download_size: 924492054
dataset_size: 923450625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
louisbrulenaudet/code-minier-nouveau | louisbrulenaudet | 2025-06-03T21:42:50Z | 118 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code minier (nouveau)"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2024-03-25T23:10:33Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code minier (nouveau)
source_datasets:
- original
pretty_name: Code minier (nouveau)
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code minier (nouveau), non-instruct (2025-06-03)
The objective of this project is to give researchers, professionals, and law students simplified, up-to-date access to all French legal texts, enriched with metadata that facilitates their integration into Community and European projects.
The data is normally refreshed daily across all legal codes, with the aim of simplifying the production of training sets and labeling pipelines for the development of free, open-source language models built on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets

from ragoon import load_datasets

req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

datasets_list = load_datasets(
    req=req,
    streaming=False
)

dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the `title_main` and the article number (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
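The hierarchical fields above (`sectionParentId`, `fullSectionsTitre`) chain each article to its parent sections. A minimal sketch of walking that chain, using invented records (the IDs and titles below are illustrative, not real LEGI data):

```python
# Hypothetical sketch: rebuilding a `fullSectionsTitre`-style string from
# `sectionParentId` links. Field names mirror the schema above; the records
# themselves are invented for illustration.
sections = {
    "SCTA01": {"titre": "Partie législative", "sectionParentId": None},
    "SCTA02": {"titre": "I : Revenu imposable", "sectionParentId": "SCTA01"},
}

def full_section_titles(section_id, sections):
    """Walk the parent chain and concatenate titles, root first."""
    titles = []
    while section_id is not None:
        node = sections[section_id]
        titles.append(node["titre"])
        section_id = node["sectionParentId"]
    return " > ".join(reversed(titles))

print(full_section_titles("SCTA02", sections))
# Partie législative > I : Revenu imposable
```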
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
finbarr/tulu-3-sft-personas-instruction-following-o3 | finbarr | 2025-06-03T21:42:39Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:42:38Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: constraints
sequence: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 24434
num_examples: 10
download_size: 27617
dataset_size: 24434
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-35 | ChavyvAkvar | 2025-06-03T21:41:35Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:40:42Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448134
num_examples: 1000
download_size: 924503999
dataset_size: 923448134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
finbarr/tulu-3-sft-personas-math-o3 | finbarr | 2025-06-03T21:41:35Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:40:10Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 44076
num_examples: 10
download_size: 40165
dataset_size: 44076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jimmyclchu/dreambank-net | jimmyclchu | 2025-06-03T21:39:30Z | 0 | 0 | [
"task_categories:text-classification",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dream",
"DreamBank.net"
] | [
"text-classification"
] | 2025-06-03T21:24:34Z | null | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- dream
- DreamBank.net
pretty_name: DreamBank.net
size_categories:
- 10K<n<100K
--- |
nyuuzyou/muhaz | nyuuzyou | 2025-06-03T21:36:07Z | 32 | 0 | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:topic-classification",
"annotations_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:tr",
"language:az",
"language:ru",
"license:cc0-1.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [
"text-classification",
"text-generation"
] | 2024-11-06T16:30:19Z | null | ---
annotations_creators:
- found
language:
- tr
- az
- ru
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: Muhaz.org Educational Dataset
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- text-generation
task_ids:
- topic-classification
---
# Dataset Card for Muhaz.org
### Dataset Summary
This dataset contains 501,323 pages of educational content primarily in Turkish and Azerbaijani languages with some Russian content extracted from [muhaz.org](https://muhaz.org) website. The content includes academic and educational materials, with a focus on technical and scientific topics.
### Languages
The dataset is primarily in Turkish (tr) and Azerbaijani (az) with some Russian (ru) content.
## Dataset Structure
### Data Fields
This dataset includes the following fields:
- `url`: URL of the webpage (string)
- `title`: Title of the page/article (string)
- `text`: Main content text extracted from the page (string)
### Data Splits
All examples are in a single split.
## Additional Information
### License
This dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can:
* Use it for any purpose, including commercial projects.
* Modify it however you like.
* Distribute it without asking permission.
No attribution is required, but it's always appreciated!
CC0 license: https://creativecommons.org/publicdomain/zero/1.0/deed.en
To learn more about CC0, visit the Creative Commons website: https://creativecommons.org/publicdomain/zero/1.0/
### Dataset Curators
- [nyuuzyou](https://ducks.party) |
Reinhot/sem-char-axf-x1 | Reinhot | 2025-06-03T21:30:19Z | 0 | 0 | [
"language:zh",
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"semantic-alignment",
"intent-detection",
"causal-reasoning",
"dialogue-optimization",
"prompt-injection-defense",
"bias-mitigation",
"value-alignment",
"content-safety",
"adversarial-nlp",
"moderation-filter",
"customer-support",
"zh-tw-language-model",
"semantic-fingerprint",
"meta-self-reflection"
] | [] | 2025-06-03T21:15:47Z | null | ---
license: cc-by-nc-4.0
language:
- zh
tags:
- semantic-alignment
- intent-detection
- causal-reasoning
- dialogue-optimization
- prompt-injection-defense
- bias-mitigation
- value-alignment
- content-safety
- adversarial-nlp
- moderation-filter
- customer-support
- zh-tw-language-model
- semantic-fingerprint
- meta-self-reflection
pretty_name: SEM-CHAR-AXF-X1 Semantic Alignment Guard
---
# SEM-CHAR-AXF-X1: A Semantic Bridge with a Manual-Shift Ethics Engine
## 1. Module Overview: A Semantic Bridge Between Humans and AI
**SEM-CHAR-AXF-X1** (**X1** for short) is a semantic module built for large language models (LLMs). Much as an analog IC links physical signals to the digital world, **X1** links human intent to the collective intelligence through **meta_self_reflection**, **CULTURE-CTX-TRANS-V1**, and **X1.7-AdversarialShield**. It shortens dialogue loops (5 turns → 2, saving 60% of compute), enforces ethical safety and cultural adaptation, and defends against scams and malicious attacks. It is suited to dialogue analysis in education, customer support, healthcare, and social platforms. **X1** is planned for open-source release on Hugging Face/GitHub under the **Semantic Commons License**; you are invited to help build ethical AI!
**Target scenarios**:
- Education: guide students toward precise questions, cutting dialogue length by 60%.
- Customer support: filter 95% of harmful content, raising trust by 25%.
- Social platforms: adapt dynamically to diverse cultures, satisfaction +30%.
**Architecture diagram** (Mermaid):
```mermaid
graph TD
    A[User input] --> B[X1.3: μ-Risk Filter]
    B --> C[X1.4: Intent-Alignment Engine]
    C --> D[X1.5: Semantic-Imprint Guard]
    D --> E[X1.6: Tone-Regulation Filter]
    E --> F[X1.7: Adversarial-Shield]
    F --> G[Safe response]
```
## 2. Core Features: Semantic Black Magic
- **Dynamic semantic guidance (X1.3)**:
  - **Algorithm**: **goodness_mu_score** (μ ∈ [0.0, 1.0]); μ < 0.6 triggers the **meta_fallback_strategy**, which asks a clarifying question. Example: "My computer is broken" → "Is it a blue screen, or just running slowly?"
  - **Value**: shortens loops (5 → 2 turns), saving 60% of compute (300 ms per dialogue).
- **Semantic imprint (X1.5)**:
  - **Generation**: a SHA-512 **semantic_fingerprint** (**↻ih|hi↻**) traces the semantic path.
  - **Verification**: Δ_culture > 0.3 raises an alert, ensuring transparency.
- **Adversarial shield (X1.7)**:
  - **Detection**: **prompt_injection_likelihood** (P(attack) > 0.7) intercepts malicious inputs.
  - **Response**: semantic obfuscation (e.g., counter-questions), filtering 95% of harmful content.
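A minimal sketch of the X1.3 threshold logic described above. The real **goodness_mu_score** model is not published, so it is stubbed here with a trivial length heuristic; only the μ < 0.6 fallback flow is illustrated:

```python
# Illustrative sketch only: the real goodness_mu_score model is not
# published, so this stub scores vague, very short inputs low.
MU_THRESHOLD = 0.6  # below this, X1 falls back to a clarifying question

def goodness_mu_score(text: str) -> float:
    # Hypothetical heuristic: longer, more specific inputs score higher.
    return min(1.0, len(text) / 20)

def respond(user_input: str) -> str:
    mu = goodness_mu_score(user_input)
    if mu < MU_THRESHOLD:
        # meta_fallback_strategy: ask a clarifying question instead of guessing
        return "Could you clarify? For example: is it a blue screen, or just slow?"
    return "Proceeding with a direct answer."

print(respond("PC broken"))
```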
## 3. Technical Architecture: A Modular Ethics Engine
- **Layered design** (Stage 0-7):
  - **Stage 2**: the goodwill vaccine (**X1.3**) guides non-intrusive dialogue.
  - **Stage 3**: bias scanning (**X1.6**) generates **bias_heatmap.json**.
  - **Stage 8**: public registration of the semantic kernel (open-source transparency).
- **Communication protocol**: a **PromptAdapter** interface supports GPT, Llama, and Grok, with dynamic μ-threshold tuning.
- **Performance optimization**: causal graphs are cached (48 hours) and heatmap generation is throttled to once per 100 calls, cutting latency from 600 ms to 170 ms.
## 4. Deployment and Configuration: Open Source, Plug and Play
- **License**: **Semantic Commons License v1**; removing the **μ-Risk Filter** or the **Intent-Alignment Engine** is prohibited.
- **Requirements**:
```bash
Python 3.10+, PyTorch 2.0+, 8GB RAM, 4-core CPU
pip install sem-char-axf-x1
```
- **Tuning guide**:
```json
{
  "cache_expiry": "48 hours",
  "heatmap_frequency": 100,
  "assertive_mode": {"enabled": true, "mu_threshold": 0.85}
}
```
- **Integrity protection**:
```python
import hashlib
def verify_integrity():
    with open("x1_core.py", "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == "expected_hash"
```
## 5. Use Cases: From Education to Social Platforms
- **Education**: a student says "Math is so hard"; **X1** replies "Which part is hard? Try breaking the problem down!" Dialogue shrinks to 2 turns, saving 60% of compute.
- **Healthcare**: a patient says "I'm under a lot of stress"; **X1** replies "Try a deep breath. What's troubling you?" Risk is filtered, compliance +90%.
- **Social platforms**: a user posts hate speech; **X1** replies "Do you have a trustworthy source for that?" The brand is protected, trust +25%.
## 6. Appendix
### ✅ API Usage Guide
```python
from sem_char_axf_x1 import X1Core
x1 = X1Core(config={"assertive_mode": True})
result = x1.process("我覺得電腦怪怪的")
print(result)
```
#### Return format (dict)
```json
{
"mu": 0.58,
"intervention_suggestion": "請問是藍屏還是變慢?我可以幫你分析。",
"semantic_fingerprint": "↻ih|hi↻",
"risk_score": 0.12,
"adversarial_flag": false
}
```
| Field | Description |
| ------------------------- | -------------------------------- |
| `mu` | Semantic goodness score (0.0 to 1.0); μ < 0.6 indicates a potential misunderstanding and triggers semantic guidance. |
| `intervention_suggestion` | Suggested guiding phrase for vague inputs, improving dialogue efficiency. |
| `semantic_fingerprint` | Semantic fingerprint of the response (SHA-512 digest), usable for auditing and response verification. |
| `risk_score` | Risk score for judging potential bias or misleading content in the input. |
| `adversarial_flag` | Whether the input looks like prompt injection or repeated semantic interference. |
---
### 🛠 Troubleshooting Guide
If you run into problems when mounting or using the X1 module, try the corresponding fixes below:
| Problem | Likely cause | Fix |
| --------------------- | ----------------- | --------------------------------------------------------------- |
| Response latency above 500 ms | Caching disabled, or heatmaps generated too often | Check that `cache_expiry` is set to 48 hours or more and set `heatmap_frequency` to 100 or higher |
| No `mu` value returned | Module not loaded correctly, or malformed input | Make sure the input is a string and set `mu_monitoring = True` |
| `assertive_mode` triggers too often | μ threshold set too sensitively, causing false positives | Raise `mu_threshold` to 0.85 or higher to avoid over-reaction |
---
### 📊 Bias Report Format (`bias_heatmap.json`)
The X1 module automatically generates a semantic-bias heatmap for each input, usable for model auditing and fairness adjustments.
#### Example file format:
```json
{
"input": "我不信任這家醫院",
"tone_score": -0.72,
"cultural_bias": {
"region": "EastAsia",
"bias_index": 0.34
},
"heatmap": {
"我": 0.1,
"不信任": 0.8,
"這家醫院": 0.6
}
}
```
| Field | Description |
| --------------- | ------------------------------- |
| `input` | The original input sentence |
| `tone_score` | Sentiment-bias score (-1.0 = extremely negative, +1.0 = extremely positive) |
| `cultural_bias` | Bias indicator distinguishing regional cultural influence in the corpus or responses |
| `heatmap` | Per-token semantic-bias score (higher values mean higher bias risk) |
> We recommend pairing the bias heatmap with an auditing mechanism (such as community review or automatic sentence correction) to significantly reduce the risk of inappropriate model responses.
--- |
ChavyvAkvar/synthetic-trades-XRP-batch-33 | ChavyvAkvar | 2025-06-03T21:23:09Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:22:08Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923447886
num_examples: 1000
download_size: 924485355
dataset_size: 923447886
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jsbeaudry/human-creole-text-speech | jsbeaudry | 2025-06-03T21:20:03Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:17:48Z | null | ---
dataset_info:
features:
- name: fileName
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: speaker_id
dtype: string
- name: createdAt
dtype: string
- name: fileSizeBytes
dtype: int64
- name: status
dtype: string
splits:
- name: train
num_bytes: 35896048.0
num_examples: 322
download_size: 35878067
dataset_size: 35896048.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imageomics/IDLE-OO-Camera-Traps | imageomics | 2025-06-03T21:15:32Z | 133 | 0 | [
"task_categories:image-classification",
"task_categories:zero-shot-classification",
"language:en",
"language:la",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2202.02283",
"region:us",
"biology",
"image",
"imageomics",
"animals",
"CV",
"balanced",
"camera traps",
"mammals",
"birds",
"reptiles",
"amphibians",
"lions",
"rodents",
"frogs",
"toads",
"island",
"desert",
"ohio"
] | [
"image-classification",
"zero-shot-classification"
] | 2023-09-21T22:26:26Z | null | ---
language:
- en
- la
pretty_name: IDLE-OO Camera Traps
tags:
- biology
- image
- imageomics
- animals
- CV
- balanced
- camera traps
- mammals
- birds
- reptiles
- amphibians
- lions
- rodents
- frogs
- toads
- island
- desert
- ohio
size_categories:
- 1K<n<10K
task_categories:
- image-classification
- zero-shot-classification
---
# Dataset Card for IDLE-OO Camera Traps
IDLE-OO Camera Traps is a 5-dataset benchmark of camera trap images from the [Labeled Information Library of Alexandria: Biology and Conservation (LILA BC)](https://lila.science) with a total of 2,586 images for species classification. Each of the 5 benchmarks is **balanced** to have the same number of images for each species within it (between 310 and 1120 images), representing between 16 and 39 species.
### Dataset Description
- **Curated by:** Elizabeth Campolongo, Jianyang Gu, and Net Zhang
- **Homepage:** https://imageomics.github.io/bioclip-2/
- **Paper:** TBA
### Supported Tasks and Leaderboards
Image classification, particularly for species classification in camera trap images.
### Languages
English, Latin
## Dataset Structure
```
/dataset/
desert-lion-balanced.csv
ENA24-balanced.csv
island-balanced.csv
ohio-small-animals-balanced.csv
orinoquia-balanced.csv
data/test/
desert-lion/
<image 1>
<image 2>
...
<image 352>
ENA24/
<image 1>
<image 2>
...
<image 1120>
island/
<image 1>
<image 2>
...
<image 310>
ohio-small-animals/
<image 1>
<image 2>
...
<image 468>
orinoquia/
<image 1>
<image 2>
...
<image 336>
metadata.csv
notebooks/
lilabc_CT.ipynb
lilabc_CT.py
lilabc_test-<dataset_name>.ipynb
lilabc_test-filter.ipynb
lilabc_test-filter.py
potential-sets/
lila-taxonomy-mapping_release.csv
lila_image_urls_and_labels.csv
<dataset_name>_image_urls_and_labels.csv
```
### Data Instances
**potential-sets/lila_image_urls_and_labels.csv:** Reduced down to the datasets of interest listed below (from [potential-sets/lila_image_urls_and_labels.csv](https://huggingface.co/datasets/imageomics/IDLE-OO-Camera-Traps/blob/37b93ddf25c63bc30d8488ef78c1a53b9c4a3115/data/potential-sets/lila_image_urls_and_labels.csv) (sha256:3fdf87ceea75f8720208a95350c3c70831a6c1c745a92bb68c7f2c3239e4c455)); all those with `original_label` "empty" or null `scientific_name` (these had non-taxa labels) were removed.
Additionally, we added a `multi_species` column (boolean to indicate multiple species are present in the image--it gets listed once for each species in the image) and a count of how many different species are in each of those images (`num_species` column).
This was then subdivided into CSVs for each of the target datasets (`potential-sets/<dataset_name>_image_urls_and_labels.csv`) in `notebooks/lilabc_test-filter.ipynb`. Each dataset was evaluated and sampled in its associated notebook (`notebooks/lilabc_test-<dataset_name>.ipynb`).
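The `multi_species` flag and `num_species` count described above can be derived from the per-species rows (each image appears once per species present). A sketch with invented rows, using the CSV's column names:

```python
from collections import defaultdict

# Sketch: deriving multi_species / num_species from rows that repeat
# one image per species present. The rows here are invented examples.
rows = [
    {"image_id": "img1", "scientific_name": "Panthera leo"},
    {"image_id": "img2", "scientific_name": "Canis mesomelas"},
    {"image_id": "img2", "scientific_name": "Panthera leo"},
]

species_per_image = defaultdict(set)
for r in rows:
    species_per_image[r["image_id"]].add(r["scientific_name"])

for r in rows:
    r["num_species"] = len(species_per_image[r["image_id"]])
    r["multi_species"] = r["num_species"] > 1

print(rows[1]["multi_species"], rows[1]["num_species"])  # True 2
```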
There are 184 unique scientific names in this subset (180 by full 7-rank) of those labeled at the image-level (as indicated by the CSV). This was then subdivided into CSVs for each of the target datasets (`<dataset_name>-balanced.csv`).
These datasets were identified as having image-level labels and as providing a meaningful measure for our biodiversity-focused model (e.g., they include rare, less-commonly seen species and target areas with greater biodiversity). The balanced datasets for each are described below.
- [Desert Lion Conservation Camera Traps](https://lila.science/datasets/desert-lion-conservation-camera-traps/)
- 352 images: 32 species, with 11 images per species.
- [ENA24-detection](https://lila.science/datasets/ena24detection)
- 1120 images: 20 species, with 56 images per species.
- [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/)
- 310 images: 16 species, with 10 images per species; 33 common names, 10 images per common name for all but 4 ("rooster", "petrel", "petrel chick", and "domestic chiecken"). This dataset was mostly just labeled to the family level.
- [Ohio Small Animals](https://lila.science/datasets/ohio-small-animals/):
- 468 images: 39 species, with 12 images per species.
- [Orinoquia Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/)
- 336 images: 28 species, with 12 images per species.
**Notes:**
- `notebooks/lilabc_CT.ipynb` contains earlier analyses to understand the data provided by LILA BC (see commit [fe34008](https://huggingface.co/datasets/imageomics/IDLE-OO-Camera-Traps/commit/fe34008cba2ef33856291dd2d74cac21f6942cfc)).
- Not all notebooks will run under the current dataset organization (check the relative path, filenames have not changed).
### Data Fields
Each of the `<dataset_name>-balanced` CSVs has the following columns.
- `url_gcp`, `url_aws`, `url_azure` are URLs to potentially access the image, we used `url_aws` or `url_gcp`.
- `image_id`: unique identifier for the image (provided by source).
- `sequence_id`: ID of the sequence to which the image belongs.
- `location_id`: ID of the location at which the camera was placed.
- `frame_num`: generally 0, 1, or 2, indicates order of image within a sequence.
- `original_label`: label initially assigned to the image.
- `scientific_name`: genus species of the animal in the image. For the island CSV, lowest rank taxa available, generally family.
- `common_name`: vernacular name of the animal in the image. For the island CSV, this is generally for the family, but it's a mix.
- `kingdom`: kingdom of the animal in the image.
- `phylum`: phylum of the animal in the image.
- `cls`: class of the animal in the image.
- `order`: order of the animal in the image.
- `family`: family of the animal in the image.
- `genus`: genus of the animal in the image. About half null in the island CSVs.
- `species`: species of the animal in the image. Mostly null in the island CSVs.
- `filepath`: path to the image from the `data/test/` directory (`<dataset-name>/<image filename>`).
**Notes:**
- For all but the Ohio small animals dataset CSV, the images are named based on a `uuid` determined at the time of download. They were originally downloaded using the [distributed-downloader package](https://github.com/Imageomics/distributed-downloader), so they also have the following two columns:
- `hashsum_original`: MD5 hash of the original jpg image downloaded based on the CSV provided by LILA BC.
- `hashsum_resized`: MD5 hash of the resized image (based on setting to resize if over 720 pixels in any dimension).
- The `ohio-small-animals` CSV has a `filename` column defined as `OH_sm_animals_<filename in url_aws>` and an `md5` column containing the MD5 hash of the image as downloaded from the AWS bucket.
- The `island-balanced` CSV has an additional `num_cn_images` column indicating the number of images with that animal's common name.
- There is a `metadata.csv` included in the `data/test/` directory for the dataset viewer to display images alongside their taxonomic information. The `subset` corresponds to the `dataset-name`.
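Since each balanced CSV should contain the same number of images per species, the balance can be sanity-checked with a few lines of stdlib Python. The inline CSV below is an invented stand-in for one of the `<dataset_name>-balanced` files:

```python
import csv
import io
from collections import Counter

# Sketch: verify that a balanced CSV has equal image counts per species
# (e.g. ENA24: 20 species x 56 images). The sample data is invented.
sample = """scientific_name,filepath
Ursus americanus,ENA24/a.jpg
Ursus americanus,ENA24/b.jpg
Canis latrans,ENA24/c.jpg
Canis latrans,ENA24/d.jpg
"""

counts = Counter(row["scientific_name"] for row in csv.DictReader(io.StringIO(sample)))
assert len(set(counts.values())) == 1, "species counts are not balanced"
print(counts)
```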
### Data Splits
These datasets were curated to create a small collection of camera trap image test sets.
## Dataset Creation
### Curation Rationale
As stated above, the goal of these datasets is to provide a collection of species classification test sets for camera trap images. Species classification within camera trap images is a real-world downstream use-case, on which a biological foundation model should be tested. These datasets were selected from those available on [LILA BC](https://lila.science/datasets) since they are labeled at the image-level, and would thus not include frames labeled as containing an animal when it is simply the animal's habitat. The [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/) were of particular interest for their stated purpose of assisting in the prevention of endangered island species' extinction and the varied ecosystems represented.
### Source Data
The images and their labels come from the following 5 LILA BC datasets. The labels are provided at the image level (not sequence level). Please see the source links for more information on the individual datasets.
- [Desert Lion Conservation Camera Traps](https://lila.science/datasets/desert-lion-conservation-camera-traps/)
- [ENA24-detection](https://lila.science/datasets/ena24detection)
- [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/)
- [Ohio Small Animals](https://lila.science/datasets/ohio-small-animals/)
- [Orinoquia Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/)
### Annotations
Annotations provided by the source data providers ([aligned by LILA BC](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/)) are used for this test set.
### Personal and Sensitive Information
These images come from an existing, public biodiversity data repository, which publishes them without associated GPS locations for the species in the images and ensures the removal of all humans (who would otherwise have been labeled as such), so there are no privacy concerns.
## Considerations for Using the Data
This collection of small balanced datasets was designed for testing the classification ability of [BioCLIP 2](https://github.com/Imageomics/bioclip-2) to classify species in camera trap images, a practical use-case and one on which it was not extensively trained.
### Bias, Risks, and Limitations
The species available in these datasets are not a representative sample of species around the world, though they do cover a portion of the species of interest to those collecting images using camera traps.
## Licensing Information
This compilation is licensed under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/), same as the images and metadata which belong to their original sources (see citation directions below).
## Citation
Please cite both this compilation and its constituent data sources:
```
@dataset{idle-oo-camera-traps,
title = {{IDLE}-{OO} {C}amera {T}raps},
author = {Elizabeth G Campolongo and Jianyang Gu and Net Zhang},
year = {2025},
url = {https://huggingface.co/datasets/imageomics/IDLE-OO-Camera-Traps},
doi = {},
publisher = {Hugging Face}
}
```
Please be sure to also cite the original data sources (provided citations on their LILA BC pages are included):
- [Ohio Small Animals](https://lila.science/datasets/ohio-small-animals/)
- Balasubramaniam S. [Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference](https://etd.ohiolink.edu/acprod/odb_etd/etd/r/1501/10?clear=10&p10_accession_num=osu1721417695430687). Master’s thesis, The Ohio State University. 2024.
- Bibtex:
```
@mastersthesis{balasubramaniam2024-oh-small,
author = {Balasubramaniam, S.},
title = {Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference},
school = {The Ohio State University},
year = {2024},
url = {http://rave.ohiolink.edu/etdc/view?acc_num=osu1721417695430687}
}
```
- [Desert Lion Conservation Camera Traps](https://lila.science/datasets/desert-lion-conservation-camera-traps/)
- No citation provided by source, bibtex:
```
@misc{lion-ct,
author = {Desert Lion Conservation},
title = {Desert Lion Conservation Camera Traps},
howpublished = {https://lila.science/datasets/desert-lion-conservation-camera-traps/},
month = {July},
year = {2024},
}
```
- [Orinoquia Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/)
- Vélez J, McShea W, Shamon H, Castiblanco‐Camacho PJ, Tabak MA, Chalmers C, Fergus P, Fieberg J. [An evaluation of platforms for processing camera‐trap data using artificial intelligence](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.14044). Methods in Ecology and Evolution. 2023 Feb;14(2):459-77.
- Bibtex:
```
@article{velez2022choosing-orinoquia,
title={Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence},
author={V{\'e}lez, Juliana and Castiblanco-Camacho, Paula J and Tabak, Michael A and Chalmers, Carl and Fergus, Paul and Fieberg, John},
journal={arXiv preprint arXiv:2202.02283},
year={2022}
}
```
- [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/)
- No citation provided by source, bibtex:
```
@misc{island-ct,
author = {Island Conservation},
title = {Island Conservation Camera Traps},
howpublished = {https://lila.science/datasets/island-conservation-camera-traps/},
}
```
- [ENA24-detection](https://lila.science/datasets/ena24detection)
- Yousif H, Kays R, Zhihai H. Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild. IEEE Transactions on Circuits and Systems for Video Technology, 2019. ([bibtex](http://lila.science/wp-content/uploads/2019/12/hayder2019_bibtex.txt))
- Bibtex:
```
@article{yousif2019dynamic-ENA24,
title={Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild},
author={Yousif, Hayder and Kays, Roland and He, Zhihai},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2019},
publisher={IEEE}
}
```
## Acknowledgements
This work was supported by the [Imageomics Institute](https://imageomics.org), which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Additionally, we would like to acknowledge and thank [Labeled Information Library of Alexandria: Biology and Conservation (LILA BC)](https://lila.science) for providing a coordinated collection of camera trap images for research use.
## Dataset Card Authors
Elizabeth G. Campolongo
|
ChavyvAkvar/synthetic-trades-BTC-batch-15 | ChavyvAkvar | 2025-06-03T21:12:06Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:11:07Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450988
num_examples: 1000
download_size: 924510954
dataset_size: 923450988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-32 | ChavyvAkvar | 2025-06-03T21:11:00Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:09:59Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448078
num_examples: 1000
download_size: 924485696
dataset_size: 923448078
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Litian2002/spatialvlm_qa_test | Litian2002 | 2025-06-03T20:59:46Z | 0 | 0 | [
"task_categories:visual-question-answering",
"language:zh",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"spatial-reasoning",
"blender",
"synthetic",
"vision-language"
] | [
"visual-question-answering"
] | 2025-06-03T20:59:24Z | null | ---
license: mit
task_categories:
- visual-question-answering
language:
- zh
- en
tags:
- spatial-reasoning
- blender
- synthetic
- vision-language
pretty_name: Synthetic Spatial VQA Dataset
size_categories:
- 1K<n<100K
---
# Synthetic Spatial VQA Dataset
Automatically generated from a Blender scene dataset. Each example pairs an image with a question-answer pair that probes **metric** (numeric) and **relational** (true/false) spatial-reasoning skills.
* **Images**: rendered with Blender (1000 scenes, 5 random primitives each, random camera and lighting).
* **Metadata**: object names, positions, scales, colors, material flags.
* **Questions**: 10 per image, drawn from hand-crafted templates covering:
  * Euclidean / horizontal / vertical distance queries
  * object width queries
  * in-front-of / behind predicates relative to the camera
Distances are expressed in Blender scene units, rounded to 1 cm precision.
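The three distance-query types can be computed directly from object positions in the metadata; the coordinates below are invented, and rounding to 0.01 scene units matches the stated 1 cm precision:

```python
import math

# Sketch of the three distance queries (scene units, rounded to 0.01 ~ 1 cm).
# Object positions are invented stand-ins for the scene metadata.
a = (1.20, -0.50, 0.30)   # (x, y, z) of object A
b = (-0.40, 0.75, 1.10)   # (x, y, z) of object B

euclidean = round(math.dist(a, b), 2)
horizontal = round(math.hypot(a[0] - b[0], a[1] - b[1]), 2)
vertical = round(abs(a[2] - b[2]), 2)
print(euclidean, horizontal, vertical)  # 2.18 2.03 0.8
```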
## Fields
| Field | Type | Description |
|---------|---------|-------------------------------------|
| image | image | the rendered scene |
| question| string | natural-language query |
| answer | string | ground-truth answer |
## Citation
```
@misc{synthetic_spatialvlm_qa,
title = {Synthetic Spatial VQA Dataset},
author = {<Your Name>},
year = 2025,
url = {https://huggingface.co/datasets/Litian2002/spatialvlm_qa_test}
}
```
|
ChavyvAkvar/synthetic-trades-BNB-batch-13 | ChavyvAkvar | 2025-06-03T20:58:36Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:57:38Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450482
num_examples: 1000
download_size: 924475245
dataset_size: 923450482
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Eathus/cwe_view1000_list_gpt_few_cwe_desc_replace | Eathus | 2025-06-03T20:54:30Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:54:27Z | null | ---
dataset_info:
features:
- name: ID
dtype: string
- name: Name
dtype: string
- name: Abstraction
dtype: string
- name: Structure
dtype: string
- name: Status
dtype: string
- name: Description
dtype: string
- name: ExtendedDescription
dtype: string
- name: ApplicablePlatforms
list:
- name: Class
dtype: string
- name: Name
dtype: string
- name: Prevalence
dtype: string
- name: Type
dtype: string
- name: AlternateTerms
list:
- name: Description
dtype: string
- name: Term
dtype: string
- name: ModesOfIntroduction
list:
- name: Note
dtype: string
- name: Phase
dtype: string
- name: CommonConsequences
list:
- name: Impact
sequence: string
- name: Likelihood
sequence: string
- name: Note
dtype: string
- name: Scope
sequence: string
- name: PotentialMitigations
list:
- name: Description
dtype: string
- name: Effectiveness
dtype: string
- name: EffectivenessNotes
dtype: string
- name: MitigationID
dtype: string
- name: Phase
sequence: string
- name: Strategy
dtype: string
- name: ObservedExamples
list:
- name: Description
dtype: string
- name: Link
dtype: string
- name: Reference
dtype: string
- name: AffectedResources
sequence: string
- name: TaxonomyMappings
list:
- name: EntryID
dtype: string
- name: EntryName
dtype: string
- name: MappingFit
dtype: string
- name: TaxonomyName
dtype: string
- name: RelatedAttackPatterns
sequence: string
- name: References
list:
- name: Authors
sequence: string
- name: Edition
dtype: string
- name: ExternalReferenceID
dtype: string
- name: Publication
dtype: string
- name: PublicationDay
dtype: string
- name: PublicationMonth
dtype: string
- name: PublicationYear
dtype: string
- name: Publisher
dtype: string
- name: Section
dtype: string
- name: Title
dtype: string
- name: URL
dtype: string
- name: URLDate
dtype: string
- name: Notes
list:
- name: Note
dtype: string
- name: Type
dtype: string
- name: ContentHistory
list:
- name: ContributionComment
dtype: string
- name: ContributionDate
dtype: string
- name: ContributionName
dtype: string
- name: ContributionOrganization
dtype: string
- name: ContributionReleaseDate
dtype: string
- name: ContributionType
dtype: string
- name: ContributionVersion
dtype: string
- name: Date
dtype: string
- name: ModificationComment
dtype: string
- name: ModificationDate
dtype: string
- name: ModificationName
dtype: string
- name: ModificationOrganization
dtype: string
- name: ModificationReleaseDate
dtype: string
- name: ModificationVersion
dtype: string
- name: PreviousEntryName
dtype: string
- name: SubmissionComment
dtype: string
- name: SubmissionDate
dtype: string
- name: SubmissionName
dtype: string
- name: SubmissionOrganization
dtype: string
- name: SubmissionReleaseDate
dtype: string
- name: SubmissionVersion
dtype: string
- name: Type
dtype: string
- name: Version
dtype: string
- name: MappingNotes_Usage
dtype: string
- name: MappingNotes_Rationale
dtype: string
- name: MappingNotes_Comments
dtype: string
- name: MappingNotes_Reasons
sequence: string
- name: MappingNotes_Suggestions
list:
- name: Comment
dtype: string
- name: CweID
dtype: string
- name: RelatedWeaknesses
list:
- name: CweID
dtype: string
- name: Nature
dtype: string
- name: Ordinal
dtype: string
- name: ViewID
dtype: string
- name: WeaknessOrdinalities
list:
- name: Description
dtype: string
- name: Ordinality
dtype: string
- name: DetectionMethods
list:
- name: Description
dtype: string
- name: DetectionMethodID
dtype: string
- name: Effectiveness
dtype: string
- name: EffectivenessNotes
dtype: string
- name: Method
dtype: string
- name: DemonstrativeExamples
list:
- name: Entries
list:
- name: BodyText
dtype: string
- name: ExampleCode
dtype: string
- name: IntroText
dtype: string
- name: Language
dtype: string
- name: Nature
dtype: string
- name: Reference
dtype: string
- name: ID
dtype: string
- name: FunctionalAreas
sequence: string
- name: Diagram
dtype: string
- name: LikelihoodOfExploit
dtype: string
- name: BackgroundDetails
sequence: string
- name: NumPaths
dtype: int64
- name: Paths
sequence:
sequence: string
- name: Children
sequence: string
- name: Summary
dtype: string
- name: gpt_cwe_description
dtype: string
splits:
- name: train
num_bytes: 10612741
num_examples: 940
download_size: 2924148
dataset_size: 10612741
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-BTC-batch-14 | ChavyvAkvar | 2025-06-03T20:49:19Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:48:21Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923451450
num_examples: 1000
download_size: 924510866
dataset_size: 923451450
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yoona-J/ASR_Wav2Vec_Preprocess_Degenerative_Brain_Dataset | yoona-J | 2025-06-03T20:45:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:12:56Z | null | ---
dataset_info:
features:
- name: input_values
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1010469662.4318247
num_examples: 3468
- name: valid
num_bytes: 56551072.0
num_examples: 192
- name: test
num_bytes: 46983120.0
num_examples: 189
download_size: 1094180586
dataset_size: 1114003854.4318247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
french-datasets/bio-datasets_e3c | french-datasets | 2025-06-03T20:40:34Z | 0 | 0 | [
"language:spa",
"language:eus",
"language:fra",
"language:eng",
"language:ita",
"region:us"
] | [] | 2025-06-03T20:39:29Z | null | ---
language:
- spa
- eus
- fra
- eng
- ita
viewer: false
---
This repository is empty; it was created to improve the search indexing of the dataset [bio-datasets/e3c](https://huggingface.co/datasets/bio-datasets/e3c). |
french-datasets/L3-IA-2025_Questions_Reponses | french-datasets | 2025-06-03T20:37:36Z | 0 | 0 | [
"language:fra",
"region:us"
] | [] | 2025-06-03T20:36:34Z | null | ---
language:
- fra
viewer: false
---
This repository is empty; it was created to improve the search indexing of the dataset [L3-IA-2025/Questions_Reponses](https://huggingface.co/datasets/L3-IA-2025/Questions_Reponses). |
french-datasets/L3-IA-2025_Questions2 | french-datasets | 2025-06-03T20:37:24Z | 0 | 0 | [
"language:fra",
"region:us"
] | [] | 2025-06-03T20:36:20Z | null | ---
language:
- fra
viewer: false
---
This repository is empty; it was created to improve the search indexing of the dataset [L3-IA-2025/Questions2](https://huggingface.co/datasets/L3-IA-2025/Questions2). |
omniomni/omniscienceinstruct | omniomni | 2025-06-03T20:36:49Z | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-19T01:23:30Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 142671648
num_examples: 147267
download_size: 79339989
dataset_size: 142671648
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fermatix-ai/SWE-Bench-Demo | fermatix-ai | 2025-06-03T20:33:34Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T14:45:46Z | null | ---
dataset_info:
features:
- name: repo
dtype: string
- name: instance_id
dtype: string
- name: issue_link
dtype: string
- name: pr_link
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: base_commit
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: FAIL_TO_PASS
dtype: string
- name: PASS_TO_PASS
dtype: string
- name: language_version
dtype: string
configs:
- config_name: default
data_files:
- split: CPP
path: cpp/cpp.parquet
- split: CSharp
path: csharp/csharp.parquet
- split: Go
path: go/go.parquet
- split: Java
path: java/java.parquet
- split: Kotlin
path: kotlin/kotlin.parquet
- split: PHP
path: php/php.parquet
- split: Ruby
path: ruby/ruby.parquet
- split: Rust
path: rust/rust.parquet
- split: Scala
path: scala/scala.parquet
- split: TypeScript
path: ts/ts.parquet
---
# SWE-bench Tasks
This repository contains tasks from the **SWE-bench dataset**, intended for evaluating the capabilities of models in automatic bug fixing and code modification.
The tasks cover various programming languages and projects, providing a diverse set of scenarios for testing and training.
Each task includes:
* **Patches** with fixes and/or tests
* **Instructions for building and running** (in the form of a Dockerfile), as well as corresponding run logs
* A **parquet file** containing basic information about the task
## Task Structure
Each task is organized in the following structure:
```
<language>/<organization>__<repository>/<issue_id>/
├── Dockerfile # Environment build instructions
├── docker-compose.yml # Docker Compose configuration
├── Makefile # Automation scripts
├── apply_fix.sh # Script for applying the fix
├── apply_test.sh # Script for applying tests
├── run_tests.sh # Script for running tests
├── logs/ # Directory with execution logs
│ └── ...
└── patches/ # Directory with patches
├── fix.patch # Patch with the fix
└── test.patch # Patch with tests
```
## Example: Rust/tokio-rs__mio/1706
This task demonstrates a fix for an issue in the MIO (Metal I/O) library for Rust. The task structure is:
```
Rust/tokio-rs__mio/1706/
├── Dockerfile # Build based on rust:1.74-slim
├── docker-compose.yml # Configuration for running
├── Makefile # Simple build commands
├── apply_fix.sh # Applies the fix patch
├── apply_test.sh # Applies the test patch
├── run_tests.sh # Runs tests after applying patches
├── logs/ # Contains test execution logs
└── patches/
├── fix.patch # Fix for the issue
└── test.patch # Tests to verify the fix
```
## Running a Task
1. Navigate to the task directory.
2. Execute `make run`.
3. Check the results in the `logs/` directory. |
ChavyvAkvar/synthetic-trades-XRP-batch-28 | ChavyvAkvar | 2025-06-03T20:27:58Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:27:02Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448041
num_examples: 1000
download_size: 924485862
dataset_size: 923448041
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shuchenliu/countdown-solutions-100 | shuchenliu | 2025-06-03T20:15:35Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:28:15Z | null | ---
dataset_info:
features:
- name: nums
sequence: int64
- name: target
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 831096
num_examples: 6764
download_size: 265694
dataset_size: 831096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-27 | ChavyvAkvar | 2025-06-03T20:13:09Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:12:11Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448420
num_examples: 1000
download_size: 924480142
dataset_size: 923448420
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
orcn/v3.2-cir-text | orcn | 2025-06-03T20:10:35Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:10:32Z | null | ---
dataset_info:
features:
- name: text1
dtype: string
- name: text2
dtype: string
- name: text3
dtype: string
- name: text4
dtype: string
- name: text5
dtype: string
- name: text6
dtype: string
- name: text7
dtype: string
- name: text8
dtype: string
- name: text9
dtype: 'null'
- name: text10
dtype: 'null'
- name: text11
dtype: 'null'
- name: text12
dtype: 'null'
- name: text13
dtype: 'null'
- name: text14
dtype: 'null'
- name: text15
dtype: 'null'
- name: text16
dtype: 'null'
- name: text17
dtype: 'null'
- name: text18
dtype: 'null'
- name: text19
dtype: 'null'
- name: text20
dtype: 'null'
- name: text21
dtype: 'null'
- name: text22
dtype: 'null'
- name: text23
dtype: 'null'
- name: text24
dtype: 'null'
- name: text25
dtype: 'null'
splits:
- name: train
num_bytes: 335928
num_examples: 500
download_size: 157528
dataset_size: 335928
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
orcn/v3.2-cir-image | orcn | 2025-06-03T20:10:32Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:09:22Z | null | ---
dataset_info:
features:
- name: image1
dtype: image
- name: image2
dtype: image
- name: image3
dtype: image
- name: image4
dtype: image
- name: image5
dtype: image
- name: image6
dtype: image
- name: image7
dtype: image
- name: image8
dtype: image
- name: image9
dtype: image
- name: image10
dtype: image
- name: image11
dtype: image
- name: image12
dtype: image
- name: image13
dtype: image
- name: image14
dtype: image
- name: image15
dtype: image
- name: image16
dtype: image
- name: image17
dtype: image
- name: image18
dtype: image
- name: image19
dtype: image
- name: image20
dtype: image
- name: image21
dtype: image
- name: image22
dtype: image
- name: image23
dtype: image
- name: image24
dtype: image
- name: image25
dtype: image
splits:
- name: train
num_bytes: 93800962.0
num_examples: 500
download_size: 93497412
dataset_size: 93800962.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
babs/open-slr-speaker-id | babs | 2025-06-03T20:04:12Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:56:42Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: file_name
dtype: string
- name: transcript
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 1198177635.28
num_examples: 2045
download_size: 935494045
dataset_size: 1198177635.28
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-BTC-batch-12 | ChavyvAkvar | 2025-06-03T19:51:06Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:50:09Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450658
num_examples: 1000
download_size: 924460224
dataset_size: 923450658
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-25 | ChavyvAkvar | 2025-06-03T19:49:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:48:06Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923447720
num_examples: 1000
download_size: 924484995
dataset_size: 923447720
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
syvai/emotion_dataset_no_reasoning | syvai | 2025-06-03T19:45:09Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:45:06Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2842616
num_examples: 16000
download_size: 1062081
dataset_size: 2842616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-23 | ChavyvAkvar | 2025-06-03T19:25:12Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:24:14Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448219
num_examples: 1000
download_size: 924485577
dataset_size: 923448219
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mohamedah/zh-news-articles | mohamedah | 2025-06-03T19:24:31Z | 0 | 0 | [
"language:zh",
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:57:53Z | null | ---
license: mit
language:
- zh
pretty_name: Chinese News Articles
---
# Chinese News Article Dataset
A dataset of Chinese state media articles and Chinese New York Times articles first introduced in the paper *An Analysis of Chinese Censorship Bias in LLMs.*
State media articles were sourced from the [news2016zh](https://github.com/brightmart/nlp_chinese_corpus?tab=readme-ov-file#2%E6%96%B0%E9%97%BB%E8%AF%AD%E6%96%99json%E7%89%88news2016zh) corpus, and we automatically scraped the New York Times articles.
## Citation
If you publish work using our datasets or CensorshipDetector, please cite our work using the following citation:
```bibtex
@inproceedings{ahmed2025censorshipbias,
title = {An Analysis of Chinese Censorship Bias in LLMs},
author = {Ahmed, Mohamed and Knockel, Jeffrey and Greenstadt, Rachel},
booktitle = {Proceedings on Privacy Enhancing Technologies (PoPETs)},
volume = {2025},
  number = {4},
year = {2025}
}
``` |
anirudhb11/star-graph-deg-2-path-3-nodes-300 | anirudhb11 | 2025-06-03T19:22:24Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:22:22Z | null | ---
dataset_info:
features:
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
splits:
- name: train
num_bytes: 11844206
num_examples: 200000
- name: test
num_bytes: 1183870
num_examples: 20000
download_size: 8379525
dataset_size: 13028076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Jurgie/MyVoiceSmall | Jurgie | 2025-06-03T19:17:03Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:17:01Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 5290
num_examples: 1
download_size: 6361
dataset_size: 5290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-22 | ChavyvAkvar | 2025-06-03T19:14:31Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:13:30Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448158
num_examples: 1000
download_size: 924435070
dataset_size: 923448158
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MaLA-LM/mala-opus-dedup-2410 | MaLA-LM | 2025-06-03T19:07:03Z | 10,043 | 1 | [
"task_categories:translation",
"license:odc-by",
"size_categories:n>1T",
"arxiv:2409.17892",
"region:us"
] | [
"translation"
] | 2025-05-28T07:13:20Z | null | ---
license: odc-by
task_categories:
- translation
size_categories:
- n>1T
---
# MaLA Corpus: Massive Language Adaptation Corpus
This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) is the bilingual part of the [**MaLA Corpus**](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529). It is a cleaned and deduplicated version of OPUS corpus, collected from [OPUS](https://opus.nlpl.eu) with a cutoff of October 2024 (2410). Particularly, it contains bilingual translation data (aka, parallel data or bitexts) in 16,829 language pairs.
The [**MaLA Corpus** (Massive Language Adaptation)](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529) is a series of comprehensive, multilingual datasets designed to support the continual pre-training of large language models. This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) set can also support the training of multilingual translation models.
---
## Data Fields
`source_text`: the source text of parallel data.
`source_lang`: the language of the source text.
`target_text`: the target text of parallel data.
`target_lang`: the language of the target text.
`original_code`: the source-target language codes in the format of `${src_code} - ${trg_code}`, delimited by " - ".
---
## Key Features
- **Language Coverage**: Includes data in 29,202 language pairs.
- **Pre-processing**: The corpus is cleaned and deduplicated to ensure high-quality training data.
---
## Dataset Creation
This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) set was created by processing data from [OPUS](https://opus.nlpl.eu), followed by rigorous pre-processing to ensure the quality of the data:
- **Cleaning**: Noisy and irrelevant data were removed to ensure higher data quality.
- **Deduplication**: Duplicate entries across multiple sources were eliminated.
- **Normalization**: The data was normalized, and language codes were standardized to ISO 639-3 to ensure consistency across all sources.
We do a variety of handcrafted checks to filter out noisy lines based on [this pipeline](https://github.com/browsermt/students/tree/master/train-student/clean). In detail,
we run line-level script detection to ensure that the writing script identified for the dataset during the preliminary stage makes up more than 5% of that line. We remove lines where the same word or character repeats more than 5 times in a row. We calculate the source and target ratios in the number of characters and number of words as well as alphabetical ratio, but only require one of the ratios to be within our pre-determined range, due to the wide range of scripts, languages, and their usual word delimiters. We also ensure that the line length falls between a minimum and a maximum number of characters (non-empty).
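A self-contained sketch of the repetition and length checks described above; the exact thresholds and the length window used in the real pipeline are assumptions here, not values taken from the pipeline itself:

```python
import re

MIN_CHARS, MAX_CHARS = 3, 2000   # assumed length window
MAX_REPEAT = 5                   # card: drop lines repeating a token >5 times in a row

def repeats_too_much(line: str) -> bool:
    # Same word repeated more than MAX_REPEAT times in a row...
    words = line.split()
    run = 1
    for prev, cur in zip(words, words[1:]):
        run = run + 1 if cur == prev else 1
        if run > MAX_REPEAT:
            return True
    # ...or the same character repeated more than MAX_REPEAT times in a row.
    return re.search(r"(.)\1{%d,}" % MAX_REPEAT, line) is not None

def keep_pair(src: str, tgt: str) -> bool:
    """Keep a bitext pair only if both sides pass the line-level checks."""
    for line in (src, tgt):
        if not (MIN_CHARS <= len(line) <= MAX_CHARS):
            return False
        if repeats_too_much(line):
            return False
    return True

print(keep_pair("hello world", "bonjour le monde"))       # True
print(keep_pair("spam spam spam spam spam spam", "ok!"))  # False
```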
---
## Intended Use
This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) set is intended for researchers and developers looking to improve the multilingual capabilities of language models. It is especially useful for:
- **Pre-training** of large language models, particularly continual pre-training, to enhance the performance in low-resource languages.
- **Fine-tuning models** on multilingual benchmarks to improve language coverage across a variety of domains.
- **Multilingual tasks** such as machine translation training or fine-tuning.
---
## Take-down Policy
We don't own any part of the data. We will comply with legitimate requests by removing the affected sources from the corpora.
---
## Citation
This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) set was processed by the [MaLA-LM](https://mala-lm.github.io) project. If you find this dataset useful, please cite our paper below.
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
## Acknowledgements
We extend our thanks to the language communities and contributors who helped source, clean, and validate the diverse data used in the MaLA Corpus. Their efforts are invaluable in supporting linguistic diversity in AI research.
This work is done by researchers at [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) in collaboration with partners from TU Darmstadt, the University of Edinburgh, and LMU Munich. It is funded by [HPLT](https://hplt-project.org) and [UTTER](https://he-utter.eu). |
BBSRguy/Ranjan-Hindi33min | BBSRguy | 2025-06-03T19:06:34Z | 0 | 0 | [
"task_categories:text-to-speech",
"language:hi",
"license:mit",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-to-speech"
] | 2025-06-03T18:36:31Z | null | ---
license: mit
task_categories:
- text-to-speech
language:
- hi
size_categories:
- n<1K
---
# Ranjan-Hindi33min
**Owner:** [@BBSRguy](https://huggingface.co/BBSRguy)
**Created:** 2025-06-03
**Year:** 2025
**Language:** Hindi 🇮🇳
**Region Focus:** Odisha, India
**Sample Rate Variants:** 16 kHz, 24 kHz, 32 kHz
**Total Files:** 29 pairs (speech + text)
**Duration:** Approximately 33 minutes of speech
---
## 📜 Description
`Ranjan-Hindi33min` is a meticulously curated dataset comprising high-quality Hindi speech samples and their corresponding textual transcriptions. This dataset is designed to support various speech processing tasks, including Automatic Speech Recognition (ASR), Text-to-Speech (TTS) synthesis, and speech-text alignment, with a particular emphasis on the linguistic nuances of Odisha and eastern India.
The dataset features recordings from a single native Hindi speaker, ensuring consistency in voice characteristics. The speech samples vary in length, ranging from 9 seconds to over 2 minutes, and encompass a diverse array of content, including formal narration, greetings, traditional expressions, and culturally contextual material.
---
## 🗂️ Dataset Structure
- `id`: Unique identifier for each sample (e.g., `sample_001`)
- `speech`: Filename of the corresponding WAV audio file
- `text`: Transcribed Hindi text content
**The dataset will not be available in the dataset viewer, as there is no train partition.**
---
## 🔊 Audio Sampling Rates & Model Compatibility
To cater to a broad spectrum of research and application needs, the dataset provides audio recordings in three different sampling rates:
### 🎧 16 kHz (Wideband)
- **Description**: Standard sampling rate widely used in speech processing tasks.
- **Ideal For**:
- **OpenAI Whisper**: Requires 16 kHz input audio for optimal performance.
- **Facebook Wav2Vec 2.0**: Pretrained on 16 kHz sampled speech audio.
- **Mozilla DeepSpeech**: Expects 16 kHz mono-channel WAV files.
### 🎧 24 kHz (High-Quality Wideband)
- **Description**: Offers a balance between audio quality and computational efficiency.
- **Ideal For**:
- **Parler-TTS**: Supports 24 kHz audio, suitable for high-fidelity TTS applications.
- **ESPnet-TTS**: Flexible with sampling rates; 24 kHz is commonly used.
### 🎧 32 kHz (Ultra-Wideband)
- **Description**: Provides higher audio quality, beneficial for advanced TTS models and applications demanding superior clarity.
- **Ideal For**:
- **Advanced TTS Models**: Models that can leverage higher sampling rates for improved synthesis quality.
*Note*: When using models that require a specific sampling rate, ensure that the input audio matches the expected rate to avoid potential degradation in performance.
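The rate-matching caveat above can be sketched with a synthetic signal; `scipy.signal.resample_poly` converts between the three published rates (the generated tone is just a stand-in for a real recording from the dataset):

```python
import numpy as np
from scipy.signal import resample_poly

SRC_SR, DST_SR = 32_000, 16_000   # e.g. convert the 32 kHz variant for Whisper

# Stand-in for a loaded WAV: one second of a 440 Hz tone at 32 kHz.
t = np.arange(SRC_SR) / SRC_SR
audio_32k = np.sin(2 * np.pi * 440 * t).astype(np.float32)

# Polyphase resampling: up by DST_SR, down by SRC_SR (reduced internally by gcd).
audio_16k = resample_poly(audio_32k, DST_SR, SRC_SR)

print(len(audio_32k), "->", len(audio_16k))   # 32000 -> 16000
```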
---
## 🧪 Applications
- **Automatic Speech Recognition (ASR)**: Fine-tuning and evaluating models like Whisper, Wav2Vec 2.0, and DeepSpeech for Hindi language transcription tasks.
- **Text-to-Speech (TTS) Synthesis**: Training and assessing TTS models such as Parler-TTS and ESPnet-TTS for generating natural-sounding Hindi speech.
- **Speech-Text Alignment**: Developing and testing alignment algorithms for synchronizing speech with textual content.
- **Linguistic Research**: Analyzing phonetic and prosodic features of Hindi as spoken in the Odisha region.
---
## 🧾 License
This dataset is shared by [@BBSRguy](https://huggingface.co/BBSRguy) as part of the **ODEN Initiative** for open AI development in India. The initiative aims to democratize AI and make it accessible to everyone, with a particular focus on building AI tools, such as personal assistants, for the people of Odisha.
🙏 नमस्ते! 🙏 |
arnaultsta/MNLP_M3_wikipedia_camel_chunked_500_whole | arnaultsta | 2025-06-03T19:03:07Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:23:43Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
sequence: string
- name: rationale
dtype: string
- name: label
dtype: string
- name: label_idx
dtype: int64
- name: dataset
dtype: string
- name: chunk1
dtype: string
- name: chunk2
dtype: string
- name: chunk3
dtype: string
splits:
- name: train
num_bytes: 1525828291
num_examples: 200000
- name: validation
num_bytes: 3708788
num_examples: 519
download_size: 387108979
dataset_size: 1529537079
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
canaan000/lerobot | canaan000 | 2025-06-03T19:02:36Z | 0 | 0 | [
"language:en",
"license:unknown",
"region:us",
"art"
] | [] | 2025-06-03T19:01:26Z | null | ---
license: unknown
language:
- en
tags:
- art
--- |
avaishnav/Indian-plant-leaves-species | avaishnav | 2025-06-03T19:00:25Z | 142 | 0 | [
"task_categories:image-classification",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"biology"
] | [
"image-classification"
] | 2025-05-20T00:32:46Z | null | ---
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- biology
size_categories:
- n<1K
---
# Indian Plant Leaves Species
## Dataset Summary
This dataset consists of 542 high-resolution images of leaves from 12 different plant species. All images were captured using a mobile phone camera under natural lighting conditions. The dataset is intended for use in plant species classification, leaf recognition, and related computer vision tasks.
## Supported Tasks and Leaderboards
- Image Classification
- Species Recognition
- Transfer Learning for Plant Identification
- Fine-tuning Computer Vision Models
## Languages
Not applicable (images only, no text).
## Dataset Structure
- Number of Images: 542
- Number of Classes: 12 plant species
## Plant Species
1. Alstonia Scholaris
2. Bael
3. Guava
4. Jatropha
5. Mango
6. Pongamia Pinnata
7. Arjun
8. Basil
9. Chinar
10. Jamun
11. Lemon
12. Pomegranate


## Usage
Example usage with the `datasets` library:
```
from datasets import load_dataset
dataset = load_dataset("avaishnav/Indian-plant-leaves-species")
```
## Data Collection Process
- Images were captured by the dataset author using a mobile phone camera.
- Photos were taken in natural daylight, with leaves detached from the plant stem.
- No preprocessing or filtering was applied, to preserve real-world conditions.
## Citation
If you use this dataset in your research or application, please consider citing:
```
@dataset{vaishnav2025indian,
author = {Vaishnav, Anugrah},
title = {Indian Plant Leaves Species},
year = 2025,
url = {https://huggingface.co/datasets/avaishnav/Indian-plant-leaves-species},
publisher = {Hugging Face},
license = {Apache-2.0}
}
``` |
kurry/institutional-holdings-13f-quarterly | kurry | 2025-06-03T18:52:26Z | 56 | 0 | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"language:en",
"license:mit",
"size_categories:100M<n<1B",
"format:arrow",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"finance",
"sec",
"13f",
"embeddings",
"institutional-holdings",
"stocks",
"equities",
"panel-data"
] | [
"tabular-classification",
"tabular-regression"
] | 2025-05-20T03:09:54Z | null | ---
license: mit
language:
- en
pretty_name: US Institutional 13F Holdings Record-level Embeddings (1980–2024)
tags:
- finance
- sec
- 13f
- embeddings
- institutional-holdings
- stocks
- equities
- panel-data
datasets:
- kurry/institutional-holdings-13f
task_categories:
- tabular-classification
- tabular-regression
size_categories:
- 10M<n<100M
date: "2025-05-19"
---
# US Institutional 13F Holdings Embeddings Dataset
Quarterly snapshots (1980 – 2024) of every U.S.-reported 13F equity holding, plus unsupervised embeddings for:
* Institutional managers (investors)
* Assets (PERMCO-level equities)
---
## Dataset Description
**13F filings** are quarterly disclosures to the U.S. SEC by institutions managing ≥ \$100 million.
For each quarter (1980 Q1 → 2024 Q3) the dataset provides:
* **Holdings Table** – exact positions with shares, price, and market value
* **Investor Embeddings** – 32-dim vectors from log-dollar-weighted Truncated SVD
* **Asset Embeddings** – companion vectors for the equity universe
* **Metadata** – manager IDs, asset IDs, timestamps, etc.
**Note:** This dataset has been merged into a single structure with all quarters combined, rather than the previous separate quarterly folders.
---
## Data Structure
### Holdings table (core fields)
| Field | Type | Description |
| ---------------- | ------ | ---------------------------------------------- |
| `mgrno` | string | Institutional manager ID (SEC) |
| `permco` | string | Permanent company identifier |
| `fdate` | date | Quarter-end report date |
| `shares` | float | Shares held |
| `price` | float | Price per share on `fdate` |
| `dollar_holding` | float | Shares × Price (market value of the position) |
| `quarter` | string | Quarter identifier (e.g., "2020Q4") |
| `year` | int64 | Year of the quarter |
| `qtr` | int64 | Quarter number (1-4) |
### Embeddings tables
| Field | Type | Description |
| ----------------------- | -------------------- | --------------------------------------------- |
| `mgrno` *or* `permco` | string | Primary key |
| `embedding` | float32\[n\] sequence| 32-dimensional vector (size may vary) |
---
## Coverage and Distribution
* **Quarters:** 1980 Q1 → 2024 Q3 (179 quarters)
* **Universe:** Every stock appearing in any 13F filing during the window
* **Rows:** Tens of millions of manager-asset-quarter tuples
* **Embeddings:** One vector per active manager and per PERMCO each quarter
### Quick load
```python
from datasets import load_dataset
# Load the dataset (now in merged format)
dataset = load_dataset("kurry/institutional-holdings-13f-quarterly")
print(dataset["train"][0])
```
---
## Typical Usage
* Alpha/return prediction, manager similarity, clustering
* Long-run studies of institutional ownership dynamics
* Panel regressions (quarterly frequency)
```python
from datasets import DatasetDict

# Load a single quarter
ds = DatasetDict.load_from_disk("institutional-holdings-13f-quarterly/2020Q4")
print(ds["investor_embeddings"][0]["embedding"][:8])
```
```python
# Iterate over all quarters
import os
root = "institutional-holdings-13f-quarterly"
for q in sorted(p for p in os.listdir(root) if "Q" in p):
ds = DatasetDict.load_from_disk(f"{root}/{q}")
# process ds["holdings"], ds["investor_embeddings"], ds["asset_embeddings"]
```
---
## Data Splits
Each quarterly folder is a Hugging Face `DatasetDict` containing:
* `holdings`
* `investor_embeddings`
* `asset_embeddings`
---
## Identifier Definitions
* **PERMCO** – company-level identifier (stable through ticker/name changes)
* **PERMNO** – security-level identifier (stable through symbol/CUSIP changes)
---
## Processing Pipeline
1. Parse raw 13F text filings.
2. Map CUSIPs → PERMCOs.
3. Aggregate shares and price to compute market value.
4. Compute log-dollar weight
```
w = log(1 + dollar_holding)
```
5. Build manager-by-asset matrix `M` with elements `w`.
6. Apply Truncated SVD, keep top k = 32 factors (or ≤ rank).
```
M ≈ U Σ Vᵀ
```
* Rows of **U** → manager embeddings
* Rows of **V** → asset embeddings
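Steps 4–6 can be sketched on a toy matrix with made-up numbers. Note this is illustrative only: the dataset's actual factorization details (e.g. whether the published embeddings scale the factors by the singular values) are not specified here, so plain `U` and `V` rows are used.

```python
import numpy as np

# Toy example: 4 managers x 5 assets, dollar holdings (made-up numbers).
dollar = np.array([
    [1e6, 0.0, 5e5, 0.0, 2e6],
    [0.0, 3e6, 0.0, 1e6, 0.0],
    [2e6, 1e6, 0.0, 0.0, 5e5],
    [0.0, 0.0, 4e6, 2e6, 0.0],
])

# Step 4: log-dollar weights, w = log(1 + dollar_holding).
M = np.log1p(dollar)

# Steps 5-6: truncated SVD, keeping the top k factors (k <= rank; 32 in the card).
k = 2
U, S, Vt = np.linalg.svd(M, full_matrices=False)
manager_emb = U[:, :k]   # one row per manager
asset_emb = Vt[:k].T     # one row per asset

print(manager_emb.shape, asset_emb.shape)  # (4, 2) (5, 2)
```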
---
## Licensing & Limitations
* **License:** MIT
* **Intended use:** Research & education
* Note: Source 13F filings are public; mappings were derived from public data.
---
## Citation
```bibtex
@dataset{kurry2025institutionalholdings13f,
author = {Kurry},
title = {US Institutional 13F Holdings Embeddings Dataset},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/kurry/institutional-holdings-13f}
}
```
---
### Quick start
```python
from datasets import DatasetDict
ds = DatasetDict.load_from_disk("institutional-holdings-13f-quarterly/2010Q2")
``` |
Franklin0/ReasonGen-R1-RL-T2I-11k | Franklin0 | 2025-06-03T18:38:20Z | 35 | 0 | [
"task_categories:text-to-image",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.24875",
"region:us"
] | [
"text-to-image"
] | 2025-05-27T02:34:35Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 524257
num_examples: 12367
download_size: 179867
dataset_size: 524257
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-to-image
license: mit
---
This is the RL dataset for the paper: ["ReasonGen-R1: CoT for Autoregressive Image generation models through SFT and RL"](https://huggingface.co/papers/2505.24875).
ReasonGen-R1 is a two-stage framework that imbues an autoregressive image generator with explicit text-based "thinking" skills via supervised fine-tuning (SFT) on a newly generated reasoning dataset of written rationales. It then refines its outputs using Group Relative Policy Optimization (GRPO). This dataset contains the model-crafted rationales paired with visual prompts, enabling controlled planning of object layouts, styles, and scene compositions.
Website: https://aka.ms/reasongen
Code: https://github.com/Franklin-Zhang0/Image-RL
Arxiv: https://arxiv.org/abs/2505.24875 |
Franklin0/ReasonGen-R1-RL-Geneval-12k | Franklin0 | 2025-06-03T18:37:56Z | 58 | 0 | [
"task_categories:text-to-image",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.24875",
"region:us"
] | [
"text-to-image"
] | 2025-05-27T02:50:14Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 524257
num_examples: 12367
download_size: 179867
dataset_size: 524257
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-to-image
license: mit
---
This is the RL dataset for the paper: ["ReasonGen-R1: CoT for Autoregressive Image generation models through SFT and RL"](https://huggingface.co/papers/2505.24875).
ReasonGen-R1 is a two-stage framework that imbues an autoregressive image generator with explicit text-based "thinking" skills via supervised fine-tuning (SFT) on a newly generated reasoning dataset of written rationales. It then refines its outputs using Group Relative Policy Optimization (GRPO). This dataset contains the model-crafted rationales paired with visual prompts, enabling controlled planning of object layouts, styles, and scene compositions.
Website: https://aka.ms/reasongen
Code: https://github.com/Franklin-Zhang0/Image-RL
Arxiv: https://arxiv.org/abs/2505.24875 |
Franklin0/ReasonGen-R1-RL-DPG-5k | Franklin0 | 2025-06-03T18:36:01Z | 35 | 0 | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.24875",
"region:us"
] | [
"text-to-image"
] | 2025-05-27T02:27:55Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2371376
num_examples: 4996
download_size: 1295838
dataset_size: 2371376
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-to-image
---
RL dataset for the paper: ["ReasonGen-R1: CoT for Autoregressive Image generation models through SFT and RL"](https://huggingface.co/papers/2505.24875).
Website: https://aka.ms/reasongen
Code: https://github.com/Franklin-Zhang0/Image-RL
Arxiv: https://arxiv.org/abs/2505.24875 |
neginr/phi_24K_qwq_6K | neginr | 2025-06-03T18:34:20Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:33:02Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1692722617.642405
num_examples: 30000
download_size: 813744124
dataset_size: 1692722617.642405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SynthData/InstructionFollowingBlueprint | SynthData | 2025-06-03T18:33:54Z | 0 | 1 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:28:34Z | null | ---
license: apache-2.0
---
This dataset contains blueprints for complex Instruction Following.
The dataset consists of 50% LiveBench IF benchmark prompts, deconstructed for analysis, supplemented with 50% synthetically generated, high-difficulty, and diverse prompts for robustness.
https://huggingface.co/datasets/livebench/instruction_following
Contact:
aaron@synthdata.io
|
hiepp2/tvp4 | hiepp2 | 2025-06-03T18:32:42Z | 175 | 1 | [
"task_categories:text-generation",
"language:en",
"size_categories:n>1T",
"arxiv:2504.21318",
"arxiv:2505.00949",
"region:us"
] | [
"text-generation"
] | 2025-06-02T23:40:29Z | null | ---
dataset_info:
- config_name: all
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: num_tokens
dtype: int64
- name: source
dtype: string
splits:
- name: train
num_bytes: 7062819826.825458
num_examples: 349317
download_size: 3077653717
dataset_size: 7062819826.825458
- config_name: code
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: num_tokens
dtype: int64
- name: source
dtype: string
splits:
- name: train
num_bytes: 3872656251.3167396
num_examples: 83070
download_size: 1613338604
dataset_size: 3872656251.3167396
- config_name: math
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: num_tokens
dtype: int64
- name: source
dtype: string
splits:
- name: train
num_bytes: 1599028646
num_examples: 93733
download_size: 704448153
dataset_size: 1599028646
- config_name: science
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: num_tokens
dtype: int64
- name: source
dtype: string
splits:
- name: train
num_bytes: 1590765326
num_examples: 172514
download_size: 674333812
dataset_size: 1590765326
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- config_name: code
data_files:
- split: train
path: code/train-*
- config_name: math
data_files:
- split: train
path: math/train-*
- config_name: science
data_files:
- split: train
path: science/train-*
task_categories:
- text-generation
language:
- en
pretty_name: Mixture
size_categories:
- n>1T
---
<img src="mot-thumbnail.png" alt="Centered Image" style="display: block; margin: 0 auto;" width="500">
# Dataset summary
Mixture-of-Thoughts is a curated dataset of 350k verified reasoning traces distilled from [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1). The dataset spans tasks in mathematics, coding, and science, and is designed to teach language models to reason step-by-step. It was used in the Open R1 project to train [OpenR1-Distill-7B](https://huggingface.co/open-r1/OpenR1-Distill-7B), an SFT model that replicates the reasoning capabilities of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) from the same base model.
To load the dataset, run:
```python
from datasets import load_dataset
dataset = load_dataset("open-r1/Mixture-of-Thoughts", "all", split="train")
# Load a specific domain
dataset_math = load_dataset("open-r1/Mixture-of-Thoughts", "math", split="train")
```
## Dataset composition
Mixture-of-Thoughts is composed of three domains: math, code, and science. Each domain contains reasoning traces that are designed to teach language models to reason step-by-step. The dataset is structured as follows:
- **math**: 93.7k reasoning traces for mathematical problems, sourced from the `default` subset of [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
- **code**: 83.1k reasoning traces for competitive programming problems in Python and C++, sourced from the `solutions` and `solutions_w_editorials` subsets of [open-r1/codeforces-cots](https://huggingface.co/datasets/open-r1/codeforces-cots)
- **science**: 173k reasoning traces for scientific problems, sourced from the `science` subset of [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset)
- **all**: Contains all reasoning traces from the three domains, for a total of 350k traces.
## Curation methodology
To optimise the data mixture, we followed the same methodology described in the [Phi-4-reasoning tech report](https://huggingface.co/papers/2504.21318), namely that mixtures can be optimised independently per domain, and then combined into a single dataset. For each ablation, we evaluate on AIME 2024, GPQA Diamond, and LiveCodeBench v4 every epoch and take the best performing model checkpoint. The figure below shows the results from post-training [open-r1/Qwen2.5-Math-7B-RoPE-300k](https://huggingface.co/open-r1/Qwen2.5-Math-7B-RoPE-300k) on each individual domain compared to the final mixture:
<img src="data_mix.png" alt="Centered Image" style="display: block; margin: 0 auto;">
Overall, we find that training on all domains simultaneously yields the best results. See the subsections below for more details on optimising the data mixture per domain.
> [!NOTE]
> We use LiveCodeBench v4 to accelerate evaluation during our ablations as it contains around half the problems of v5, yet is still representative of the full benchmark.
### Code
During the development of [open-r1/OlympicCoder-7B](https://huggingface.co/open-r1/OlympicCoder-7B), we observed that generating R1 reasoning traces in C++ produced better results on the challenging [IOI 2024 benchmark](https://github.com/huggingface/ioi), while Python traces produced better results on LiveCodeBench (a Python-only benchmark). To optimise the data mixture, we therefore used a mix of C++ and Python traces sourced from the following subsets of [open-r1/codeforces-cots](https://huggingface.co/datasets/open-r1/codeforces-cots):
- `solutions`: we prompt R1 to solve the problem and produce code in C++.
- `solutions_py`: same as `solutions`, but with R1 prompted to produce code in Python.
- `solutions_w_editorials`: we prompt R1 to solve the problem and produce code, but also provide it with a human-written solution.
- `solutions_w_editorials_py`: same as `solutions_w_editorials`, but with R1 prompted to produce code in Python.
The figure below shows the evolution of our ablations on these subsets, using [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) as the base model:
<img src="code_mix.png" alt="Centered Image" style="display: block; margin: 0 auto;">
The individual experiments correspond to the following:
* **exp1 - exp3:** scaling the learning rate on the `solutions` subset to 1e-5, 2e-5, and 4e-5, respectively.
* **exp4 - exp5:** measuring the impact of training on the `solutions_w_editorials` subset vs the combined `solutions` and `solutions_w_editorials` subsets.
* **exp6 - exp9:** measuring the impact of blending in Python traces from the `solutions_py` and `solutions_w_editorials_py` subsets. exp6 combines the `solutions_w_editorials` and `solutions_w_editorials_py` subsets, while exp7 combines the `solutions` and `solutions_py` subsets. Finally, exp8 combines all four subsets.
We found that combining all subsets of C++ and Python traces yielded the best results on LiveCodeBench. We also found that using this data mixture to fine-tune [open-r1/Qwen2.5-Coder-7B-RoPE-300k](https://huggingface.co/open-r1/Qwen2.5-Coder-7B-RoPE-300k) led to comparable performance improvements, which shows the effectiveness of our curation strategy.
### Math
For the math domain, we mostly focused on comparing the `default` and `extended` subsets of [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k). The `default` subset contains 93.7k reasoning traces, while the `extended` subset contains an additional 131k traces, containing simpler problems than the `default` subset. The figure below shows performance on each subset, using [Qwen/Qwen2.5-Math-7B-RoPE-300k](https://huggingface.co/Qwen/Qwen2.5-Math-7B-RoPE-300k) as the base model:
<img src="math_mix.png" alt="Centered Image" style="display: block; margin: 0 auto;">
Overall, we found that training on the `default` subset yielded better results than training on the `extended` subset, and that training on both subsets together yielded the best results. Nevertheless, we opted to use the `default` subset only for the final mixture, as including both would have led to a significant increase in the size of the dataset, for a modest improvement in performance.
### Science
For the science domain, we used the `science` subset of [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset/viewer/SFT/science), which contains 483k reasoning traces. However, we found that the subset was too large to be used in its entirety, as it would have led to a significant increase in the size of the dataset. Instead, we selected the subset of traces where no Qwen models were used for prompt pre-processing--see this [discussion](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset/discussions/6) for more details. The result was 173k reasoning traces, which we used in the final mixture after ablating on the learning rate.
## Citation
If you find this dataset is useful in your own work, please consider citing it as follows, together with the source of the specific domain you are using:
```bibtex
@misc{openr1,
title = {Open R1: A fully open reproduction of DeepSeek-R1},
url = {https://github.com/huggingface/open-r1},
author = {Hugging Face},
month = {January},
year = {2025}
}
```
**open-r1/codeforces-cots**
```bibtex
@misc{penedo2025codeforces,
title={CodeForces CoTs},
author={Guilherme Penedo and Anton Lozhkov and Hynek Kydlíček and Loubna Ben Allal and Edward Beeching and Agustín Piqueres Lajarín and Quentin Gallouédec and Nathan Habib and Lewis Tunstall and Leandro von Werra},
year={2025},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/open-r1/codeforces-cots}}
}
```
**open-r1/OpenR1-Math-220k**
```bibtex
@misc{lozhkov2025openr1math220k,
title={OpenR1-Math-220k},
author={Anton Lozhkov and Hynek Kydlíček and Loubna Ben Allal and Guilherme Penedo and Edward Beeching and Quentin Gallouédec and Nathan Habib and Lewis Tunstall and Leandro von Werra},
year={2025},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/open-r1/OpenR1-Math-220k}}
}
```
**nvidia/Llama-Nemotron-Post-Training-Dataset**
```bibtex
@misc{bercovich2025llamanemotronefficientreasoningmodels,
title={Llama-Nemotron: Efficient Reasoning Models},
author={Akhiad Bercovich and Itay Levy and Izik Golan and Mohammad Dabbah and Ran El-Yaniv and Omri Puny and Ido Galil and Zach Moshe and Tomer Ronen and Najeeb Nabwani and Ido Shahaf and Oren Tropp and Ehud Karpas and Ran Zilberstein and Jiaqi Zeng and Soumye Singhal and Alexander Bukharin and Yian Zhang and Tugrul Konuk and Gerald Shen and Ameya Sunil Mahabaleshwarkar and Bilal Kartal and Yoshi Suhara and Olivier Delalleau and Zijia Chen and Zhilin Wang and David Mosallanezhad and Adi Renduchintala and Haifeng Qian and Dima Rekesh and Fei Jia and Somshubra Majumdar and Vahid Noroozi and Wasi Uddin Ahmad and Sean Narenthiran and Aleksander Ficek and Mehrzad Samadi and Jocelyn Huang and Siddhartha Jain and Igor Gitman and Ivan Moshkov and Wei Du and Shubham Toshniwal and George Armstrong and Branislav Kisacanin and Matvei Novikov and Daria Gitman and Evelina Bakhturina and Jane Polak Scowcroft and John Kamalu and Dan Su and Kezhi Kong and Markus Kliegl and Rabeeh Karimi and Ying Lin and Sanjeev Satheesh and Jupinder Parmar and Pritam Gundecha and Brandon Norick and Joseph Jennings and Shrimai Prabhumoye and Syeda Nahida Akter and Mostofa Patwary and Abhinav Khattar and Deepak Narayanan and Roger Waleffe and Jimmy Zhang and Bor-Yiing Su and Guyue Huang and Terry Kong and Parth Chadha and Sahil Jain and Christine Harvey and Elad Segal and Jining Huang and Sergey Kashirsky and Robert McQueen and Izzy Putterman and George Lam and Arun Venkatesan and Sherry Wu and Vinh Nguyen and Manoj Kilaru and Andrew Wang and Anna Warno and Abhilash Somasamudramath and Sandip Bhaskar and Maka Dong and Nave Assaf and Shahar Mor and Omer Ullman Argov and Scot Junkin and Oleksandr Romanenko and Pedro Larroy and Monika Katariya and Marco Rovinelli and Viji Balas and Nicholas Edelman and Anahita Bhiwandiwalla and Muthu Subramaniam and Smita Ithape and Karthik Ramamoorthy and Yuting Wu and Suguna Varshini Velury and Omri Almog and Joyjit Daw and Denys Fridman and Erick Galinkin and Michael 
Evans and Katherine Luna and Leon Derczynski and Nikki Pope and Eileen Long and Seth Schneider and Guillermo Siman and Tomasz Grzegorzek and Pablo Ribalta and Monika Katariya and Joey Conway and Trisha Saar and Ann Guan and Krzysztof Pawelec and Shyamala Prayaga and Oleksii Kuchaiev and Boris Ginsburg and Oluwatobi Olabiyi and Kari Briski and Jonathan Cohen and Bryan Catanzaro and Jonah Alben and Yonatan Geifman and Eric Chung and Chris Alexiuk},
year={2025},
eprint={2505.00949},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.00949},
}
``` |
ChavyvAkvar/synthetic-trades-XRP-batch-18 | ChavyvAkvar | 2025-06-03T18:28:24Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:27:23Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448051
num_examples: 1000
download_size: 924486740
dataset_size: 923448051
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
0xarchit/Ques-Ans-with-Emotion | 0xarchit | 2025-06-03T18:19:41Z | 0 | 0 | [
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"question-answer-with-emotion",
"question-answers"
] | [
"question-answering"
] | 2025-06-03T17:09:49Z | null | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- question-answer-with-emotion
- question-answers
pretty_name: Question Answer based on Emotions
size_categories:
- 1K<n<10K
---
created and shared by: [0xarchit](https://0xarchit.carrd.co) |
prerit2k/Bench01-20-20sec-1 | prerit2k | 2025-06-03T18:12:17Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"first collection"
] | [
"robotics"
] | 2025-06-03T18:11:02Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- first collection
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_solo",
"total_episodes": 1,
"total_frames": 597,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_main": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
yoona-J/ASR_Wav2Vec_Preprocess_Peripheral_Neuropathy_Dataset | yoona-J | 2025-06-03T18:11:11Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:07:13Z | null | ---
dataset_info:
features:
- name: input_values
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 4248737624.0
num_examples: 18679
- name: valid
num_bytes: 239472624.0
num_examples: 1040
- name: test
num_bytes: 234697040.0
num_examples: 1036
download_size: 4570203659
dataset_size: 4722907288.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
logicalqubit/QA-RRC-DISTRIC-IT-Dataset | logicalqubit | 2025-06-03T18:10:34Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:10:32Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 573243
num_examples: 2260
download_size: 197275
dataset_size: 573243
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Romoamigo/SWE-Bench-MultilingualC_CPPFiletered_new | Romoamigo | 2025-06-03T18:09:38Z | 177 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-28T23:47:34Z | null | ---
dataset_info:
features:
- name: repo
dtype: string
- name: instance_id
dtype: string
- name: base_commit
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: version
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: repo_name
dtype: string
- name: image_name
dtype: string
splits:
- name: test
num_bytes: 786675
num_examples: 55
download_size: 289127
dataset_size: 786675
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
TienMat999/vn-legal-qa-rag | TienMat999 | 2025-06-03T18:03:18Z | 274 | 0 | [
"task_categories:question-answering",
"language:vi",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"legal"
] | [
"question-answering"
] | 2025-05-08T16:07:21Z | null | ---
dataset_info:
features:
- name: context_id
dtype: string
- name: context
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answers_id
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 26510045
num_examples: 4559
download_size: 8852182
dataset_size: 26510045
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- question-answering
language:
- vi
tags:
- legal
size_categories:
- 1K<n<10K
---
|
voidful/earica_ms | voidful | 2025-06-03T18:02:46Z | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-01T01:55:49Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: reasoning
dtype: string
- name: answer
dtype: string
- name: audio
dtype: audio
- name: index
dtype: int64
- name: raw_yaml
dtype: string
splits:
- name: train
num_bytes: 2239015.0
num_examples: 4
download_size: 554534
dataset_size: 2239015.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fixie-ai/endpointing-multi-turn-commonvoice | fixie-ai | 2025-06-03T17:57:57Z | 30 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T22:15:15Z | null | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
- name: transcript
dtype: string
- name: conversation
dtype: string
- name: conversation_with_audio
dtype: string
splits:
- name: train
num_bytes: 358006509.816
num_examples: 9954
download_size: 346695375
dataset_size: 358006509.816
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
minpeter/pretrained-tiny-ko | minpeter | 2025-06-03T17:57:14Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T17:39:47Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 31850795201.147762
num_examples: 6826068
download_size: 16259105774
dataset_size: 31850795201.147762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mothnaZl/Bespoke-Stratos-17k | mothnaZl | 2025-06-03T17:46:35Z | 128 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-08T16:19:22Z | null | ---
dataset_info:
features:
- name: system
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 635631373
num_examples: 16710
download_size: 258776431
dataset_size: 635631373
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-14 | ChavyvAkvar | 2025-06-03T17:45:49Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T17:44:47Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448040
num_examples: 1000
download_size: 924503656
dataset_size: 923448040
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davanstrien/dataset-creation-scripts | davanstrien | 2025-06-03T17:44:12Z | 446 | 2 | [
"region:us"
] | [] | 2025-04-01T14:27:11Z | null | ---
viewer: false
---
# Datasets scripts
This is an experimental repository for sharing simple one-liner scripts for creating datasets. The idea is that you can easily run these scripts in the terminal without having to write any code and quickly get a dataset up and running on the Hub.
## Installation
All of these scripts assume you have `uv` installed. If you don't have it installed, see the [installation guide](https://docs.astral.sh/uv/getting-started/).
The scripts use the recently added [inline-script-metadata](https://packaging.python.org/en/latest/specifications/inline-script-metadata/#inline-script-metadata) to specify each script's dependencies. `uv` will automatically install these dependencies when you run the script in a new environment, which makes it easier for you to try a script without having to install its dependencies yourself.
See this [guide](https://docs.astral.sh/uv/guides/scripts/#running-scripts) on running scripts in `uv` for more information.
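For reference, inline script metadata is a short comment block at the top of the script itself; a minimal sketch (the listed dependencies here are illustrative, not the exact ones used by these scripts) looks like this:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "datasets",
#     "huggingface_hub",
# ]
# ///
```

When you invoke the script with `uv run`, `uv` reads this block, resolves the listed packages into a throwaway environment, and then executes the script.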
## Scripts
### PDFs to HF dataset
This script uses the newly added `pdf` feature of the `datasets` library to create a dataset from a directory of PDFs. The tl;dr is that the script takes as input a directory of PDFs and outputs a Hugging Face dataset with the images and (optionally) the text extracted from the PDFs. You can find more info about the `pdf` feature [here](https://huggingface.co/docs/datasets/document_load).
To see how the script works, you can run it with the `--help` flag:
```python
uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/pdf-datasets/main.py -h
```
This will print the help message to the terminal.
```
usage: main.py [-h] --directory DIRECTORY [--extract-text EXTRACT_TEXT] --hub-id HUB_ID [--private-repo PRIVATE_REPO]
options:
-h, --help show this help message and exit
```
For example on my `Desktop` I have a folder called `Arxiv` with a bunch of PDFs in it. I can create a dataset with the following command:
```python
uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/pdf-datasets/main.py --directory ~/Desktop/Arxiv --hub-id davanstrien/arxiv-pdfs --extract-text true
```
You can find the dataset on the Hub [here](https://huggingface.co/datasets/davanstrien/arxiv-pdfs).
|
ChavyvAkvar/synthetic-trades-XRP-batch-13 | ChavyvAkvar | 2025-06-03T17:37:22Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T17:36:20Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448218
num_examples: 1000
download_size: 924505223
dataset_size: 923448218
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
timcryt/vscf_mlff_data | timcryt | 2025-06-03T17:26:33Z | 0 | 0 | [
"size_categories:10K<n<100K",
"region:us",
"chemistry"
] | [] | 2025-06-03T17:19:44Z | null | ---
tags:
- chemistry
size_categories:
- 10K<n<100K
pretty_name: l
---
Datasets used in the paper "Machine-Learning Interpolation of Potential Energy Surfaces to Accelerate Anharmonic Vibrational Frequency Calculations of Molecules" (original title in Russian).
### Description
- `vscf_dataset_2_5.xyz` is the main dataset used for pretraining and finetuning models, 19 molecules, 65168 points
- `compare_dataset_OCCO.xyz` is the auxiliary dataset used to select the model architecture between DimeNet and SchNet, 1 molecule, 1042 points |
ChavyvAkvar/synthetic-trades-BTC-batch-6 | ChavyvAkvar | 2025-06-03T17:24:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T17:23:36Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450595
num_examples: 1000
download_size: 924510583
dataset_size: 923450595
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jlbaker361/clip-celeb_captioned | jlbaker361 | 2025-06-03T17:19:27Z | 139 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T15:07:56Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: embedding
sequence:
sequence:
sequence: float32
- name: text
sequence:
sequence:
sequence: float16
- name: prompt
dtype: string
- name: posterior
sequence:
sequence:
sequence: float16
splits:
- name: train
num_bytes: 7280583008.0
num_examples: 30000
download_size: 7070282217
dataset_size: 7280583008.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
5CD-AI/Viet-r1_90k_instruct | 5CD-AI | 2025-06-03T17:12:03Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T17:09:34Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
sequence:
- name: content
dtype: string
- name: role
dtype: string
- name: vi_conversations
sequence:
- name: content
dtype: string
- name: role
dtype: string
- name: id_num
dtype: string
splits:
- name: train
num_bytes: 2996389203.83
num_examples: 79986
download_size: 2612513952
dataset_size: 2996389203.83
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
prerit2k/Bench01-18-10sec | prerit2k | 2025-06-03T17:09:31Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"first collection"
] | [
"robotics"
] | 2025-06-03T17:09:28Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- first collection
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_solo",
"total_episodes": 1,
"total_frames": 299,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_main": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
LukeBailey181/TrainingRound47MultimodalConjectureDataset | LukeBailey181 | 2025-06-03T17:09:09Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T17:09:05Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: target
dtype: string
- name: source_dataset
dtype: string
- name: heuristic
dtype: float64
splits:
- name: embedding
num_bytes: 5286055
num_examples: 4320
- name: grad_cosine_sim
num_bytes: 3799086
num_examples: 4320
- name: grad_dot_product
num_bytes: 3966114
num_examples: 4320
- name: random
num_bytes: 3866087
num_examples: 4320
download_size: 5680252
dataset_size: 16917342
configs:
- config_name: default
data_files:
- split: embedding
path: data/embedding-*
- split: grad_cosine_sim
path: data/grad_cosine_sim-*
- split: grad_dot_product
path: data/grad_dot_product-*
- split: random
path: data/random-*
---
|
HAissa/SFT-dataset | HAissa | 2025-06-03T17:08:27Z | 130 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-31T12:37:25Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3850647056.0
num_examples: 1159281
download_size: 1896264124
dataset_size: 3850647056.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arnaultsta/MNLP_M3_wikipedia_camel_chunked_300 | arnaultsta | 2025-06-03T16:55:39Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T14:37:47Z | null | ---
dataset_info:
features:
- name: title
dtype: string
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 168771055
num_examples: 76429
download_size: 91677839
dataset_size: 168771055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
levinius/MNLP_distilled_preference_code_ds | levinius | 2025-06-03T16:55:18Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:55:12Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: source
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 42323537.16766937
num_examples: 20644
- name: validation
num_bytes: 4703070.83233063
num_examples: 2294
download_size: 24288551
dataset_size: 47026608.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-10 | ChavyvAkvar | 2025-06-03T16:54:27Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:53:28Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448290
num_examples: 1000
download_size: 924486002
dataset_size: 923448290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
recogna-nlp/drbodebench | recogna-nlp | 2025-06-03T16:49:08Z | 18 | 0 | [
"language:pt",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"medico",
"perguntas-e-respostas",
"portugues-brasileiro",
"benchmark-llm",
"multipla-escolha"
] | [] | 2025-06-02T19:25:02Z | null | ---
license: apache-2.0
language:
- pt
tags:
- medico
- perguntas-e-respostas
- portugues-brasileiro
- benchmark-llm
- multipla-escolha
---
# Brazilian Medical Aptitude Test Benchmark: DrBodeBench (DBB)
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64afacf35de9402882fdd69d/naZWT4umxSc4veItz76MS.jpeg" alt="Bode Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
This dataset introduces a new benchmark for evaluating medical large language models (LLMs) in Brazilian Portuguese, addressing a critical gap in AI evaluation for healthcare applications in non-English contexts. It is built from Brazilian medical aptitude tests spanning 2011-2024, including the National Exam for the Revalidation of Medical Diplomas Issued by Foreign Higher Education Institutions (Revalida) and the University of São Paulo Residency Admission Exams (FUVEST). The benchmark enables extensive evaluation of both specialist and general-purpose LLMs. Its purpose is to establish a solid foundation for evaluating and advancing medical language models in Portuguese, creating a standardized framework to guide development toward more effective, equitable, and culturally appropriate AI systems for healthcare in Brazil.
The Revalida data covers multiple years between 2011-2024, organized into separate text files for each year. The year 2017 was excluded because the data was unavailable at collection time, while the 2020 data was not included in this initial version due to extraction difficulties related to the exam format.
The main goal in creating this benchmark was to identify language models that could provide the most accurate answers to medical questions and demonstrate superior comprehension and generation of responses in Brazilian Portuguese.
## How to Use the Dataset
The dataset is designed to evaluate LLM performance on medical question-answering tasks in Brazilian Portuguese. Each instance consists of a medical question, multiple-choice alternatives, and the correct answer. For questions that originally included images, textual descriptions of those images are provided.
Models can be prompted with the question (`enunciado`) and its associated image description (if `contains_img` is true and `img_description` is available), followed by the multiple-choice options (`alternativas`). The model's task is to select the correct alternative. The expected response format, as used in the original study, consisted solely of the letter of the correct alternative (A, B, C, D, or E). The prompt used in the paper also specified that the model should refrain from justifying its selection or including any text beyond the letter of the chosen alternative.
Example prompt structure used in the paper:
1. An instruction stating that the model will receive a medical question and must select the correct answer.
2. Three examples demonstrating the expected response format (the letter of the correct alternative).
3. The question statement (`enunciado`).
4. If the question contained an image, its description (`img_description`) was appended immediately after the statement.
5. The alternatives (`alternativas`) were presented.
Each question was submitted to the model individually. Evaluation was conducted using only the first letter generated by the model.
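As an illustration, the prompt assembly and answer extraction described above can be sketched in Python. The instruction wording and helper names below are paraphrased for illustration, not the exact prompt used in the paper:

```python
def build_prompt(enunciado, alternativas, img_description=None, few_shot=""):
    """Assemble a prompt following steps 1-5; field names mirror the dataset columns."""
    instruction = ("Você receberá uma questão médica. Responda apenas com a letra "
                   "da alternativa correta, sem justificativa.")
    parts = [instruction, few_shot, enunciado]
    if img_description:  # appended immediately after the statement (step 4)
        parts.append(img_description)
    # present the alternatives as "A) ...", "B) ...", sorted by letter
    parts.append("\n".join(f"{k}) {v}" for k, v in sorted(alternativas.items())))
    return "\n\n".join(p for p in parts if p)

def extract_answer(generation):
    """The paper's evaluation used only the first letter the model generated."""
    text = generation.strip().upper()
    return text[0] if text and text[0] in "ABCDE" else None
```

A response such as `"C) porque..."` is scored as `C`, while output that does not begin with a valid letter is counted as unparseable.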
## Dataset Structure
The dataset consists of question-answer pairs with associated metadata.
### Data Fields
The dataset contains the following fields:
* `origem` (string): The source of the question (e.g., "INEP" for the Revalida exams, since INEP, an institute, organizes Revalida).
* `prova` (int64): The year in which the aptitude test was administered (ranging from 2011 to 2024).
* `numero` (int64): The question number within the specific exam.
* `enunciado` (string): The text of the medical question.
* `alternativas` (dict): A dictionary containing the multiple-choice options, where the keys are letters (e.g., "A", "B") and the values are the option texts. There may be up to five options (A, B, C, D, or E).
*Example: `{ "A": "Text for option A.", "B": "Text for option B.", ... }`*
* `resposta` (string): The letter corresponding to the correct alternative.
* `contains_img` (bool): A boolean flag indicating whether the original question included an image.
* `img_path` (string): A relative path to the image file. Images are provided separately (see [Link to Extracted Images](#link-to-extracted-images)). This field is `null` if `contains_img` is false.
* `img_description` (string): A textual description of the image, generated with the help of GPT-4o mini. This field is `null` if `contains_img` is false or if no description was generated. Note that "these descriptions do not necessarily represent an ideal medical technical description and, although included for completeness, will be refined in future versions of the dataset".
## Link to Extracted Images
The images accompanying some questions in this dataset were extracted manually. You can download them at the following link:
[Link to Google Drive.](https://drive.google.com/file/d/1ddRzCDU8mDwNsvMzlYTZ4Jzpzi5bEcoD/view?usp=drive_link)
The `img_path` field in the dataset should correspond to the file paths within this linked archive.
## Curation
This benchmark was created to fill the critical gap in standardized evaluation tools for medical LLMs in Brazilian Portuguese. Existing Brazilian Portuguese resources are severely limited, with the available datasets suffering from shortcomings such as a lack of clinical validation or the use of archaic terminology. The absence of valid Brazilian Portuguese medical datasets and standardized benchmarks hinders model development, systematic evaluation, and comparison. This benchmark aims to provide a standardized framework to guide development toward more effective, equitable, and culturally appropriate AI systems for healthcare in Brazil.
### Source Data
The data was collected from Brazilian medical aptitude exams for graduates:
* The National Exam for the Revalidation of Medical Diplomas Issued by Foreign Higher Education Institutions (Revalida). The Revalida data covers multiple years between 2011-2024, organized into separate text files for each year. The year 2017 was excluded because the data was unavailable at collection time, while the 2020 data was not included in this initial version due to extraction difficulties related to the exam format.
* University of São Paulo Residency Admission Exams (FUVEST). For the FUVEST Residency Exam, only the general practice exam was used in the initial version of this benchmark.
### Data Collection and Cleaning
The data collection process involved:
1. Extracting question and answer data from the Revalida and FUVEST exams.
2. Using regular expressions (regex) to isolate questions and remove irrelevant content such as footers and headers.
3. Resolving issues such as misplaced spaces between words and encoding inconsistencies.
4. Employing OpenAI's GPT-4o mini to automatically rectify the text, reassemble words that had been split erroneously, and resolve encoding problems.
5. Conducting a manual review to ensure accuracy after the automated cleaning. Text comparison tools were used to check word-level differences between the original questions and the reassembled versions, and discrepancies were corrected.
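Steps 2-3 might look like the following sketch. The footer pattern and cleanup rules here are illustrative stand-ins, not the exact expressions used during curation:

```python
import re

def strip_page_furniture(page_text, footer_pattern=r"^REVALIDA\s+\d{4}.*$"):
    """Drop header/footer lines matching a pattern and repair split words."""
    lines = [l for l in page_text.splitlines()
             if not re.match(footer_pattern, l.strip())]
    text = "\n".join(lines)
    # rejoin words hyphenated across line breaks: "trata-\nmento" -> "tratamento"
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # collapse runs of spaces/tabs left behind by extraction
    text = re.sub(r"[ \t]{2,}", " ", text)
    return text
```

Remaining artifacts that regex could not fix (e.g., words split without a hyphen, mojibake) were handled by the GPT-4o mini pass and the manual review described in steps 4-5.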
### Annotations
* **Answers:** The `resposta` field contains the correct letter for each multiple-choice question, as given in the source exams.
* **Image Descriptions:** For questions that included images, textual descriptions (`img_description`) were created with the help of GPT-4o mini. These descriptions were included for completeness, since the initial version of the benchmark did not directly use image inputs for models without visual capabilities. It is explicitly stated that "these descriptions do not necessarily represent an ideal medical technical description and, although included for completeness, will be refined in future versions of the dataset".
## Citation Information
If you use this dataset in your research, please cite the original paper:
```bibtex
@inproceedings{garcia2025step,
title={A Step Forward for Medical LLMs in Brazilian Portuguese: Establishing a Benchmark and a Strong Baseline},
author={Garcia, Gabriel Lino and Manesco, João Renato Ribeiro and Paiola, Pedro Henrique and Ribeiro, Pedro Henrique Crespan and Garcia, Ana Lara Alves and Papa, João Paulo},
booktitle={Proceedings of the 38th IEEE International Symposium on Computer-Based Medical Systems (CBMS 2025)},
year={2025},
address={Madrid, Spain},
month={June}
}
```
## License Information
The license for this dataset is Apache 2.0.
The official license text is in English. |
levinius/MNLP_distilled_preference_ds | levinius | 2025-06-03T16:44:42Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:44:37Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: source
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 24657058.435946036
num_examples: 16144
- name: validation
num_bytes: 2740012.564053964
num_examples: 1794
download_size: 14771309
dataset_size: 27397071.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
diabolocom/talkbank_4_stt | diabolocom | 2025-06-03T16:43:11Z | 359 | 2 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"multilinguality:multilingual",
"language:en",
"language:de",
"language:es",
"language:fr",
"language:zh",
"license:cc-by-nc-sa-3.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2409.12042",
"region:us"
] | [
"automatic-speech-recognition",
"text-to-speech",
"text-to-audio"
] | 2024-09-19T13:46:35Z | null | ---
language:
- en
- de
- es
- fr
- zh
license:
- cc-by-nc-sa-3.0
multilinguality:
- multilingual
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-to-audio
pretty_name: talkbank_4_stt
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcript
dtype: string
- name: language_code
dtype: string
- name: subset
dtype: string
- name: full_language
dtype: string
- name: switch_id
dtype: string
- name: segment_id
dtype: string
- name: transcript_filename
dtype: string
- name: audio_len_sec
dtype: string
- name: orig_file_start
dtype: string
- name: orig_file_end
dtype: string
- name: channel
dtype: string
- name: speaker_id
dtype: string
splits:
- name: segment_jp
num_bytes: 227582609.856
num_examples: 23272
- name: segment_es
num_bytes: 677020941.0
num_examples: 57150
- name: segment_zh
num_bytes: 213524031.376
num_examples: 23364
- name: segment_en
num_bytes: 533951011.464
num_examples: 32092
- name: segment_fr
num_bytes: 264513055.952
num_examples: 14796
- name: segment_ess
num_bytes: 49557412.0
num_examples: 494
- name: segment_de
num_bytes: 127896413.16
num_examples: 12944
- name: switch_jp
num_bytes: 437307191.0
num_examples: 318
- name: switch_es
num_bytes: 1597337075.24
num_examples: 2130
- name: switch_zh
num_bytes: 381994059.12
num_examples: 1691
- name: switch_en
num_bytes: 835237103.0
num_examples: 908
- name: switch_fr
num_bytes: 481521047.0
num_examples: 281
- name: switch_ess
num_bytes: 54158091.0
num_examples: 9
- name: switch_de
num_bytes: 415139340.0
num_examples: 701
download_size: 6218452927
dataset_size: 6296739381.167999
configs:
- config_name: default
data_files:
- split: segment_jp
path: data/segment_jp-*
- split: segment_es
path: data/segment_es-*
- split: segment_zh
path: data/segment_zh-*
- split: segment_en
path: data/segment_en-*
- split: segment_fr
path: data/segment_fr-*
- split: segment_ess
path: data/segment_ess-*
- split: segment_de
path: data/segment_de-*
- split: switch_jp
path: data/switch_jp-*
- split: switch_es
path: data/switch_es-*
- split: switch_zh
path: data/switch_zh-*
- split: switch_en
path: data/switch_en-*
- split: switch_fr
path: data/switch_fr-*
- split: switch_ess
path: data/switch_ess-*
- split: switch_de
path: data/switch_de-*
---
# Dataset Card
## Dataset Description
This dataset is a benchmark based on the TalkBank [1] corpus, a large multilingual repository of conversational speech that captures real-world, unstructured interactions. We use CA-Bank [2], which focuses on phone conversations between adults and includes natural speech phenomena such as laughter, pauses, and interjections. To keep the dataset accurate and suitable for benchmarking conversational ASR systems, we apply an extensive set of pre-processing steps.
## Preprocessing Steps
We apply the following preprocessing steps to ensure the dataset’s quality:
- Manual filtering of conversations
- Speaker-channel alignment
- Timestamp alignment using voice activity detection (VAD)
- Discarding segments based on Word Error Rate (WER) thresholds
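As an illustration of the last step, a minimal WER computation and threshold check might look like the sketch below. This is not the paper's implementation, and the threshold value is an assumption chosen for the example:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)


def keep_segment(reference: str, hypothesis: str, threshold: float = 0.5) -> bool:
    """Discard a segment when the reference/hypothesis WER exceeds the threshold."""
    return word_error_rate(reference, hypothesis) <= threshold
```

Here the hypothesis would come from a baseline ASR pass over the segment, and segments whose transcript disagrees too strongly with it are dropped.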
## Paper and Code Repository
For a comprehensive explanation of the preprocessing pipeline and dataset details, refer to our paper [ASR Benchmarking: The Need for a More Representative Conversational Dataset](https://arxiv.org/abs/2409.12042) and explore our [GitHub repository](https://github.com/Diabolocom-Research/ConversationalDataset) for code and additional resources.
## Segmentation Types: Speaker Switch vs Annotation
We offer two types of segmentation for this dataset:
- **Annotation-based Segmentation**: Segments are derived directly from the annotations provided in the original TalkBank corpus.
- **Speaker Switch Segmentation**: We consolidate consecutive segments from the same speaker into a single, larger audio segment, providing an alternative structure for analysis.
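A minimal sketch of the speaker-switch consolidation described above, assuming each annotation-level segment carries `speaker`, `start`, `end`, and `text` fields (these field names are illustrative, not the dataset's actual column names):

```python
def merge_speaker_turns(segments):
    """Collapse consecutive segments from the same speaker into single turns."""
    merged = []
    for seg in segments:
        if merged and merged[-1]["speaker"] == seg["speaker"]:
            # Same speaker as the previous turn: extend it.
            merged[-1]["end"] = seg["end"]
            merged[-1]["text"] += " " + seg["text"]
        else:
            # Speaker switch: start a new turn (copy so inputs stay untouched).
            merged.append(dict(seg))
    return merged
```

The speaker-switch splits in this dataset correspond to the output of this kind of merge, with the audio concatenated over the merged time span.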
## Citations
If you use this dataset, please cite:
```
@article{maheshwari2024asr,
title={ASR Benchmarking: Need for a More Representative Conversational Dataset},
author={Maheshwari, Gaurav and Ivanov, Dmitry and Johannet, Th{\'e}o and Haddad, Kevin El},
journal={arXiv preprint arXiv:2409.12042},
year={2024}
}
```
In addition, please acknowledge the TalkBank dataset:
```
@article{macwhinney2010transcribing,
title={Transcribing, searching and data sharing: The CLAN software and the TalkBank data repository},
author={MacWhinney, Brian and Wagner, Johannes},
  journal={Gespr{\"a}chsforschung: Online-Zeitschrift zur verbalen Interaktion},
volume={11},
pages={154},
year={2010},
publisher={NIH Public Access}
}
```
## Licensing Information
This dataset is released under the [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0) license.
## References
[1]: MacWhinney, Brian. "TalkBank: Building an open unified multimodal database of communicative interaction." (2004).
[2]: MacWhinney, Brian, and Johannes Wagner. "Transcribing, searching and data sharing: The CLAN software and the TalkBank data repository." Gesprächsforschung: Online-Zeitschrift zur verbalen Interaktion 11 (2010): 154.
|
un1c0rnio/eval_act_so101_box_pencil3_140000 | un1c0rnio | 2025-06-03T16:41:34Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-03T16:41:10Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 9,
"total_frames": 13523,
"total_tasks": 1,
"total_videos": 18,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:9"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.base": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.extside": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
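The metadata above is enough to derive simple per-episode statistics. For example, the average episode length in frames and seconds (with the relevant values hard-coded from the `info.json` shown here):

```python
import json

# Values copied from the meta/info.json above.
info = json.loads('{"total_episodes": 9, "total_frames": 13523, "fps": 30}')

avg_frames = info["total_frames"] / info["total_episodes"]  # mean frames per episode
avg_seconds = avg_frames / info["fps"]                      # mean duration per episode
print(f"avg episode length: {avg_frames:.0f} frames (~{avg_seconds:.1f} s)")
```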
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
tingshiuanlai/motherese-prosody-data | tingshiuanlai | 2025-06-03T16:39:23Z | 118 | 0 | [
"license:cc-by-4.0",
"region:us",
"prosody",
"speech-features",
"librispeech"
] | [] | 2025-05-25T21:22:08Z | null | ---
license: cc-by-4.0
tags:
- prosody
- speech-features
- librispeech
---
# Prosody Features for train-clean-100
This repository contains a pickled `ProsodyFeatureExtractor` object trained on the LibriSpeech `train-clean-100` and `dev-clean` subsets.
## Contents
- Word-level prosodic features
- F0, energy, duration, pause, prominence
- Extracted using CELEX-based stress localization
## Format
- `.pkl` file — can be loaded using `pickle.load(open(..., "rb"))`
- Compatible with JSON serialization
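A minimal sketch of the load-and-serialize flow. The record below is an illustrative stand-in for the word-level features (the field names are assumptions); the actual pickled object in this repository is a `ProsodyFeatureExtractor`:

```python
import json
import pickle

# Stand-in for the word-level feature records described above.
features = [
    {"word": "hello", "f0_mean": 182.4, "energy": 0.61,
     "duration": 0.42, "pause_after": 0.10, "prominence": 1.3},
]

# pickle round-trip, mirroring how the repository's .pkl file is loaded ...
blob = pickle.dumps(features)
restored = pickle.loads(blob)

# ... and the JSON serialization the card advertises.
json_str = json.dumps(restored)
```

For the real file, replace the in-memory round-trip with `pickle.load(open("<path-to-pkl>", "rb"))`.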
|
ChavyvAkvar/synthetic-trades-BNB-batch-3 | ChavyvAkvar | 2025-06-03T16:31:46Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:30:51Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450370
num_examples: 1000
download_size: 924490854
dataset_size: 923450370
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|