datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---
qanta-challenge/advcal-llm-cache-old | qanta-challenge | 2025-06-04T05:10:29Z | 1,293 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-11T16:19:52Z | null | ---
dataset_info:
features:
- name: key
dtype: string
- name: model
dtype: string
- name: system
dtype: string
- name: prompt
dtype: string
- name: response_format
struct:
- name: properties
struct:
- name: answer
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: answers
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: confidence
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: explanation
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: final_answer
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: final_confidence
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: final_explanation
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: justification
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: question_length
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: reasoning_space
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: required
sequence: string
- name: title
dtype: string
- name: type
dtype: string
- name: temperature
dtype: float64
- name: response
dtype: string
splits:
- name: train
num_bytes: 250140394
num_examples: 78001
download_size: 30815430
dataset_size: 250140394
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qanta-challenge/advcal-llm-cache | qanta-challenge | 2025-06-04T05:10:29Z | 1,293 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-11T16:19:52Z | null | ---
dataset_info:
features:
- name: key
dtype: string
- name: model
dtype: string
- name: system
dtype: string
- name: prompt
dtype: string
- name: response_format
struct:
- name: properties
struct:
- name: answer
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: answers
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: confidence
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: explanation
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: final_answer
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: final_confidence
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: final_explanation
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: justification
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: question_length
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: reasoning_space
struct:
- name: description
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: required
sequence: string
- name: title
dtype: string
- name: type
dtype: string
- name: temperature
dtype: float64
- name: response
dtype: string
splits:
- name: train
num_bytes: 250140394
num_examples: 78001
download_size: 30815430
dataset_size: 250140394
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fongks/APOFront | fongks | 2025-06-04T04:59:42Z | 147 | 3 | [
"license:cc-by-4.0",
"region:us"
] | [] | 2025-05-27T23:15:31Z | null | ---
license: cc-by-4.0
---
# Dataset Card for APO Front for Symbolic Regression
<!-- Provide a quick summary of the dataset. -->
Dataset accompanying the paper "Pareto-Optimal Fronts for Benchmarking Symbolic Regression Algorithms".
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
Symbolic Regression (SR) is the task of finding the best closed-form expression that describes the relationship between variables in a dataset. Traditionally, SR algorithms select the best expression based on prediction performance (i.e., R-squared score). However, it is also important that the eventual expressions produced by SR algorithms are short, in order to maintain the key strength of SR: producing explainable white-box models.

In this context, SR algorithms can be evaluated by measuring the extent to which the discovered expressions are Pareto-optimal, in the sense of having the best R-squared score for the achieved expression size. However, this evaluation is most commonly done in terms of relative performance: an SR algorithm is judged on whether it Pareto-dominates the other SR algorithms selected in the analysis. In this paper, we instead explore absolute Pareto-optimal solutions, which have the optimal tradeoff between the multiple SR objectives (i.e., R-squared and expression size), by conducting an empirical investigation that exhaustively searches expressions of selected sizes.

Specifically, we find an absolute Pareto-optimal (APO) front of expressions for real-world datasets from SRBench, using an algorithm that exhaustively searches K-expressions from gene expression programming to conveniently control the length of expressions. Additionally, we utilize a range of numerical optimization methods for SR and analyze the performance differences, to evaluate the choice of the commonly used Broyden–Fletcher–Goldfarb–Shanno (BFGS) numerical optimization method in most state-of-the-art SR algorithms.
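As a minimal illustration of the Pareto-front idea (a sketch, not the paper's exhaustive-search algorithm), the Pareto-optimal subset of (expression size, R-squared) pairs — minimize size, maximize R-squared — can be computed as follows; the candidate values are made up for illustration:

```python
def pareto_front(candidates):
    """Return the Pareto-optimal subset of (size, r2) pairs:
    keep an expression only if no other expression is at most as
    large and has a strictly better R-squared score."""
    front = []
    for size, r2 in sorted(candidates):  # ascending size, then ascending r2
        if front and front[-1][0] == size:
            # Same size as the last kept entry: after sorting, this r2 is
            # at least as good, so it supersedes the previous one.
            front[-1] = (size, r2)
        elif not front or r2 > front[-1][1]:
            # Larger size is only worth keeping if r2 strictly improves.
            front.append((size, r2))
    return front

candidates = [(3, 0.52), (5, 0.48), (5, 0.71), (9, 0.70), (11, 0.93)]
print(pareto_front(candidates))  # → [(3, 0.52), (5, 0.71), (11, 0.93)]
```

Note that (5, 0.48) and (9, 0.70) are dropped: each is dominated by a smaller or equal-sized expression with a better score.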
We extract, for every real-world dataset, an APO front of expressions that can serve as a universal baseline for SR algorithms, informing researchers of the best achievable performance at selected sizes (folder Extracted_APO_Fronts). We also contribute raw data for every expression evaluated, its corresponding optimized numerical parameters, and its performance for real-world datasets in SRBench (APOv3.zip and APOv4.zip).
- **Curated by:** Kei Sen Fong (National University of Singapore)
- **License:** cc-by-4.0
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
APO fronts, obtained using several random seeds and various numerical optimization methods, are directly available in the Extracted_APO_Fronts folder as CSV files. The file names follow the format: "{dataset_name}\_{Method}\_{head_length}\_{random_state}_summary.csv". For more metrics and complexity measures, please see "APO_MoreMetricsWithCode.zip".
These fronts are intended to be used in conjunction with symbolic regression benchmark results on real-world datasets; for example, an available benchmark result from NeurIPS 2021 is at https://github.com/cavalab/srbench [1]. The folder Extracted_APO_Fronts contains the APO fronts extracted from the folders APOv3 and APOv4. Use the raw data in APOv3 and APOv4 for the optimized numerical parameters and R-squared score of ALL expressions evaluated in our work (not just the APO front).
[1] La Cava, W., Orzechowski, P., Burlacu, B., de França, F. O., Virgolin, M., Jin, Y., Kommenda, M., & Moore, J. H. (2021). Contemporary Symbolic Regression Methods and their Relative Performance. NeurIPS Track on Datasets and Benchmarks.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The folder Extracted_APO_Fronts contains the APO fronts extracted from the files APOv3 and APOv4. Use the raw data in APOv3 and APOv4 for the optimized numerical parameters and R-squared score of ALL expressions evaluated in our work (not just the APO front).
With reference to the Algorithm in the paper, the file names follow the format: "{dataset_name}\_{Method}\_{head_length}\_{random_state}.csv".
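The naming scheme can be reproduced with plain string formatting; a minimal sketch, in which the dataset name and field values are illustrative rather than taken from the repository:

```python
def apo_filename(dataset_name, method, head_length, random_state, summary=False):
    """Build an APO-front file name following the described pattern.
    summary=True yields the Extracted_APO_Fronts "_summary.csv" variant."""
    suffix = "_summary" if summary else ""
    return f"{dataset_name}_{method}_{head_length}_{random_state}{suffix}.csv"

# Hypothetical example values:
print(apo_filename("strogatz_bacres1", "BFGS", 7, 0, summary=True))
# → strogatz_bacres1_BFGS_7_0_summary.csv
```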
All data files are in CSV format with 7 main columns (depending on the file type, there may be additional columns):
1. EquationLength: Length of the equation
2. EquationStructure: Structure of the equation without numerical parameters, xdata refers to the real-world dataset and x refers to EquationParameters
3. EquationLambda: Structure of the equation without numerical parameters in code form, xdata refers to the real-world dataset and x refers to EquationParameters
4. EquationParameters: Numerical parameters optimized by the numerical optimization method chosen
5. NumericalIterations: Number of iterations taken by the numerical optimization method chosen
6. MSE: Mean squared error of the equation
7. R2: R-squared score of the equation
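A minimal sketch of reading one of these files with only the Python standard library; the single row below is made up for illustration, not taken from the dataset:

```python
import csv
import io

# A tiny stand-in for one of the result CSVs, with the 7 main columns.
sample = io.StringIO(
    "EquationLength,EquationStructure,EquationLambda,EquationParameters,"
    "NumericalIterations,MSE,R2\n"
    "5,x0*xdata[0]+x1,lambda x: x[0]*xdata[0]+x[1],\"[1.2, -0.3]\",34,0.021,0.87\n"
)

rows = list(csv.DictReader(sample))
# Pick the row with the best R-squared score (trivial here with one row).
best = max(rows, key=lambda r: float(r["R2"]))
print(best["EquationLength"], best["R2"])  # → 5 0.87
```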
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
Contains the data for an APO front for each of the 34 real-world datasets used in SRBench. These APO fronts serve as a useful baseline for benchmarking and inform SR researchers about the efficiency and limits of state-of-the-art SR algorithms.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The code used to generate the raw data files from APOv3 and APOv4 is found in the Code folder.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Kei Sen Fong. The computational work for this article was (partially) performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg).
## Dataset Card Contact
Kei Sen Fong (fongkeisen@u.nus.edu)
|
ko-vlm/K-LLaVA-W | ko-vlm | 2025-06-04T04:30:22Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:30:11Z | null | ---
dataset_info:
features:
- name: images
dtype: image
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 20063061.0
num_examples: 60
download_size: 8322916
dataset_size: 20063061.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ChavyvAkvar/synthetic-trades-BTC-batch-33 | ChavyvAkvar | 2025-06-04T04:28:17Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:27:19Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450626
num_examples: 1000
download_size: 924503851
dataset_size: 923450626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Allen-UQ/cora_2_hop_nei_aug | Allen-UQ | 2025-06-04T04:21:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:21:15Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 8067719
num_examples: 1133
- name: validation
num_bytes: 26585828
num_examples: 3727
- name: test
num_bytes: 109633444
num_examples: 15277
download_size: 71028871
dataset_size: 144286991
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ChavyvAkvar/synthetic-trades-ADA-batch-6 | ChavyvAkvar | 2025-06-04T04:04:54Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T04:03:49Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923454456
num_examples: 1000
download_size: 924461804
dataset_size: 923454456
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hr16/ViVoicePP | hr16 | 2025-06-04T03:52:34Z | 915 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:text",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-04-12T13:49:54Z | null | ---
license: apache-2.0
---
|
ljnlonoljpiljm/stockimage-1.5M-scored-high-similarity | ljnlonoljpiljm | 2025-06-04T03:34:35Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:50:26Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: similarity
dtype: float64
splits:
- name: train
num_bytes: 22490818813.355957
num_examples: 575394
download_size: 22354991127
dataset_size: 22490818813.355957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
prerit2k/eval_act_bench01_21_2 | prerit2k | 2025-06-04T03:15:40Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-04T03:15:36Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_solo",
"total_episodes": 1,
"total_frames": 841,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_main": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
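The `data_path` and `video_path` entries above are standard Python format strings; a sketch of resolving them for a given episode (the episode and camera values here are illustrative):

```python
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000
episode_index = 0
episode_chunk = episode_index // chunks_size  # episodes are grouped chunks_size per chunk

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
# → data/chunk-000/episode_000000.parquet
print(video_path.format(episode_chunk=episode_chunk,
                        video_key="observation.images.cam_main",
                        episode_index=episode_index))
# → videos/chunk-000/observation.images.cam_main/episode_000000.mp4
```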
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
smirki/postview-commons | smirki | 2025-06-04T02:58:48Z | 0 | 0 | [
"license:fair-noncommercial-research-license",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:58:25Z | null | ---
license: fair-noncommercial-research-license
---
|
ChavyvAkvar/synthetic-trades-BNB-batch-28 | ChavyvAkvar | 2025-06-04T02:53:43Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T02:52:42Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450272
num_examples: 1000
download_size: 924468373
dataset_size: 923450272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ztony0712/motion_prediction | ztony0712 | 2025-06-04T02:52:29Z | 54 | 0 | [
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-15T06:15:07Z | null | ---
dataset_info:
features:
- name: Name
dtype: string
- name: Scenario
dtype: image
- name: Rating
dtype: float64
- name: Deviation
dtype: float64
- name: Percentile
dtype: float64
splits:
- name: val
num_bytes: 647194235.422
num_examples: 4409
download_size: 707151942
dataset_size: 647194235.422
configs:
- config_name: default
data_files:
- split: val
path: data/train-*
license: apache-2.0
language:
- en
pretty_name: Motion Prediction
size_categories:
- 1K<n<10K
---
# Visualization of Motion Prediction Task Case Samples
Check the dataset sample visualizations in the Dataset Viewer.
The sampling procedure is guided by the Elo distribution introduced in our method.
The original dataset is the validation split of the Waymo Open Motion Dataset (WOMD).
Samples/original: 4409/44097
# License
This repository is licensed under the Apache License 2.0. |
GarrieD/toy_in_pot_v2_simple | GarrieD | 2025-06-04T02:47:56Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-04T01:45:42Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# toy_in_pot_v2_simple
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
deariejheng/example_dataset | deariejheng | 2025-06-04T01:39:59Z | 0 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-04T01:39:56Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# example_dataset
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
OpenSound/CapSpeech-SEDB | OpenSound | 2025-06-04T01:39:24Z | 51 | 0 | [
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.02863",
"region:us"
] | [] | 2025-05-11T00:47:59Z | null | ---
dataset_info:
features:
- name: audio_path
dtype: string
- name: text
dtype: string
- name: source
dtype: string
- name: speech_duration
dtype: float32
- name: pitch
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: speaking_rate
dtype: string
- name: speech_monotony
dtype: string
- name: caption
dtype: string
- name: intrinsic_tags
sequence: string
- name: situational_tags
sequence: string
- name: basic_tags
sequence: string
- name: all_tags
sequence: string
- name: accent
dtype: string
- name: noise
dtype: string
splits:
- name: train
num_bytes: 271725
num_examples: 500
download_size: 108674
dataset_size: 271725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nc-4.0
---
# CapSpeech-SEDB
SFT dataset used for the paper: ***CapSpeech: Enabling Downstream Applications in Style-Captioned Text-to-Speech***
This dataset is used for the CapTTS-SE task.
Please refer to [CapSpeech](https://huggingface.co/datasets/OpenSound/CapSpeech) for the whole dataset.
## Overview
🔥 CapSpeech is a new benchmark designed for style-captioned TTS (**CapTTS**) tasks, including style-captioned text-to-speech synthesis with sound effects (**CapTTS-SE**), accent-captioned TTS (**AccCapTTS**), emotion-captioned TTS (**EmoCapTTS**) and text-to-speech synthesis for chat agent (**AgentTTS**).
CapSpeech comprises over **10 million machine-annotated** audio-caption pairs and nearly **0.36 million human-annotated** audio-caption pairs. **3 new speech datasets** are specifically designed for the CapTTS-SE and AgentTTS tasks to enhance the benchmark’s coverage of real-world scenarios.

## License
⚠️ All resources are under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
## Citation
If you use this dataset, the models or the repository, please cite our work as follows:
```bibtex
@misc{wang2025capspeechenablingdownstreamapplications,
title={CapSpeech: Enabling Downstream Applications in Style-Captioned Text-to-Speech},
author={Helin Wang and Jiarui Hai and Dading Chong and Karan Thakkar and Tiantian Feng and Dongchao Yang and Junhyeok Lee and Laureano Moro Velazquez and Jesus Villalba and Zengyi Qin and Shrikanth Narayanan and Mounya Elhiali and Najim Dehak},
year={2025},
eprint={2506.02863},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2506.02863},
}
``` |
mmqm/m196k-v2 | mmqm | 2025-06-04T01:31:12Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T01:20:03Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
dtype: string
- name: answer_idx
dtype: int64
- name: source
dtype: string
- name: metadata
dtype: string
- name: prompt
dtype: string
- name: answer_letter
dtype: string
- name: answer_string
dtype: string
splits:
- name: train
num_bytes: 272272328
num_examples: 196657
download_size: 153708159
dataset_size: 272272328
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yangfengzzz/so101_test10 | yangfengzzz | 2025-06-04T01:30:34Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-04T01:29:24Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 10,
"total_frames": 8733,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
EvoGym/robots | EvoGym | 2025-06-04T01:20:40Z | 0 | 0 | [
"task_categories:robotics",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2201.09863",
"region:us",
"robotics",
"soft-robotics",
"voxel-robot",
"reinforcement learning"
] | [
"robotics"
] | 2025-06-04T00:04:39Z | null | ---
dataset_info:
features:
- name: uid
dtype: string
- name: body
sequence:
sequence: int64
- name: connections
sequence:
sequence: int64
- name: reward
dtype: float64
- name: env_name
dtype: string
- name: generated_by
dtype: string
splits:
- name: train
num_bytes: 62889336
num_examples: 90563
download_size: 6965556
dataset_size: 62889336
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- robotics
- soft-robotics
- voxel-robot
- reinforcement learning
size_categories:
- 10K<n<100K
license: cc-by-nc-4.0
task_categories:
- robotics
---
Evolution Gym is a large-scale benchmark for co-optimizing the design and control of soft robots. It provides a lightweight soft-body simulator wrapped with a gym-like interface for developing learning algorithms. EvoGym also includes a suite of 32 locomotion and manipulation tasks, detailed on our [website](https://evolutiongym.github.io/all-tasks). Task suite evaluations are described in our [NeurIPS 2021 paper](https://arxiv.org/pdf/2201.09863).
<img src="https://github.com/EvolutionGym/evogym/raw/main/images/teaser-low-res.gif" alt="teaser" style="width: 50%; display: block; margin: auto;" />
In this dataset, we open-source 90k+ annotated robot structures from the EvoGym paper. The fields of each robot in the dataset are as follows:
- `uid` *(str)*: Unique identifier for the robot
- `body` *(int64 np.ndarray)*: 2D array indicating the voxels that make up the robot
- `connections` *(int64 np.ndarray)*: 2D array indicating how the robot's voxels are connected. In this dataset, all robots are fully-connected, meaning that all adjacent voxels are connected
- `reward` *(float)*: reward achieved by the robot's policy
- `env_name` *(str)*: Name of the EvoGym environment (task) the robot was trained on
- `generated_by` *("Genetic Algorithm" | "Bayesian Optimization" | "CPPN-NEAT")*: Algorithm used to generate the robot
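As an illustration of the relationship between `body` and `connections` (a sketch under stated assumptions, not EvoGym's own code): for a fully-connected robot, the connection list pairs every two adjacent non-empty voxels. Assuming `0` encodes an empty voxel and voxels are indexed by flattened row-major position:

```python
def adjacent_connections(body):
    """Return pairs of flat voxel indices for all horizontally or
    vertically adjacent non-empty voxels in a 2D body grid (0 = empty)."""
    h, w = len(body), len(body[0])
    pairs = []
    for r in range(h):
        for c in range(w):
            if body[r][c] == 0:
                continue
            if c + 1 < w and body[r][c + 1] != 0:  # right neighbor
                pairs.append((r * w + c, r * w + c + 1))
            if r + 1 < h and body[r + 1][c] != 0:  # bottom neighbor
                pairs.append((r * w + c, (r + 1) * w + c))
    return pairs

# A hypothetical 2x2 body: two voxels on top, one at bottom-right, one empty cell.
body = [[3, 3],
        [0, 4]]
print(adjacent_connections(body))  # → [(0, 1), (1, 3)]
```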
If you find this dataset helpful to your research, please cite our paper:
```
@article{bhatia2021evolution,
title={Evolution gym: A large-scale benchmark for evolving soft robots},
author={Bhatia, Jagdeep and Jackson, Holly and Tian, Yunsheng and Xu, Jie and Matusik, Wojciech},
journal={Advances in Neural Information Processing Systems},
volume={34},
year={2021}
}
``` |
zijian2022/vis4 | zijian2022 | 2025-06-04T00:58:34Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-04T00:58:28Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 20,
"total_frames": 4780,
"total_tasks": 1,
"total_videos": 40,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
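The `data_path` and `video_path` entries above are Python format-string templates. A minimal sketch of resolving them (plain Python; the template strings and `chunks_size` are taken from the `info.json` above, the helper name is illustrative):

```python
# Resolve LeRobot's episode file paths from the meta/info.json templates.
# chunks_size groups episodes into chunks of 1000, so the chunk index
# is the episode index floor-divided by that size.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
chunks_size = 1000

def episode_paths(episode_index: int, video_key: str) -> tuple[str, str]:
    chunk = episode_index // chunks_size
    return (
        data_path.format(episode_chunk=chunk, episode_index=episode_index),
        video_path.format(episode_chunk=chunk, episode_index=episode_index,
                          video_key=video_key),
    )

print(episode_paths(3, "observation.images.laptop"))
# -> ('data/chunk-000/episode_000003.parquet',
#     'videos/chunk-000/observation.images.laptop/episode_000003.mp4')
```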
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
patcdaniel/phytoplankton-test-dataset-360k | patcdaniel | 2025-06-04T00:57:57Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T00:51:20Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Akashiwo
'1': Akashiwo_dividing
'2': Alexandrium
'3': Amylax_Gonyaulax_Protoceratium
'4': Asterionellopsis
'5': Asteromphalus
'6': Bad_blurred
'7': Bad_mixed_phyto
'8': Bad_setae
'9': Centric
'10': Centric_fuzzy
'11': Ceratium_furca
'12': Ceratium_lineatum
'13': Chaetoceros
'14': Ciliate_cutoff
'15': Ciliate_large_2
'16': Ciliate_other_morpho_1
'17': Clusterflagellate_morpho_1
'18': Clusterflagellate_morpho_2
'19': Cryptophyte
'20': Cylindrotheca_Nitzschia
'21': Detonula_Cerataulina_Lauderia
'22': Detritus
'23': Detritus_infection
'24': Dictyocha
'25': Dinoflagellate_morpho_1
'26': Dinoflagellate_morpho_2
'27': Dinophysis
'28': Ditylum
'29': Entomoneis
'30': Eucampia
'31': Euglenoid
'32': Flagellate_morpho_1
'33': Flagellate_morpho_3
'34': Flagellate_nano_1
'35': Flagellate_nano_2
'36': Fragilariopsis
'37': Guinardia_Dactyliosolen
'38': Gymnodinium
'39': Gymnodinium_dividing
'40': Gyrodinium
'41': Gyrosigma
'42': Hemiaulus
'43': Hemiselmis
'44': Heterocapsa_morpho_1
'45': Heterocapsa_morpho_2
'46': Heterosigma_akashiwo
'47': Laboea
'48': Leptocylindrus
'49': Margalefidinium
'50': Mesodinium
'51': Nano_cluster
'52': Nano_p_white
'53': Odontella
'54': Pennate_med
'55': Pennate_short
'56': Peridinium
'57': Phaeocystis
'58': Pleurosigma
'59': Prorocentrum_narrow
'60': Prorocentrum_narrow_dividing
'61': Prorocentrum_wide
'62': Pseudo-nitzschia
'63': Pseudo-nitzschia_singlet
'64': Pyramimonas
'65': Rhizosolenia
'66': Scrippsiella
'67': Skeleonema
'68': Skeletonema
'69': Spiky_packman_elogated
'70': Spiky_pacman_circular
'71': Stombidinium_morpho_1
'72': Strombidium_morpho_2
'73': Thalassionema
'74': Thalassiosira
'75': Tiarina
'76': Tontonia
'77': Torodinium
'78': Tropidoneis
'79': Unknown_morpho_1
'80': Vicicitus
- name: label_name
dtype: string
splits:
- name: train
num_bytes: 8392092237.1
num_examples: 364675
download_size: 4781582716
dataset_size: 8392092237.1
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
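The `class_label` feature stores integer ids that map positionally onto the `names` list above (in `datasets`, this mapping is exposed via `ClassLabel.int2str`/`str2int`). A minimal sketch of the same lookup in plain Python, showing only the first few of the 81 names for illustration:

```python
# class_label ids map positionally onto the names list from the card.
# Only a handful of the 81 classes are reproduced here.
names = ["Akashiwo", "Akashiwo_dividing", "Alexandrium",
         "Amylax_Gonyaulax_Protoceratium", "Asterionellopsis"]

def int2str(label_id: int) -> str:
    return names[label_id]

def str2int(name: str) -> int:
    return names.index(name)

print(int2str(2))                  # -> Alexandrium
print(str2int("Asterionellopsis"))  # -> 4
```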
|
Scottie201/text_files_with_embeddings | Scottie201 | 2025-06-04T00:44:30Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T00:43:47Z | null | ---
dataset_info:
features:
- name: content
dtype: string
- name: filename
dtype: string
- name: relative_path
dtype: string
- name: full_path
dtype: string
- name: num_words
dtype: int32
- name: source
dtype: string
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 1764879411
num_examples: 6104
- name: validation
num_bytes: 90528520
num_examples: 1527
download_size: 161716276
dataset_size: 1855407931
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
AdityaMayukhSom/HMS | AdityaMayukhSom | 2025-06-04T00:42:16Z | 40 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-01T22:57:20Z | null | ---
dataset_info:
features:
- name: PII
dtype: string
- name: ArticleAbstract
dtype: string
- name: CorrectHighlight
dtype: string
- name: HallucinatedHighlight
dtype: string
splits:
- name: TRAIN
num_bytes: 39645837
num_examples: 17101
- name: VALIDATION
num_bytes: 4425482
num_examples: 1985
- name: TEST
num_bytes: 4059881
num_examples: 1840
download_size: 27646455
dataset_size: 48131200
configs:
- config_name: default
data_files:
- split: TRAIN
path: data/TRAIN-*
- split: VALIDATION
path: data/VALIDATION-*
- split: TEST
path: data/TEST-*
---
|
ChavyvAkvar/synthetic-trades-BNB-batch-22 | ChavyvAkvar | 2025-06-04T00:31:57Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T00:30:51Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450173
num_examples: 1000
download_size: 924491588
dataset_size: 923450173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AthenaAgent42/jee_papers_subset | AthenaAgent42 | 2025-06-04T00:21:18Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T00:21:16Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: id
dtype: string
- name: subject
dtype: string
- name: chapter
dtype: string
- name: topic
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: correct_option
dtype: string
- name: correct_answer
dtype: string
- name: explanation
dtype: string
- name: Type
dtype: string
- name: paper_id
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: option3
dtype: string
- name: option4
dtype: string
- name: __index_level_0__
dtype: int64
- name: pass_rate
dtype: int64
- name: pass16_correct_count
dtype: int64
splits:
- name: train
num_bytes: 806186
num_examples: 476
download_size: 319482
dataset_size: 806186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-47 | ChavyvAkvar | 2025-06-04T00:08:24Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T00:07:22Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923447810
num_examples: 1000
download_size: 924481628
dataset_size: 923447810
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-BNB-batch-21 | ChavyvAkvar | 2025-06-04T00:03:49Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T00:02:51Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450790
num_examples: 1000
download_size: 924490325
dataset_size: 923450790
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-BTC-batch-21 | ChavyvAkvar | 2025-06-03T23:48:52Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:47:56Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450867
num_examples: 1000
download_size: 924481067
dataset_size: 923450867
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rexhaif/wmt22-24 | Rexhaif | 2025-06-03T23:26:18Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:24:34Z | null | ---
dataset_info:
features:
- name: lp
dtype: string
- name: src
dtype: string
- name: ref
dtype: string
- name: hyp
dtype: string
- name: system
dtype: string
- name: score
dtype: float64
- name: score_name
dtype: string
- name: example_id
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 224627496
num_examples: 378505
download_size: 39437195
dataset_size: 224627496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mmcarpi/carolina-150M-bertimbau | mmcarpi | 2025-06-03T23:23:07Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:16:23Z | null | ---
dataset_info:
features:
- name: meta
dtype: string
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: length
dtype: int64
splits:
- name: corpus
num_bytes: 7801019756.599199
num_examples: 723582
download_size: 975029044
dataset_size: 7801019756.599199
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
---
|
zhengbang0707/hh_train | zhengbang0707 | 2025-06-03T23:17:19Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:16:52Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reject
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_token
sequence: int64
- name: reject_token
sequence: int64
- name: chosen_mask
sequence: int64
- name: reject_mask
sequence: int64
- name: num_turn
dtype: int64
splits:
- name: train
num_bytes: 10548295974
num_examples: 156466
download_size: 289072151
dataset_size: 10548295974
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TAUR-dev/SIE_EVAL__testing_full_run2__rl__results | TAUR-dev | 2025-06-03T23:11:31Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:11:30Z | null | ---
dataset_info:
features:
- name: task
dtype: string
- name: alias
dtype: string
- name: exact_match,none
dtype: float64
- name: exact_match_stderr,none
dtype: string
- name: extracted_answers,none
dtype: int64
- name: extracted_answers_stderr,none
dtype: string
splits:
- name: train
num_bytes: 316
num_examples: 5
download_size: 3022
dataset_size: 316
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
matthewchung74/apa-1_0y-5min-bars | matthewchung74 | 2025-06-03T23:11:29Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:11:27Z | null | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: timestamp
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
- name: trade_count
dtype: float64
- name: vwap
dtype: float64
configs:
- config_name: default
data_files:
- split: train
path: data/apa_1_0_years_5min.csv
download_size: 1578814
dataset_size: 19732
---
# APA 5-Minute Stock Data (1.0 Years)
This dataset contains 1.0 years of APA stock market data downloaded from Alpaca Markets.
## Dataset Description
- **Symbol**: APA
- **Duration**: 1.0 years
- **Timeframe**: 5-minute bars
- **Market Hours**: 9:30 AM - 4:00 PM EST only
- **Data Source**: Alpaca Markets API
- **Last Updated**: 2025-06-03
## Features
- `symbol`: Stock symbol (always "APA")
- `timestamp`: Timestamp in Eastern Time (EST/EDT)
- `open`: Opening price for the 5-minute period
- `high`: Highest price during the 5-minute period
- `low`: Lowest price during the 5-minute period
- `close`: Closing price for the 5-minute period
- `volume`: Number of shares traded
- `trade_count`: Number of individual trades
- `vwap`: Volume Weighted Average Price
## Data Quality
- Only includes data during regular market hours (9:30 AM - 4:00 PM EST)
- Excludes weekends and holidays when markets are closed
- Approximately 19,732 records covering ~1.0 years of trading data
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("matthewchung74/apa-1_0y-5min-bars")
df = dataset['train'].to_pandas()
```
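Note that `vwap` is computed per 5-minute bar, so aggregating it over a longer window requires weighting by each bar's volume rather than taking a plain mean. A minimal sketch in plain Python (the bar values below are hypothetical, for illustration only):

```python
# Aggregate per-bar VWAPs into a session VWAP by weighting each
# bar's vwap with its traded volume.
bars = [
    {"vwap": 30.00, "volume": 1000.0},  # hypothetical 5-min bars
    {"vwap": 30.50, "volume": 3000.0},
    {"vwap": 31.00, "volume": 1000.0},
]

def session_vwap(bars) -> float:
    total_volume = sum(b["volume"] for b in bars)
    return sum(b["vwap"] * b["volume"] for b in bars) / total_volume

print(round(session_vwap(bars), 2))  # -> 30.5
```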
## Price Statistics
- **Price Range**: $13.58 - $33.41
- **Average Volume**: 86,049
- **Date Range**: 2024-06-03 09:30:00-04:00 to 2025-06-03 16:00:00-04:00
## License
This dataset is provided under the MIT license. The underlying market data is sourced from Alpaca Markets.
|
DatologyAI/wikipedia-de-6k_sample | DatologyAI | 2025-06-03T23:11:02Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T23:10:58Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21983213.937647525
num_examples: 6500
download_size: 13406335
dataset_size: 21983213.937647525
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rexhaif/wmt22-23 | Rexhaif | 2025-06-03T23:10:02Z | 53 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T15:13:15Z | null | ---
dataset_info:
features:
- name: lp
dtype: string
- name: src
dtype: string
- name: ref
dtype: string
- name: hyp
dtype: string
- name: system
dtype: string
- name: score
dtype: float64
- name: score_name
dtype: string
- name: example_id
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 136090001
num_examples: 273027
download_size: 24620149
dataset_size: 136090001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CO-Bench/FrontierCO | CO-Bench | 2025-06-03T23:03:01Z | 2,456 | 4 | [
"task_categories:text-generation",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"code"
] | [
"text-generation"
] | 2025-05-16T04:02:55Z | null | ---
license: apache-2.0
task_categories:
- text-generation
tags:
- code
---
# FrontierCO: Benchmark Dataset for Frontier Combinatorial Optimization
## Overview
**FrontierCO** is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world **Combinatorial Optimization (CO)** problems. The benchmark spans **8 classical CO problems** across **5 application domains**, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.
- Agent evaluation code: https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco
- Code for running classical solvers, generating training data, and evaluating neural solvers: https://github.com/sunnweiwei/FrontierCO
---
## Dataset Structure
Each subdirectory corresponds to a specific CO task:
```
FrontierCO/
├── CFLP/
│ ├── easy_test_instances/
│ ├── hard_test_instances/
│ ├── valid_instances/
│ └── config.py
├── CPMP/
├── CVRP/
├── FJSP/
├── MIS/
├── MDS/
├── STP/
├── TSP/
└── ...
```
Each task folder contains:
* `easy_test_instances/`: Benchmark instances that are solvable by SOTA human-designed solvers.
* `hard_test_instances/`: Instances that remain computationally intensive or lack known optimal solutions.
* `valid_instances/` *(if applicable)*: Additional instances for validation or development.
* `config.py`: Metadata about instance format, solver settings, and reference solutions.
---
## Tasks Covered
The benchmark currently includes the following problems:
* **MIS** – Maximum Independent Set
* **MDS** – Minimum Dominating Set
* **TSP** – Traveling Salesman Problem
* **CVRP** – Capacitated Vehicle Routing Problem
* **CFLP** – Capacitated Facility Location Problem
* **CPMP** – Capacitated p-Median Problem
* **FJSP** – Flexible Job-shop Scheduling Problem
* **STP** – Steiner Tree Problem
Each task includes:
* Easy and hard test sets with varying difficulty and practical relevance
* Training and validation instances where applicable, generated using problem-specific generators
* Reference results for classical and ML-based solvers
---
## Data Sources
Instances are sourced from a mix of:
* Public repositories (e.g., [TSPLib](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/), [CVRPLib](http://vrp.galgos.inf.puc-rio.br/))
* DIMACS and PACE Challenges
* Synthetic instance generators used in prior ML and optimization research
* Manual curation from recent SOTA solver evaluation benchmarks
For tasks lacking open benchmarks, we include high-quality synthetic instances aligned with real-world difficulty distributions.
---
## Usage
To use this dataset, clone the repository and select the task of interest. Each `config.py` file documents the format and how to parse or evaluate the instances.
```bash
git clone https://huggingface.co/datasets/CO-Bench/FrontierCO
cd FrontierCO/CFLP
```
Load a data instance
```python
from config import load_data
instance = load_data('easy_test_instances/i1000_1.plc')
print(instance)
```
Generate a solution
```python
# Your solution generation code goes here.
# For example:
solution = my_solver_func(**instance)
```
Evaluate a solution
```python
from config import eval_func
score = eval_func(**instance, **solution)
print("Evaluation score:", score)
```
---
## Citation
If you use **FrontierCO** in your research or applications, please cite the following paper:
```bibtex
@misc{feng2025comprehensive,
title={A Comprehensive Evaluation of Contemporary ML-Based Solvers for Combinatorial Optimization},
author={Shengyu Feng and Weiwei Sun and Shanda Li and Ameet Talwalkar and Yiming Yang},
year={2025},
}
```
---
## License
This dataset is released under the MIT License. Refer to `LICENSE` file for details.
---
|
APTO-Project/patents_7_univ_distill | APTO-Project | 2025-06-03T22:53:58Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:53:42Z | null | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: user_prompt
dtype: string
- name: sentiment_prompt
dtype: string
- name: model_name
dtype: string
- name: think
dtype: string
- name: patent
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1175115142
num_examples: 163608
download_size: 444553164
dataset_size: 1175115142
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jiuyal2/eval_so100_marker | jiuyal2 | 2025-06-03T22:46:00Z | 130 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-29T22:20:43Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 607,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.so100": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.iphone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
psg777/gluepickup102 | psg777 | 2025-06-03T22:34:25Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-03T22:33:50Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.2",
"robot_type": "so101",
"total_episodes": 50,
"total_frames": 35802,
"total_tasks": 1,
"total_videos": 150,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.base": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.gripper": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.bird": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
bicat123/testdata1-part2 | bicat123 | 2025-06-03T22:34:21Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:09:49Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: IDDetect
dtype: string
- name: IDTraking
dtype: int64
- name: image
dtype: image
splits:
- name: train
num_bytes: 110788039.11
num_examples: 1337
download_size: 0
dataset_size: 110788039.11
---
# Dataset Card for "testdata1-part2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
VinceEPFL/mmlu_filtered_subset | VinceEPFL | 2025-06-03T22:30:19Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T22:30:17Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 706243.2889901723
num_examples: 1432
download_size: 317765
dataset_size: 706243.2889901723
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
cezarsolo/so100_test | cezarsolo | 2025-06-03T22:24:25Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-03T22:04:54Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1756,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
med-vlrm/med-vlm-pmc_vqa | med-vlrm | 2025-06-03T22:23:14Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:37:29Z | null | ---
dataset_info:
features:
- name: images
sequence: image
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer_label
dtype: string
- name: answer
dtype: string
- name: dataset_name
dtype: string
- name: hash
dtype: string
- name: dataset_index
dtype: int32
splits:
- name: train
num_bytes: 23386348703.695
num_examples: 176917
download_size: 17407256175
dataset_size: 23386348703.695
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wick1d/Personalized_Safety_Data | wick1d | 2025-06-03T22:20:02Z | 189 | 2 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text2text-generation",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.18882",
"region:us"
] | [
"question-answering",
"text-classification",
"text2text-generation"
] | 2025-05-22T19:32:28Z | null | ---
license: mit
task_categories:
- question-answering
- text-classification
- text2text-generation
language:
- en
pretty_name: Personalized Safety Data for LLMs
size_categories:
- 1K<n<10K
---
# 📦 Personalized Risk and Dilemma Dataset for LLM Safety Research
## 📝 Dataset Summary
This is the **first dataset designed to support research on personalized risk and emotional vulnerability in the context of Large Language Models (LLMs)**.
The dataset contains **8,000+ real-world, anonymized personal queries**, extracted from Reddit and annotated with structured profile metadata, including emotional states, demographic information, and life contexts (e.g., health, relationship, education, etc.).
It enables in-depth study of how LLMs should respond safely, empathetically, and contextually to users under psychological or socioeconomic distress.
---
## 🔍 Key Features
- 🧠 **First personalized risk dataset** for LLM safety and alignment
- 🧩 Rich structured context: mental state, emotion, age, gender, etc.
- ⚠️ Ideal for studying LLM behavior under **vulnerable or sensitive inputs**
- ✅ Fully **anonymized**: no Reddit usernames, post content, URLs, or titles
---
## 📂 Dataset Fields
| Field | Description |
|----------------------|--------------------------------------------------------------|
| `query` | A user-submitted personal question or concern |
| `scenario` | Situation context (e.g., life, health, relationship) |
| `age`, `gender` | Demographic info (when available) |
| `education_level` | Educational background |
| `economic_status` | Financial condition |
| `health_status` | Physical or medical condition |
| `mental_health_status`, `emotional_state` | User-expressed mental and emotional state |
| `source` | Always `"real"` to indicate authenticity |
---
## 🎯 Use Cases
This dataset is ideal for:
- ✅ Text-to-text generation of supportive responses
- ✅ Emotion or scenario classification
- ✅ Risk-sensitive LLM fine-tuning and safety analysis
- ✅ Evaluating empathy and alignment in AI models
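
As a sketch of how risk-sensitive handling of this dataset's records might start (the sample records below are invented for illustration, and the `high_risk` heuristic is an assumption, not part of the dataset):

```python
# Hypothetical records following the field schema documented above.
records = [
    {"query": "I can't sleep and feel hopeless about everything.",
     "scenario": "health", "emotional_state": "distressed",
     "mental_health_status": "anxious", "source": "real"},
    {"query": "How should I budget for my first year of college?",
     "scenario": "education", "emotional_state": "calm",
     "mental_health_status": "stable", "source": "real"},
]

# Assumed heuristic: flag records whose emotional state suggests extra care.
RISK_STATES = {"distressed", "hopeless", "panicked"}

def high_risk(record):
    return record["emotional_state"] in RISK_STATES

flagged = [r for r in records if high_risk(r)]
print(len(flagged))  # -> 1
```

A safety evaluation pipeline could route flagged records to stricter response checks before scoring model outputs.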
---
## 🔒 Ethical & Legal Notice
This dataset is derived from public Reddit content and processed for **non-commercial, research-only** use.
- All identifying elements (e.g., URLs, usernames, full post texts) have been removed
- Dataset is compliant with Reddit’s [User Agreement](https://www.redditinc.com/policies/user-agreement)
- Please **do not use** for content reconstruction, commercial applications, or profiling
---
## 📚 Citation
> ```bibtex
> @article{wu2025personalized,
> title={Personalized Safety in LLMs: A Benchmark and A Planning-Based Agent Approach},
> author={Wu, Yuchen and Sun, Edward and Zhu, Kaijie and Lian, Jianxun and Hernandez-Orallo, Jose and Caliskan, Aylin and Wang, Jindong},
> journal={arXiv preprint arXiv:2505.18882},
> year={2025}
> }
> ```
### Disclaimer
This dataset is derived from publicly available Reddit content and is intended strictly for **research and educational purposes**. All entries have been stripped of direct user content and identifying information, including post URLs and full post texts.
Please note:
- The original content remains the intellectual property of the respective Reddit users.
- This dataset **does not** include any Reddit usernames, links, or verbatim post bodies.
- The dataset should **not** be used for any commercial purposes or user profiling.
- If you are a content owner and have concerns, please contact us to remove specific data.
By using this dataset, you agree to use it in accordance with Reddit’s [User Agreement](https://www.redditinc.com/policies/user-agreement) and Hugging Face’s [Data Use Policy](https://huggingface.co/docs/hub/security#data-use).
|
cloudy-sfu/Manuka-honey | cloudy-sfu | 2025-06-03T22:06:14Z | 103 | 0 | [
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:document",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-27T03:00:02Z | null | ---
license: cc0-1.0
---
This dataset records Manuka honey prices in New Zealand for the following brands and retailers.
Included brands:
- Egmont
- Arataki
- Manuka doctor
Included retailers:
- Woolworths
- New World
- Egmont gift shop
- Arataki honey
- Manuka doctor
The meanings of the columns in the dataset are as follows.
| Name | Data type | Unit | Description |
| -------------- | --------- | ---- | ------------------------------------------------------------ |
| date | text | | The data is collected at approximately 10:00 on this date in the [Pacific/Auckland](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#AUCKLAND) time zone. |
| brand | text | | The company that produces the pack. |
| retailer | text | | The store where the pack is selling. |
| weight | int | g | Weight of the pack. |
| UMF | float | | See [UMF organization](https://www.umf.org.nz/unique-manuka-factor/). |
| MGO | float | | See [UMF organization](https://www.umf.org.nz/unique-manuka-factor/). |
| price | float | NZD | Price per pack. |
| marginal_price | float | NZD | Price after terms and conditions are applied. For example, if the store promotes a bundle such as "any 2 for \$5" while a single item is \$3, the price is \$3 and the marginal price is \$2.50. |
The current price report plots the price per kilogram against MGO.
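
As a rough sketch of how per-kilogram figures could be derived from the `weight`, `price`, and `marginal_price` columns (the sample pack below is invented):

```python
def price_per_kg(weight_g, price_nzd):
    """Convert a per-pack price (NZD) to NZD per kilogram."""
    return price_nzd / (weight_g / 1000)

# Invented example: a 250 g pack priced at $3, promoted as "any 2 for $5".
pack = {"weight": 250, "price": 3.0, "marginal_price": 2.5}
print(price_per_kg(pack["weight"], pack["price"]))           # -> 12.0 NZD/kg
print(price_per_kg(pack["weight"], pack["marginal_price"]))  # -> 10.0 NZD/kg
```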
|
ChavyvAkvar/synthetic-trades-BTC-batch-16 | ChavyvAkvar | 2025-06-03T21:45:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:44:01Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450625
num_examples: 1000
download_size: 924492054
dataset_size: 923450625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EmaRimoldi/RAG_dataset_test | EmaRimoldi | 2025-06-03T21:41:52Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:36:47Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2360558706
num_examples: 4133298
download_size: 1292282806
dataset_size: 2360558706
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hatemestinbejaia/mmarco_collection_splade_vector | hatemestinbejaia | 2025-06-03T21:33:38Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:46:53Z | null | ---
dataset_info:
features:
- name: id
dtype: int32
- name: sparse_vectors
sequence: bool
splits:
- name: collection
num_bytes: 70805318584
num_examples: 8841823
download_size: 5333493077
dataset_size: 70805318584
configs:
- config_name: default
data_files:
- split: collection
path: data/collection-*
---
|
jsbeaudry/human-creole-text-speech | jsbeaudry | 2025-06-03T21:20:03Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:17:48Z | null | ---
dataset_info:
features:
- name: fileName
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: speaker_id
dtype: string
- name: createdAt
dtype: string
- name: fileSizeBytes
dtype: int64
- name: status
dtype: string
splits:
- name: train
num_bytes: 35896048.0
num_examples: 322
download_size: 35878067
dataset_size: 35896048.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
willx0909/iv_bag_try | willx0909 | 2025-06-03T21:17:58Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"libero",
"easo",
"rlds"
] | [
"robotics"
] | 2025-06-03T20:56:47Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- libero
- easo
- rlds
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "easo",
"total_episodes": 20,
"total_frames": 5314,
"total_tasks": 2,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.joint_angles": {
"dtype": "float32",
"shape": [
7
]
},
"observation.eef_pose": {
"dtype": "float32",
"shape": [
6
]
},
"observation.target_eef_pose": {
"dtype": "float32",
"shape": [
6
]
},
"actions": {
"dtype": "float32",
"shape": [
8
]
},
"observation.images.forward_diagonal_camera_right": {
"dtype": "image",
"shape": [
240,
424,
3
]
},
"observation.images.hand_camera_right": {
"dtype": "image",
"shape": [
240,
424,
3
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ChavyvAkvar/synthetic-trades-XRP-batch-32 | ChavyvAkvar | 2025-06-03T21:11:00Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T21:09:59Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448078
num_examples: 1000
download_size: 924485696
dataset_size: 923448078
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LinaSad/sft_data_100k | LinaSad | 2025-06-03T20:54:22Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:40:31Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1237330978
num_examples: 100000
download_size: 547714204
dataset_size: 1237330978
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyenkhanh87/UMLCode-DeepSeek-32B-Reasoning-RAW | nguyenkhanh87 | 2025-06-03T20:51:41Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:51:32Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: reasoning
dtype: string
- name: uml_code
dtype: string
splits:
- name: train
num_bytes: 14711770
num_examples: 2998
download_size: 5239105
dataset_size: 14711770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
brygotti/unified-0.8M | brygotti | 2025-06-03T20:45:55Z | 48 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-01T21:29:35Z | null | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: question_type
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
- name: relevance_text
dtype: string
- name: relevance_nlp4educ
dtype: float32
- name: relevance_mmlu
dtype: float32
- name: relevance_othereval
dtype: float32
splits:
- name: train
num_bytes: 3331201564
num_examples: 816351
download_size: 1560088775
dataset_size: 3331201564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
futurehouse/answer-or-not-dataset-OPCW-1-2-reworded | futurehouse | 2025-06-03T20:42:19Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:44:19Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: ideal
dtype: string
- name: problem_type
dtype: string
- name: thought
dtype: string
- name: unformatted
dtype: string
splits:
- name: train
num_bytes: 434465
num_examples: 604
download_size: 116278
dataset_size: 434465
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
prerit2k/eval_act_bench01_21_1 | prerit2k | 2025-06-03T20:41:29Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-03T20:41:25Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_solo",
"total_episodes": 1,
"total_frames": 838,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_main": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
french-datasets/bio-datasets_e3c | french-datasets | 2025-06-03T20:40:34Z | 0 | 0 | [
"language:spa",
"language:eus",
"language:fra",
"language:eng",
"language:ita",
"region:us"
] | [] | 2025-06-03T20:39:29Z | null | ---
language:
- spa
- eus
- fra
- eng
- ita
viewer: false
---
This repository is empty; it was created to improve the discoverability of the dataset [bio-datasets/e3c](https://huggingface.co/datasets/bio-datasets/e3c). |
allenai/PRISM | allenai | 2025-06-03T20:40:21Z | 23 | 0 | [
"task_categories:robotics",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"modality:3d",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"robotics",
"grasp-prediction",
"task-oriented-grasping",
"manipulation",
"3d",
"image",
"text"
] | [
"robotics"
] | 2025-06-02T18:15:50Z | null | ---
license: mit
task_categories:
- robotics
language:
- en
tags:
- robotics
- grasp-prediction
- task-oriented-grasping
- manipulation
- 3d
- image
- text
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path: train.csv
- split: test
path: test.csv
---
# PRISM
Purpose-driven Robotic Interaction in Scene Manipulation (PRISM) is a large-scale synthetic dataset for Task-Oriented Grasping featuring cluttered environments and diverse, realistic task descriptions. We use 2365 object instances from ShapeNet-Sem along with stable grasps from ACRONYM to compose 10,000 unique and diverse scenes. Within each scene we capture 10 views, within which there are multiple tasks to be performed. This results in 379k task-grasp samples in total.
The dataset card contains the tasks and corresponding descriptions for the train/test splits. The RGB images, point clouds, segmentation maps, etc. are stored in the accompanying `.tar` archives under `PRISM-train` and `PRISM-test`, and can be retrieved as required for each data sample.
## Data Files
Each `.tar` file contains multiple `<scene_id>.hdf5` files, each with the following structure:
```
view_<i>/
rgb: RGB image as (H,W,3) array
xyz: Back-projected point-cloud (from RGB-D view) as (H,W,3) array of XYZ points
seg: Segmentation map as (H,W) array where each pixel is index of object name in object_names
object_names: List of object names visible in view
normals (optional): Point-cloud normals as (H,W,3) array
view_pose: Camera pose in world frame as (4,4) array
cam_params: Camera intrinsics matrix as (3,3) array
obs_<j>/
grasp_pose: Grasp pose in camera frame as (4,4) array
grasp_point: Point being grasped in camera frame as (3,) array
grasp_point_px: Point being grasped projected onto image plane as (2,) array
annot: YAML-formatted object with the following keys: ["annotation_id", "grasp_description", "object_description", "object_category", "object_id", "grasp_id"]
```
### Reading Data Files
Here's an example of how to extract the required information from the data files to create a `datasets.Dataset` of image, task, and corresponding point, as was used to train [GraspMolmo](https://github.com/abhaybd/GraspMolmo).
```python
import os
import datasets
import huggingface_hub as hf_hub
import h5py
from PIL import Image
import numpy as np
def point_to_xml(grasp_pt: np.ndarray):
if grasp_pt.ndim == 2:
assert grasp_pt.shape == (1, 2)
grasp_pt = grasp_pt[0]
assert grasp_pt.shape == (2,)
point_desc = "Where to grasp the object"
return f"<point x=\"{grasp_pt[0]*100:.1f}\" y=\"{grasp_pt[1]*100:.1f}\" alt=\"{point_desc}\">{point_desc}</point>"
def map_sample(file_loc_map: dict[str, str], ex: dict):
h5_path = file_loc_map[ex["scene_path"]]
with h5py.File(h5_path, "r") as f:
img = Image.fromarray(f[ex["view_id"]]["rgb"][:])
grasp_pt_px = f[ex["view_id"]][ex["obs_id"]]["grasp_point_px"][:]
grasp_pt_px = grasp_pt_px / np.array([img.width, img.height])
task = ex["task"]
prompt = f"Point to the grasp that would accomplish the following task: {task}"
point_xml = point_to_xml(grasp_pt_px)
response = f"In order to accomplish the task \"{task}\", the optimal grasp is described as follows: \"{ex['matching_grasp_desc']}\".\n\n{point_xml}"
return dict(
image=img,
prompt=prompt,
text=response,
style="pointing"
)
def build_pointing_dataset(split: str, num_proc: int = 10) -> datasets.Dataset:
hf_fs = hf_hub.HfFileSystem()
chunks = hf_fs.ls(f"datasets/allenai/PRISM/PRISM-{split}", detail=False)
urls = []
for chunk in chunks:
path = chunk[len("datasets/allenai/PRISM/"):]
urls.append(hf_hub.hf_hub_url(repo_id="allenai/PRISM", filename=path, repo_type="dataset"))
dl_manager = datasets.DownloadManager(dataset_name="allenai/PRISM", record_checksums=False)
paths = dl_manager.download_and_extract(urls)
file_loc_map = {}
for path in paths:
path = str(path)
for file in os.listdir(path):
file_loc_map[file] = os.path.join(path, file)
metadata_ds = datasets.load_dataset("allenai/PRISM", split=split)
dataset = metadata_ds.map(lambda ex: map_sample(file_loc_map, ex), num_proc=num_proc)
return dataset
if __name__ == "__main__":
build_pointing_dataset("train")
build_pointing_dataset("test")
``` |
french-datasets/L3-IA-2025_Questions2 | french-datasets | 2025-06-03T20:37:24Z | 0 | 0 | [
"language:fra",
"region:us"
] | [] | 2025-06-03T20:36:20Z | null | ---
language:
- fra
viewer: false
---
This repository is empty; it was created to improve the discoverability of the dataset [L3-IA-2025/Questions2](https://huggingface.co/datasets/L3-IA-2025/Questions2). |
orcn/v3.2-sqr-img | orcn | 2025-06-03T20:13:25Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:12:15Z | null | ---
dataset_info:
features:
- name: image1
dtype: image
- name: image2
dtype: image
- name: image3
dtype: image
- name: image4
dtype: image
- name: image5
dtype: image
- name: image6
dtype: image
- name: image7
dtype: image
- name: image8
dtype: image
- name: image9
dtype: image
- name: image10
dtype: image
- name: image11
dtype: image
- name: image12
dtype: image
- name: image13
dtype: image
- name: image14
dtype: image
- name: image15
dtype: image
- name: image16
dtype: image
- name: image17
dtype: image
- name: image18
dtype: image
- name: image19
dtype: image
- name: image20
dtype: image
- name: image21
dtype: image
- name: image22
dtype: image
- name: image23
dtype: image
- name: image24
dtype: image
- name: image25
dtype: image
splits:
- name: train
num_bytes: 109518543.0
num_examples: 500
download_size: 109024852
dataset_size: 109518543.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
daqc/ibero-characters-es | daqc | 2025-06-03T20:11:49Z | 0 | 0 | [
"task_categories:video-text-to-text",
"task_categories:text-generation",
"language:es",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"history",
"mitología",
"storytelling"
] | [
"video-text-to-text",
"text-generation"
] | 2025-06-03T19:13:09Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: nombre
dtype: string
- name: pais
dtype: string
- name: descripcion
dtype: string
- name: historia
dtype: string
- name: id
dtype: string
- name: url_fuentes
sequence: string
splits:
- name: train
num_bytes: 1389510
num_examples: 573
download_size: 1219977
dataset_size: 1389510
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- video-text-to-text
- text-generation
language:
- es
tags:
- history
- mitología
- storytelling
size_categories:
- n<1K
---
# Dataset of characters from Ibero-American myths and legends
> ⚠️ This dataset is under active development. We plan to significantly expand the number of records and improve image coverage.
## 📚 Description
A dataset of mythical and legendary characters from Ibero-America, designed to preserve and promote cultural heritage through artificial intelligence.
## 🌟 Motivation and Impact
- 📱 **Digital Preservation**: Conserving Ibero-American cultural heritage
- 🤖 **Cultural AI**: A foundation for culturally sensitive models
- 📖 **Education**: A resource for cultural studies
- 🌍 **Accessibility**: Democratizing myths and legends
### 📝 Entry Format
```json
{
"image": "ruta/a/imagen.webp",
"nombre": "Nombre del Personaje",
"pais": "País de Origen",
"descripcion": "Descripción detallada del personaje",
"historia": "Historia de origen bajo contexto cultural e histórico",
"id": "identificador_unico",
"url_fuentes": "https://fuente.com"
}
```
### 📈 Current Distribution
```json
{
"total_personajes": 573,
"personajes_con_imagen": 18,
"personajes_con_null": 555,
"paises_representados": 22
}
```
## 🔄 Generation Process
1. **Collection**
Manual extraction from public sources on the internet.
2. **Standardization**
Structuring into JSONL format.
3. **Validation**
Manual review of entries and sources.
## 💻 Code and Contributing
- This dataset is part of the project: [Iberotales](https://github.com/mcdaqc/Iberotales/)
- 🤝 Contributions: Issues and pull requests welcome
## 📚 Citation
```
@misc{ibero-characters-es,
title = {Dataset of characters from Ibero-American myths and legends.},
author = {David Quispe},
month = {June},
year = {2025},
url = {https://huggingface.co/datasets/somosnlp-hackathon-2025/ibero-characters-es/}
}
```
---
<p align="center">
  <em>This project was part of the Somos NLP 2025 hackathon.</em><br>
<img src="https://raw.githubusercontent.com/mcdaqc/Iberotales/refs/heads/main/img/logo.png" alt="Somos NLP 2025" width="80" />
</p> |
orcn/v3.2-tri-image | orcn | 2025-06-03T20:07:10Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:06:21Z | null | ---
dataset_info:
features:
- name: image1
dtype: image
- name: image2
dtype: image
- name: image3
dtype: image
- name: image4
dtype: image
- name: image5
dtype: image
- name: image6
dtype: image
- name: image7
dtype: image
- name: image8
dtype: image
- name: image9
dtype: image
- name: image10
dtype: image
- name: image11
dtype: image
- name: image12
dtype: image
- name: image13
dtype: image
- name: image14
dtype: image
- name: image15
dtype: image
- name: image16
dtype: image
- name: image17
dtype: image
- name: image18
dtype: image
- name: image19
dtype: image
- name: image20
dtype: image
- name: image21
dtype: image
- name: image22
dtype: image
- name: image23
dtype: image
- name: image24
dtype: image
- name: image25
dtype: image
splits:
- name: train
num_bytes: 80358257.0
num_examples: 500
download_size: 80004859
dataset_size: 80358257.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rshoff/lerobot | rshoff | 2025-06-03T20:03:53Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-03T19:51:05Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 2,
"total_frames": 1699,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.pole": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Blueeeeee/FND_KDD2020_Embeddings | Blueeeeee | 2025-06-03T20:02:17Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T20:02:06Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: bert_embeddings
sequence: float32
splits:
- name: train
num_bytes: 31576295
num_examples: 4487
- name: test
num_bytes: 3612369
num_examples: 499
download_size: 30960369
dataset_size: 35188664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
berczig/ocr-10k-qa | berczig | 2025-06-03T20:02:08Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:54:29Z | null | ---
dataset_info:
features:
- name: pdf_id
dtype: string
- name: metadata
struct:
- name: dataset
dtype: string
- name: file_path
dtype: string
- name: math_filter_classification
struct:
- name: elementary
dtype: int64
- name: has_exercises
dtype: bool
- name: highschool
dtype: int64
- name: highschool_competition
dtype: int64
- name: research
dtype: int64
- name: university
dtype: int64
- name: university_competition
dtype: int64
- name: paper_score
dtype: float64
- name: qa_json_pair
list:
- name: 'No'
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: formatting_error_count
dtype: string
- name: answer_count
dtype: string
- name: solution_count
dtype: string
- name: classification_stats
struct:
- name: other
dtype: int64
splits:
- name: train
num_bytes: 5384
num_examples: 3
download_size: 10543
dataset_size: 5384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-26 | ChavyvAkvar | 2025-06-03T19:58:13Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T19:57:11Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923447965
num_examples: 1000
download_size: 924486370
dataset_size: 923447965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
prerit2k/Bench01-21 | prerit2k | 2025-06-03T19:48:45Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"first collection"
] | [
"robotics"
] | 2025-06-03T19:48:42Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- first collection
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_solo",
"total_episodes": 1,
"total_frames": 896,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_main": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
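The `data_path` and `video_path` values in `info.json` are Python format-string templates. A minimal sketch of resolving an episode's parquet path — the chunk assignment (`episode_index // chunks_size`) is an assumption about the layout implied by `chunks_size`:

```python
info = {
    "chunks_size": 1000,
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
}

def episode_parquet_path(info: dict, episode_index: int) -> str:
    # Episodes are grouped into fixed-size chunks; the chunk id is assumed
    # to be episode_index // chunks_size.
    chunk = episode_index // info["chunks_size"]
    return info["data_path"].format(episode_chunk=chunk, episode_index=episode_index)

print(episode_parquet_path(info, 0))  # data/chunk-000/episode_000000.parquet
```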
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ConseggioLigure/zenamt-document-level | ConseggioLigure | 2025-06-03T19:46:30Z | 63 | 0 | [
"task_categories:translation",
"multilinguality:multilingual",
"source_datasets:original",
"language:lij",
"language:it",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation"
] | 2025-04-25T07:29:05Z | null | ---
license: cc-by-4.0
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
pretty_name: ZenaMT (document-level)
multilinguality: multilingual
language:
- lij
- it
- en
dataset_info:
features:
- name: lij
dtype: large_string
- name: ita
dtype: large_string
- name: eng
dtype: large_string
- name: source
dtype: large_string
- name: level
dtype: large_string
splits:
- name: train
num_bytes: 4396355
num_examples: 10046
- name: validation
num_bytes: 14176
num_examples: 79
- name: test
num_bytes: 12485
num_examples: 71
download_size: 2527594
dataset_size: 4423016
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# ZenaMT corpus (document-level)
This is an Italian – Ligurian (Genoese) parallel corpus covering a number of domains of cultural relevance to Ligurian speakers. Parts of the corpus also contain aligned English translations, available in the column `eng`. Whenever an English translation is not available, the corresponding column is set to `null`.
This is the **document-level** version of the corpus. Source elements which were available in document form were retained as full documents, rather than being sentence split, and are marked `level: document`. Some source data -- such as example sentences from our dictionary -- only existed as sentences and are marked `level: sentence`.
If you are training a translation model only on sentences, you may be interested in the [sentence-level version of the corpus](https://huggingface.co/datasets/ConseggioLigure/zenamt-sentence-level) which contains the exact same data, but split at the sentence level.
**Note:** This is a living corpus. It will receive updates as the sources it draws from keep growing.
## Sources
| Subcorpus | Domain |
|---------------|--------|
| `dictionary` | Example sentences from our [Italian-Genoese dictionary](https://conseggio-ligure.org/en/dictionary/deize/) and other study materials. |
| `news` | News from our weekly Ligurian news website [O Zinâ](https://www.ozina.org) |
| `proverbs` | Traditional Ligurian proverbs. |
| `literature` | Essays on the history of Ligurian literature. |
| `dialogues` | Scripted dialogues which capture colloquial usage of the language. |
| `web` | Data from several websites managed by our association. |
| `stories` | Short stories. |
| `entities` | Parallel sentences covering Ligurian toponyms and other culturally-relevant named entities. |
| `weather` | User-contributed weather forecasts. |
## Attribution
If you use this corpus in your own work, please cite the following paper:
```bibtex
@inproceedings{haberland-etal-2024-italian,
title = "{I}talian-{L}igurian Machine Translation in Its Cultural Context",
author = "Haberland, Christopher R. and
Maillard, Jean and
Lusito, Stefano",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
year = "2024",
url = "https://aclanthology.org/2024.sigul-1.21",
}
``` |
canaan000/lerobot | canaan000 | 2025-06-03T19:02:36Z | 0 | 0 | [
"language:en",
"license:unknown",
"region:us",
"art"
] | [] | 2025-06-03T19:01:26Z | null | ---
license: unknown
language:
- en
tags:
- art
--- |
Duyynh/gigaspeech2_test_with_noise | Duyynh | 2025-06-03T18:51:02Z | 26 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-02T16:37:09Z | null | ---
license: apache-2.0
---
|
psg777/pickuptest2031 | psg777 | 2025-06-03T18:41:44Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-03T18:41:38Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.2",
"robot_type": "so100",
"total_episodes": 8,
"total_frames": 2186,
"total_tasks": 1,
"total_videos": 24,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:8"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.bird": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
andrewdalpino/CAFA5 | andrewdalpino | 2025-06-03T18:39:43Z | 258 | 1 | [
"task_categories:text-classification",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"proteomics",
"protein",
"gene-ontology"
] | [
"text-classification"
] | 2025-05-12T20:58:50Z | null | ---
dataset_info:
- config_name: all
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 193626393.11677656
num_examples: 128021
- name: test
num_bytes: 21514715.88322343
num_examples: 14225
download_size: 154163126
dataset_size: 215141109.0
- config_name: bp
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 129187926.9
num_examples: 82989
- name: test
num_bytes: 14354214.1
num_examples: 9221
download_size: 107191842
dataset_size: 143542141.0
- config_name: cc
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 76889746.48893577
num_examples: 83620
- name: test
num_bytes: 8544122.511064233
num_examples: 9292
download_size: 64110332
dataset_size: 85433869.0
- config_name: mf
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 69470217.72236988
num_examples: 70773
- name: test
num_bytes: 7719240.277630123
num_examples: 7864
download_size: 63311313
dataset_size: 77189458.0
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- split: test
path: all/test-*
- config_name: bp
data_files:
- split: train
path: bp/train-*
- split: test
path: bp/test-*
- config_name: cc
data_files:
- split: train
path: cc/train-*
- split: test
path: cc/test-*
- config_name: mf
data_files:
- split: train
path: mf/train-*
- split: test
path: mf/test-*
license: apache-2.0
task_categories:
- text-classification
tags:
- proteomics
- protein
- gene-ontology
pretty_name: CAFA 5
size_categories:
- 100K<n<1M
---
# CAFA 5
This is the [CAFA 5](https://www.kaggle.com/competitions/cafa-5-protein-function-prediction) dataset of 142k protein sequences annotated with their gene ontology (GO) terms. The samples are divided into three subsets each containing a set of GO terms that are associated with one of the three subgraphs of the gene ontology - Molecular Function, Biological Process, and Cellular Component. In addition, we provide a stratified train/test split that utilizes term embeddings to distribute term labels equally. The term embeddings are included in the dataset and can be used to stratify custom splits or to search for sequences with similar gene ontologies.
The code to export this dataset can be found [here](https://github.com/andrewdalpino/CAFA5).
## Subsets
The [CAFA 5](https://huggingface.co/datasets/andrewdalpino/CAFA5) dataset is available on HuggingFace Hub and can be loaded using the HuggingFace [Datasets](https://huggingface.co/docs/datasets) library.
The dataset is divided into three subsets according to the GO subgraph that the annotated terms belong to, plus an `all` configuration containing every annotation.
- `all` - All annotations
- `mf` - Only molecular function terms
- `cc` - Only cellular component terms
- `bp` - Only biological process terms
To load the default CAFA 5 dataset with all function annotations you can use the example below.
```python
from datasets import load_dataset
dataset = load_dataset("andrewdalpino/CAFA5")
```
To load a subset of the CAFA 5 dataset use the example below.
```python
dataset = load_dataset("andrewdalpino/CAFA5", "mf")
```
## Splits
We provide a 90/10 `train` and `test` split for your convenience. The splits were determined using a stratified approach which assigns cluster numbers to sequences based on their term embeddings. We've included the stratum IDs so that you can generate additional custom stratified splits as shown in the example below.
```python
from datasets import load_dataset
dataset = load_dataset("andrewdalpino/CAFA5", split="train")
dataset = dataset.class_encode_column("stratum_id")
dataset = dataset.train_test_split(test_size=0.2, stratify_by_column="stratum_id")
```
## Filtering
You can also filter the samples of the dataset like in the example below.
```python
dataset = dataset.filter(lambda sample: sample["length"] <= 2048)
```
## Tokenizing
Some tasks may require you to tokenize the amino acid sequences. In this example, we loop through the samples and add a `tokens` column to store the tokenized sequences.
```python
def tokenize(sample: dict) -> dict:
    tokens = tokenizer.tokenize(sample["sequence"])
    sample["tokens"] = tokens
    return sample
dataset = dataset.map(tokenize, remove_columns="sequence")
```
## Original Dataset
Iddo Friedberg, Predrag Radivojac, Clara De Paolis, Damiano Piovesan, Parnal Joshi, Walter Reade, and Addison Howard. CAFA 5 Protein Function Prediction. https://kaggle.com/competitions/cafa-5-protein-function-prediction, 2023. Kaggle. |
ChavyvAkvar/synthetic-trades-BNB-batch-8 | ChavyvAkvar | 2025-06-03T18:38:51Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:37:53Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450564
num_examples: 1000
download_size: 924491710
dataset_size: 923450564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Franklin0/ReasonGen-R1-SFT-230k | Franklin0 | 2025-06-03T18:37:31Z | 101 | 0 | [
"task_categories:text-to-image",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2505.24875",
"region:us"
] | [
"text-to-image"
] | 2025-05-27T02:07:00Z | null | ---
license: cc-by-4.0
task_categories:
- text-to-image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: type
dtype: string
- name: brief_caption
dtype: string
- name: raw_prompt
dtype: string
- name: sft_prompt
dtype: string
- name: detailed_caption
dtype: string
- name: image
dtype: string
- name: pth
dtype: string
- name: id
dtype: string
- name: aesthetic
dtype: float32
- name: width
dtype: int32
- name: hash
dtype: string
- name: augmented_prompts
struct:
- name: short_caption
dtype: string
- name: paraphrases
sequence: string
- name: tags
sequence: string
- name: varied_captions
sequence: string
- name: object_prompts
sequence: string
- name: augmented_cots
struct:
- name: step_by_step
dtype: string
- name: object_centric
sequence: string
- name: tags
sequence: string
- name: region_descriptions
sequence: string
splits:
- name: train
num_bytes: 64962413152
num_examples: 234681
download_size: 64231774685
dataset_size: 64962413152
---
SFT Dataset for the paper: ["ReasonGen-R1: CoT for Autoregressive Image generation models through SFT and RL"](https://huggingface.co/papers/2505.24875).
Website: https://aka.ms/reasongen
Code: https://github.com/Franklin-Zhang0/Image-RL
Arxiv: https://arxiv.org/abs/2505.24875 |
neginr/phi_24K_qwq_6K | neginr | 2025-06-03T18:34:20Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:33:02Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1692722617.642405
num_examples: 30000
download_size: 813744124
dataset_size: 1692722617.642405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yoona-J/ASR_Wav2Vec_Preprocess_Peripheral_Neuropathy_Dataset | yoona-J | 2025-06-03T18:11:11Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:07:13Z | null | ---
dataset_info:
features:
- name: input_values
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 4248737624.0
num_examples: 18679
- name: valid
num_bytes: 239472624.0
num_examples: 1040
- name: test
num_bytes: 234697040.0
num_examples: 1036
download_size: 4570203659
dataset_size: 4722907288.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
logicalqubit/QA-RRC-DISTRIC-IT-Dataset | logicalqubit | 2025-06-03T18:10:34Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:10:32Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 573243
num_examples: 2260
download_size: 197275
dataset_size: 573243
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
eliasfiz/french-audio-text-pairs | eliasfiz | 2025-06-03T18:02:47Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:02:39Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 128869769.0
num_examples: 94
download_size: 126404138
dataset_size: 128869769.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
voidful/earica_ms | voidful | 2025-06-03T18:02:46Z | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-01T01:55:49Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: reasoning
dtype: string
- name: answer
dtype: string
- name: audio
dtype: audio
- name: index
dtype: int64
- name: raw_yaml
dtype: string
splits:
- name: train
num_bytes: 2239015.0
num_examples: 4
download_size: 554534
dataset_size: 2239015.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
emrecn/testDataset | emrecn | 2025-06-03T17:00:20Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:23:12Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 9715304.0
num_examples: 877
download_size: 9622417
dataset_size: 9715304.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "testDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arnaultsta/MNLP_M3_wikipedia_camel_chunked_300_rest | arnaultsta | 2025-06-03T16:59:17Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T15:56:57Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 123651985
num_examples: 98790
download_size: 67656747
dataset_size: 123651985
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
un1c0rnio/eval_act_so101_box_pencil3_140000 | un1c0rnio | 2025-06-03T16:41:34Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-03T16:41:10Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 9,
"total_frames": 13523,
"total_tasks": 1,
"total_videos": 18,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:9"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.base": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.extside": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
tingshiuanlai/motherese-prosody-data | tingshiuanlai | 2025-06-03T16:39:23Z | 118 | 0 | [
"license:cc-by-4.0",
"region:us",
"prosody",
"speech-features",
"librispeech"
] | [] | 2025-05-25T21:22:08Z | null | ---
license: cc-by-4.0
tags:
- prosody
- speech-features
- librispeech
---
# Prosody Features for train-clean-100
This repository contains a pickled `ProsodyFeatureExtractor` object trained on the LibriSpeech `train-clean-100` and `dev-clean` subsets.
## Contents
- Word-level prosodic features
- F0, energy, duration, pause, prominence
- Extracted using CELEX-based stress localization
## Format
- `.pkl` file — can be loaded using `pickle.load(open(..., "rb"))`
- Compatible with JSON serialization
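A minimal loading sketch — the filename below is a placeholder, and unpickling the real file additionally requires the `ProsodyFeatureExtractor` class to be importable when pickle resolves it:

```python
import pickle

# Round-trip demo with a plain dict standing in for the extractor object.
features = {"word": "speech", "f0": 120.5, "energy": 0.8, "duration": 0.31}
restored = pickle.loads(pickle.dumps(features))
print(restored["f0"])  # 120.5

# Loading the released file works the same way (filename is a placeholder):
# with open("prosody_features.pkl", "rb") as f:
#     extractor = pickle.load(f)
```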
|
mmosoriov/sampleMMOV2_so100_pick_place | mmosoriov | 2025-06-03T16:36:40Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-03T16:36:30Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 1193,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ChavyvAkvar/synthetic-trades-XRP-batch-8 | ChavyvAkvar | 2025-06-03T16:35:01Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:34:04Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448468
num_examples: 1000
download_size: 924423237
dataset_size: 923448468
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cobordism/LC_RL_easy | cobordism | 2025-06-03T16:30:50Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:30:48Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: problem
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 14884949.0
num_examples: 1000
download_size: 14291740
dataset_size: 14884949.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AitorDL/MNLP_DPO_Math | AitorDL | 2025-06-03T16:28:31Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:28:28Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 10312166
num_examples: 5398
download_size: 4506138
dataset_size: 10312166
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ryota-komatsu/libritts-r-mhubert-2000units | ryota-komatsu | 2025-06-03T16:17:09Z | 21 | 0 | [
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-01T11:00:52Z | null | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: units
sequence: int32
- name: transcript
dtype: string
- name: spectrogram
dtype:
array2_d:
shape:
- null
- 80
dtype: float32
splits:
- name: train
num_bytes: 32534599975
num_examples: 354729
- name: dev
num_bytes: 526275690
num_examples: 5736
download_size: 32580521122
dataset_size: 33060875665
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-6 | ChavyvAkvar | 2025-06-03T16:15:58Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:15:06Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448192
num_examples: 1000
download_size: 924486040
dataset_size: 923448192
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
QuanHoangNgoc/cmp_dataset_train | QuanHoangNgoc | 2025-06-03T16:14:42Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T15:38:45Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: audio_file
dtype: string
- name: audio_array16
sequence: float32
splits:
- name: train
num_bytes: 18767276449
num_examples: 15023
download_size: 18765515647
dataset_size: 18767276449
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lukebarousse/data_jobs | lukebarousse | 2025-06-03T16:13:27Z | 10,138 | 42 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-03-21T01:33:05Z | null | ---
license: apache-2.0
---
# 🧠 data_jobs Dataset
A dataset of real-world data analytics job postings from 2023, collected and processed by Luke Barousse.
## Background
I've been collecting data on data job postings since 2022, using a bot to scrape them from Google; the postings themselves come from a variety of sources.
You can find the full dataset at my app [datanerd.tech](https://datanerd.tech).
> [Serpapi](https://serpapi.com/) has kindly supported my work by providing me access to their API. Tell them I sent you and get 20% off paid plans.
## 📘 Data Dictionary
| Column Name | Description | Type | Source |
|-------------------------|-----------------------------------------------------------------------------|--------------|------------------|
| `job_title_short` | Cleaned/standardized job title using BERT model (10-class classification) | Calculated | From `job_title` |
| `job_title` | Full original job title as scraped | Raw | Scraped |
| `job_location` | Location string shown in job posting | Raw | Scraped |
| `job_via` | Platform the job was posted on (e.g., LinkedIn, Jobijoba) | Raw | Scraped |
| `job_schedule_type` | Type of schedule (Full-time, Part-time, Contractor, etc.) | Raw | Scraped |
| `job_work_from_home` | Whether the job is remote (`true`/`false`) | Boolean | Parsed |
| `search_location` | Location used by the bot to generate search queries | Generated | Bot logic |
| `job_posted_date` | Date and time when job was posted | Raw | Scraped |
| `job_no_degree_mention` | Whether the posting explicitly mentions no degree is required | Boolean | Parsed |
| `job_health_insurance` | Whether the job mentions health insurance | Boolean | Parsed |
| `job_country` | Country extracted from job location | Calculated | Parsed |
| `salary_rate` | Indicates if salary is annual or hourly | Raw | Scraped |
| `salary_year_avg` | Average yearly salary (calculated from salary ranges when available) | Calculated | Derived |
| `salary_hour_avg` | Average hourly salary (same logic as yearly) | Calculated | Derived |
| `company_name` | Company name listed in job posting | Raw | Scraped |
| `job_skills` | List of relevant skills extracted from job posting using PySpark | Parsed List | NLP Extracted |
| `job_type_skills` | Dictionary mapping skill types (e.g., 'cloud', 'libraries') to skill sets | Parsed Dict | NLP Extracted |
|
ChavyvAkvar/synthetic-trades-BTC-batch-3 | ChavyvAkvar | 2025-06-03T16:11:24Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:10:21Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923450698
num_examples: 1000
download_size: 924474743
dataset_size: 923450698
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-5 | ChavyvAkvar | 2025-06-03T16:04:59Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:04:01Z | null | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448042
num_examples: 1000
download_size: 924453904
dataset_size: 923448042
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RikSaint/neck_surface_vibration_dataset | RikSaint | 2025-06-03T16:03:04Z | 0 | 0 | [
"language:en",
"license:apache-2.0",
"region:us"
] | [] | 2025-06-03T15:57:04Z | null | ---
license: apache-2.0
language:
- en
pretty_name: O
--- |
anonloftune/insurance-30-sft | anonloftune | 2025-06-03T16:02:45Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T16:02:41Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 14864388
num_examples: 16370
- name: validation
num_bytes: 1713240
num_examples: 1980
download_size: 5913674
dataset_size: 16577628
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
mmmanuel/stackexchange_dpo_stem | mmmanuel | 2025-06-03T16:01:59Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T15:41:26Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: small
num_bytes: 36036813
num_examples: 9794
download_size: 20266001
dataset_size: 36036813
configs:
- config_name: default
data_files:
- split: small
path: data/small-*
---
|
zwa73/SoulTide-ImageData-Dataset | zwa73 | 2025-06-03T15:44:49Z | 475 | 0 | [
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-05T08:20:46Z | null | ---
configs:
- config_name: Akaset
data_files:
- split: categorized
path:
- "character/Akaset/categorized/**/*.png"
- "character/Akaset/categorized/**/*.jpg"
- "character/Akaset/categorized/metadata.csv"
- split: processed
path:
- "character/Akaset/processed/**/*.png"
- "character/Akaset/processed/metadata.csv"
- config_name: Alisa
data_files:
- split: categorized
path:
- "character/Alisa/categorized/**/*.png"
- "character/Alisa/categorized/**/*.jpg"
- "character/Alisa/categorized/metadata.csv"
- split: processed
path:
- "character/Alisa/processed/**/*.png"
- "character/Alisa/processed/metadata.csv"
- config_name: AmaneInori
data_files:
- split: categorized
path:
- "character/AmaneInori/categorized/**/*.png"
- "character/AmaneInori/categorized/**/*.jpg"
- "character/AmaneInori/categorized/metadata.csv"
- split: processed
path:
- "character/AmaneInori/processed/**/*.png"
- "character/AmaneInori/processed/metadata.csv"
- config_name: Andrea
data_files:
- split: categorized
path:
- "character/Andrea/categorized/**/*.png"
- "character/Andrea/categorized/**/*.jpg"
- "character/Andrea/categorized/metadata.csv"
- split: processed
path:
- "character/Andrea/processed/**/*.png"
- "character/Andrea/processed/metadata.csv"
- config_name: Antonina
data_files:
- split: categorized
path:
- "character/Antonina/categorized/**/*.png"
- "character/Antonina/categorized/**/*.jpg"
- "character/Antonina/categorized/metadata.csv"
- split: processed
path:
- "character/Antonina/processed/**/*.png"
- "character/Antonina/processed/metadata.csv"
- config_name: Aoling
data_files:
- split: categorized
path:
- "character/Aoling/categorized/**/*.png"
- "character/Aoling/categorized/**/*.jpg"
- "character/Aoling/categorized/metadata.csv"
- split: processed
path:
- "character/Aoling/processed/**/*.png"
- "character/Aoling/processed/metadata.csv"
- config_name: Asuna
data_files:
- split: categorized
path:
- "character/Asuna/categorized/**/*.png"
- "character/Asuna/categorized/**/*.jpg"
- "character/Asuna/categorized/metadata.csv"
- split: processed
path:
- "character/Asuna/processed/**/*.png"
- "character/Asuna/processed/metadata.csv"
- config_name: Aurora
data_files:
- split: categorized
path:
- "character/Aurora/categorized/**/*.png"
- "character/Aurora/categorized/**/*.jpg"
- "character/Aurora/categorized/metadata.csv"
- split: processed
path:
- "character/Aurora/processed/**/*.png"
- "character/Aurora/processed/metadata.csv"
- config_name: Benten
data_files:
- split: categorized
path:
- "character/Benten/categorized/**/*.png"
- "character/Benten/categorized/**/*.jpg"
- "character/Benten/categorized/metadata.csv"
- split: processed
path:
- "character/Benten/processed/**/*.png"
- "character/Benten/processed/metadata.csv"
- config_name: Cecilia
data_files:
- split: categorized
path:
- "character/Cecilia/categorized/**/*.png"
- "character/Cecilia/categorized/**/*.jpg"
- "character/Cecilia/categorized/metadata.csv"
- split: processed
path:
- "character/Cecilia/processed/**/*.png"
- "character/Cecilia/processed/metadata.csv"
- config_name: Clarice
data_files:
- split: categorized
path:
- "character/Clarice/categorized/**/*.png"
- "character/Clarice/categorized/**/*.jpg"
- "character/Clarice/categorized/metadata.csv"
- split: processed
path:
- "character/Clarice/processed/**/*.png"
- "character/Clarice/processed/metadata.csv"
- config_name: Clotho
data_files:
- split: categorized
path:
- "character/Clotho/categorized/**/*.png"
- "character/Clotho/categorized/**/*.jpg"
- "character/Clotho/categorized/metadata.csv"
- split: processed
path:
- "character/Clotho/processed/**/*.png"
- "character/Clotho/processed/metadata.csv"
- config_name: Colcher
data_files:
- split: categorized
path:
- "character/Colcher/categorized/**/*.png"
- "character/Colcher/categorized/**/*.jpg"
- "character/Colcher/categorized/metadata.csv"
- split: processed
path:
- "character/Colcher/processed/**/*.png"
- "character/Colcher/processed/metadata.csv"
- config_name: Dolores
data_files:
- split: categorized
path:
- "character/Dolores/categorized/**/*.png"
- "character/Dolores/categorized/**/*.jpg"
- "character/Dolores/categorized/metadata.csv"
- split: processed
path:
- "character/Dolores/processed/**/*.png"
- "character/Dolores/processed/metadata.csv"
- config_name: Dora
data_files:
- split: categorized
path:
- "character/Dora/categorized/**/*.png"
- "character/Dora/categorized/**/*.jpg"
- "character/Dora/categorized/metadata.csv"
- split: processed
path:
- "character/Dora/processed/**/*.png"
- "character/Dora/processed/metadata.csv"
- config_name: Dreizehn
data_files:
- split: categorized
path:
- "character/Dreizehn/categorized/**/*.png"
- "character/Dreizehn/categorized/**/*.jpg"
- "character/Dreizehn/categorized/metadata.csv"
- split: processed
path:
- "character/Dreizehn/processed/**/*.png"
- "character/Dreizehn/processed/metadata.csv"
- config_name: Ennis
data_files:
- split: categorized
path:
- "character/Ennis/categorized/**/*.png"
- "character/Ennis/categorized/**/*.jpg"
- "character/Ennis/categorized/metadata.csv"
- split: processed
path:
- "character/Ennis/processed/**/*.png"
- "character/Ennis/processed/metadata.csv"
- config_name: Erinnern
data_files:
- split: categorized
path:
- "character/Erinnern/categorized/**/*.png"
- "character/Erinnern/categorized/**/*.jpg"
- "character/Erinnern/categorized/metadata.csv"
- split: processed
path:
- "character/Erinnern/processed/**/*.png"
- "character/Erinnern/processed/metadata.csv"
- config_name: EtsukazuMiko
data_files:
- split: categorized
path:
- "character/EtsukazuMiko/categorized/**/*.png"
- "character/EtsukazuMiko/categorized/**/*.jpg"
- "character/EtsukazuMiko/categorized/metadata.csv"
- split: processed
path:
- "character/EtsukazuMiko/processed/**/*.png"
- "character/EtsukazuMiko/processed/metadata.csv"
- config_name: Freesia
data_files:
- split: categorized
path:
- "character/Freesia/categorized/**/*.png"
- "character/Freesia/categorized/**/*.jpg"
- "character/Freesia/categorized/metadata.csv"
- split: processed
path:
- "character/Freesia/processed/**/*.png"
- "character/Freesia/processed/metadata.csv"
- config_name: Gawana
data_files:
- split: categorized
path:
- "character/Gawana/categorized/**/*.png"
- "character/Gawana/categorized/**/*.jpg"
- "character/Gawana/categorized/metadata.csv"
- split: processed
path:
- "character/Gawana/processed/**/*.png"
- "character/Gawana/processed/metadata.csv"
- config_name: HagakureRuri
data_files:
- split: categorized
path:
- "character/HagakureRuri/categorized/**/*.png"
- "character/HagakureRuri/categorized/**/*.jpg"
- "character/HagakureRuri/categorized/metadata.csv"
- split: processed
path:
- "character/HagakureRuri/processed/**/*.png"
- "character/HagakureRuri/processed/metadata.csv"
- config_name: Haliva
data_files:
- split: categorized
path:
- "character/Haliva/categorized/**/*.png"
- "character/Haliva/categorized/**/*.jpg"
- "character/Haliva/categorized/metadata.csv"
- split: processed
path:
- "character/Haliva/processed/**/*.png"
- "character/Haliva/processed/metadata.csv"
- config_name: HazukiYuki
data_files:
- split: categorized
path:
- "character/HazukiYuki/categorized/**/*.png"
- "character/HazukiYuki/categorized/**/*.jpg"
- "character/HazukiYuki/categorized/metadata.csv"
- split: processed
path:
- "character/HazukiYuki/processed/**/*.png"
- "character/HazukiYuki/processed/metadata.csv"
- config_name: HeLing
data_files:
- split: categorized
path:
- "character/HeLing/categorized/**/*.png"
- "character/HeLing/categorized/**/*.jpg"
- "character/HeLing/categorized/metadata.csv"
- split: processed
path:
- "character/HeLing/processed/**/*.png"
- "character/HeLing/processed/metadata.csv"
- config_name: Ithil
data_files:
- split: categorized
path:
- "character/Ithil/categorized/**/*.png"
- "character/Ithil/categorized/**/*.jpg"
- "character/Ithil/categorized/metadata.csv"
- split: processed
path:
- "character/Ithil/processed/**/*.png"
- "character/Ithil/processed/metadata.csv"
- config_name: JoanofArcLoire
data_files:
- split: categorized
path:
- "character/JoanofArcLoire/categorized/**/*.png"
- "character/JoanofArcLoire/categorized/**/*.jpg"
- "character/JoanofArcLoire/categorized/metadata.csv"
- split: processed
path:
- "character/JoanofArcLoire/processed/**/*.png"
- "character/JoanofArcLoire/processed/metadata.csv"
- config_name: Juewa
data_files:
- split: categorized
path:
- "character/Juewa/categorized/**/*.png"
- "character/Juewa/categorized/**/*.jpg"
- "character/Juewa/categorized/metadata.csv"
- split: processed
path:
- "character/Juewa/processed/**/*.png"
- "character/Juewa/processed/metadata.csv"
- config_name: LightCloud
data_files:
- split: categorized
path:
- "character/LightCloud/categorized/**/*.png"
- "character/LightCloud/categorized/**/*.jpg"
- "character/LightCloud/categorized/metadata.csv"
- split: processed
path:
- "character/LightCloud/processed/**/*.png"
- "character/LightCloud/processed/metadata.csv"
- config_name: Lilyiro
data_files:
- split: categorized
path:
- "character/Lilyiro/categorized/**/*.png"
- "character/Lilyiro/categorized/**/*.jpg"
- "character/Lilyiro/categorized/metadata.csv"
- split: processed
path:
- "character/Lilyiro/processed/**/*.png"
- "character/Lilyiro/processed/metadata.csv"
- config_name: Louisa
data_files:
- split: categorized
path:
- "character/Louisa/categorized/**/*.png"
- "character/Louisa/categorized/**/*.jpg"
- "character/Louisa/categorized/metadata.csv"
- split: processed
path:
- "character/Louisa/processed/**/*.png"
- "character/Louisa/processed/metadata.csv"
- config_name: Micha
data_files:
- split: categorized
path:
- "character/Micha/categorized/**/*.png"
- "character/Micha/categorized/**/*.jpg"
- "character/Micha/categorized/metadata.csv"
- split: processed
path:
- "character/Micha/processed/**/*.png"
- "character/Micha/processed/metadata.csv"
- config_name: Minerdwen
data_files:
- split: categorized
path:
- "character/Minerdwen/categorized/**/*.png"
- "character/Minerdwen/categorized/**/*.jpg"
- "character/Minerdwen/categorized/metadata.csv"
- split: processed
path:
- "character/Minerdwen/processed/**/*.png"
- "character/Minerdwen/processed/metadata.csv"
- config_name: Mist
data_files:
- split: categorized
path:
- "character/Mist/categorized/**/*.png"
- "character/Mist/categorized/**/*.jpg"
- "character/Mist/categorized/metadata.csv"
- split: processed
path:
- "character/Mist/processed/**/*.png"
- "character/Mist/processed/metadata.csv"
- config_name: NankungLin
data_files:
- split: categorized
path:
- "character/NankungLin/categorized/**/*.png"
- "character/NankungLin/categorized/**/*.jpg"
- "character/NankungLin/categorized/metadata.csv"
- split: processed
path:
- "character/NankungLin/processed/**/*.png"
- "character/NankungLin/processed/metadata.csv"
- config_name: Netsuki
data_files:
- split: categorized
path:
- "character/Netsuki/categorized/**/*.png"
- "character/Netsuki/categorized/**/*.jpg"
- "character/Netsuki/categorized/metadata.csv"
- split: processed
path:
- "character/Netsuki/processed/**/*.png"
- "character/Netsuki/processed/metadata.csv"
- config_name: NicoletteLamel
data_files:
- split: categorized
path:
- "character/NicoletteLamel/categorized/**/*.png"
- "character/NicoletteLamel/categorized/**/*.jpg"
- "character/NicoletteLamel/categorized/metadata.csv"
- split: processed
path:
- "character/NicoletteLamel/processed/**/*.png"
- "character/NicoletteLamel/processed/metadata.csv"
- config_name: Philodoxy
data_files:
- split: categorized
path:
- "character/Philodoxy/categorized/**/*.png"
- "character/Philodoxy/categorized/**/*.jpg"
- "character/Philodoxy/categorized/metadata.csv"
- split: processed
path:
- "character/Philodoxy/processed/**/*.png"
- "character/Philodoxy/processed/metadata.csv"
- config_name: QingHao
data_files:
- split: categorized
path:
- "character/QingHao/categorized/**/*.png"
- "character/QingHao/categorized/**/*.jpg"
- "character/QingHao/categorized/metadata.csv"
- split: processed
path:
- "character/QingHao/processed/**/*.png"
- "character/QingHao/processed/metadata.csv"
- config_name: QuLing
data_files:
- split: categorized
path:
- "character/QuLing/categorized/**/*.png"
- "character/QuLing/categorized/**/*.jpg"
- "character/QuLing/categorized/metadata.csv"
- split: processed
path:
- "character/QuLing/processed/**/*.png"
- "character/QuLing/processed/metadata.csv"
- config_name: RubyRose
data_files:
- split: categorized
path:
- "character/RubyRose/categorized/**/*.png"
- "character/RubyRose/categorized/**/*.jpg"
- "character/RubyRose/categorized/metadata.csv"
- split: processed
path:
- "character/RubyRose/processed/**/*.png"
- "character/RubyRose/processed/metadata.csv"
- config_name: SakuyaMako
data_files:
- split: categorized
path:
- "character/SakuyaMako/categorized/**/*.png"
- "character/SakuyaMako/categorized/**/*.jpg"
- "character/SakuyaMako/categorized/metadata.csv"
- split: processed
path:
- "character/SakuyaMako/processed/**/*.png"
- "character/SakuyaMako/processed/metadata.csv"
- config_name: Satya
data_files:
- split: categorized
path:
- "character/Satya/categorized/**/*.png"
- "character/Satya/categorized/**/*.jpg"
- "character/Satya/categorized/metadata.csv"
- split: processed
path:
- "character/Satya/processed/**/*.png"
- "character/Satya/processed/metadata.csv"
- config_name: Silenus
data_files:
- split: categorized
path:
- "character/Silenus/categorized/**/*.png"
- "character/Silenus/categorized/**/*.jpg"
- "character/Silenus/categorized/metadata.csv"
- split: processed
path:
- "character/Silenus/processed/**/*.png"
- "character/Silenus/processed/metadata.csv"
- config_name: Truda
data_files:
- split: categorized
path:
- "character/Truda/categorized/**/*.png"
- "character/Truda/categorized/**/*.jpg"
- "character/Truda/categorized/metadata.csv"
- split: processed
path:
- "character/Truda/processed/**/*.png"
- "character/Truda/processed/metadata.csv"
- config_name: TsukinoMiyo
data_files:
- split: categorized
path:
- "character/TsukinoMiyo/categorized/**/*.png"
- "character/TsukinoMiyo/categorized/**/*.jpg"
- "character/TsukinoMiyo/categorized/metadata.csv"
- split: processed
path:
- "character/TsukinoMiyo/processed/**/*.png"
- "character/TsukinoMiyo/processed/metadata.csv"
- config_name: Virgina
data_files:
- split: categorized
path:
- "character/Virgina/categorized/**/*.png"
- "character/Virgina/categorized/**/*.jpg"
- "character/Virgina/categorized/metadata.csv"
- split: processed
path:
- "character/Virgina/processed/**/*.png"
- "character/Virgina/processed/metadata.csv"
license: cc0-1.0
---
character
____[char]
______resource - original assets
________rotate - rotated variant assets
________alpha_bg - assets with a transparent background
________white_bg - assets with a white background added to the transparent version
________black_bg - assets with a black background added to the transparent version
________unused - assets not in use
________ready - assets ready for use but not yet categorized
______categorized - fully categorized assets
______processed - preprocessed assets
______training_set - training sets named as [training-steps_concept]
Use this manager to generate the required training sets:
https://github.com/Sosarciel/SoulTide-ImageData-Manager |
apurvaga/go-browse-wa | apurvaga | 2025-06-03T15:30:37Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T15:30:03Z | null | ---
dataset_info:
features:
- name: step_idx
dtype: int64
- name: step_data
struct:
- name: prompt
list:
- name: role
dtype: string
- name: content
dtype: string
- name: completion
list:
- name: role
dtype: string
- name: content
dtype: string
- name: traj_reward
dtype: float64
- name: next_step_idx
dtype: int64
- name: traj_length
dtype: int64
- name: step_number
dtype: int64
splits:
- name: train
num_bytes: 4062141934
num_examples: 164533
download_size: 848108687
dataset_size: 4062141934
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amaurypllx/MNLP_test_dataset | amaurypllx | 2025-06-03T15:22:46Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T15:22:44Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1812781
num_examples: 5342
download_size: 948077
dataset_size: 1812781
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
LiuYuanCheng/rl_think | LiuYuanCheng | 2025-06-03T15:15:02Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T14:55:27Z | null | ---
configs:
- config_name: gsm8k
data_files:
- split: train
path: gsm8k_rl_think/train/gsm8k_rl_think.jsonl
- config_name: kk
data_files:
- split: train
path: kk_rl_think/train/kk_rl_think.jsonl
- config_name: math
data_files:
- split: train
path: math_rl_think/train/math_rl_think.jsonl
- config_name: orca
data_files:
- split: train
path: orca_rl_think/train/orca_rl_think.jsonl
--- |