datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
Yuto2007/scFoundationEmbeddings_Detailed_Clusters | Yuto2007 | 2025-06-03T15:14:48Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T13:48:37Z | null | ---
dataset_info:
features:
- name: Detailed_Cluster_names
dtype: string
- name: input_ids
sequence: float32
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 18877493822
num_examples: 1533093
- name: test
num_bytes: 2359689710
num_examples: 191637
- name: validation
num_bytes: 2359691130
num_examples: 191637
download_size: 24771881770
dataset_size: 23596874662
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
VGraf/self-talk_gpt3.5_gpt4o_prefpairs_with_Meta-Llama-3.1-8B-Instruct_chosen | VGraf | 2025-06-03T15:11:06Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T15:10:54Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 24629
num_examples: 2
download_size: 25672
dataset_size: 24629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cobordism/LC_train_3_no_percep | cobordism | 2025-06-03T15:05:50Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T15:05:47Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 22820687.0
num_examples: 999
download_size: 21800134
dataset_size: 22820687.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thomas-kuntz/MNLP_M2_dpo_dataset | thomas-kuntz | 2025-06-03T14:54:23Z | 57 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-26T15:17:34Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 3777155.9881329113
num_examples: 1011
- name: test
num_bytes: 945223.0118670886
num_examples: 253
download_size: 2440962
dataset_size: 4722379.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
LivingOptics/hyperspectral-grapes | LivingOptics | 2025-06-03T14:48:38Z | 37 | 0 | [
"language:en",
"license:mit",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-01-23T09:22:15Z | null | ---
license: mit
language:
- en
size_categories:
- n<1K
---
# Non-contact sugar estimation with hyperspectral data

## Access the data
You can now access this dataset via the [Living Optics Cloud Portal](https://cloud.livingoptics.com/shared-resources?file=data/annotated-datasets/Grapes-Dataset.zip)
## Motivation
### Precision viticulture and adaptive harvesting
Wine quality is heavily dependent on grape maturity at harvest and can decline by 10% in a week. To maximise quality, the harvest time of grape berries should be optimised for
- **sugar levels**, usually measured as total soluble solids (TSS) or Brix;
- berry acidity, often expressed as pH and titratable acidity (TA);
- concentrations of the main organic acids in the berry, such as tartaric and malic acid; and, for red varieties, anthocyanin and
- total phenol concentrations.
These values are typically analysed using wet chemistry procedures on berries sampled periodically one to three weeks before harvest. These analytical methods are **destructive and require time-consuming berry sampling, as well as sample preparation in most instances**.
### Why Hyperspectral imaging?
Hyperspectral imaging offers a non-destructive, high-throughput method for testing grape berries. These measurements require less specialised labour and no reagents, and can have a relatively low cost per analysis. Hyperspectral imagers, combined with statistical modelling techniques, have been shown to accurately predict grape parameters in a non-destructive manner for table and wine grapes, using methods such as partial least squares regression (PLSR).
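As a rough illustration of the statistical modelling step, the sketch below implements a minimal single-response PLS regression (NIPALS PLS1) in plain NumPy. The data shapes and component count are illustrative assumptions, not the actual Living Optics pipeline.

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Fit single-response PLS regression via the NIPALS algorithm.

    Returns regression coefficients plus the means used for centering.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                    # weight vector: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                       # scores
        tt = t @ t
        p = Xc.T @ t / tt                # X loadings
        q = (yc @ t) / tt                # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - t * q                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T  # shape (n_features, n_components)
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean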
Living Optics are developing pioneering hyperspectral cameras for the mass market. Our mission is to enable the next generation of computer vision through Spatial Spectral Information.

> This is a notebook showing how hyperspectral data, collected with the Living Optics camera, can be paired with statistical analysis to train a regressor for extracting grape parameters.
## Method
Individual grapes were extracted from six boxes of white table grapes (Agrimessina, Italy). These 300 individual table grapes were imaged using a custom lighting rig. The Living Optics camera was mounted on a downward-facing tripod directly above the sample to achieve a 45°/0° imaging geometry. Twelve grape samples were placed on a black PLA tray per imaging round, shown in Figure 9(b). Additionally, a white reference was collected by imaging a sheet of Tyvek in place of the tray. Using an objective lens focal length of 18 mm, approximately 150 sampling points were obtained per grape on average.
After imaging, 3-4 drops of juice from each grape (~0.2 ml) were extracted and measured with a handheld
Brix refractometer (AS-Q6, Aicevoos, China). The error of the instrument is given as ±0.2 °Bx.

## Dataset contains
- 🍇 300 processed diffuse reflectance spectra of white grapes collected with the Living Optics camera
- 🧑🔬 Paired sugar content values for each grape

## Citation
Raw data is available upon request. |
LivingOptics/hyperspectral-plant-virus | LivingOptics | 2025-06-03T14:47:12Z | 51 | 0 | [
"language:en",
"license:mit",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-29T14:11:05Z | null | ---
license: mit
language:
- en
size_categories:
- n<1K
---
# Super Beet Virus Classification Dataset

## Access the Data
You can access this dataset via the [Living Optics Cloud Portal](https://cloud.livingoptics.com/shared-resources?file=data/annotated-datasets/Field-Crop-Classification-Dataset.zip).
## Motivation
### Enhancing Sugar Beet Health Monitoring
Sugar beet crops are susceptible to various viral infections that can significantly impact yield and quality. Early and accurate detection of these viruses is crucial for effective disease management and crop protection.

### Leveraging Hyperspectral Imaging for Disease Classification
Hyperspectral imaging provides a non-destructive and high-throughput method for detecting plant diseases by capturing detailed spectral information. This technology, combined with machine learning techniques, enables the classification of different virus types affecting sugar beet plants.
## Method
The dataset comprises 97 high-resolution images of sugar beet plants, each annotated to indicate the presence of specific viral infections. A total of 146 annotations are included, covering the following classes:
- **BChV (Beet Chlorosis Virus)**: 24 instances
- **BMYV (Beet Mild Yellowing Virus)**: 16 instances
- **BYV (Beet Yellows Virus)**: 24 instances
- **Uninoculated (Healthy Plants)**: 30 instances
Annotations were performed considering clear gaps between plants, ensuring accurate labeling. Some images include white reference targets to aid in spectral calibration.
## Dataset Contains
- 🖼️ 97 images of sugar beet plants under various inoculation statuses
- 🔖 146 annotations across 4 classes (3 virus types and healthy plants)
- 🎯 Labels indicating inoculation status or reference targets
- ⚠️ Note: The dataset exhibits some class imbalance
## Virus Descriptions
- **Beet Chlorosis Virus (BChV)**: A polerovirus causing interveinal yellowing in sugar beet leaves. Transmitted by aphids, BChV can lead to significant yield losses if not managed properly.
- **Beet Mild Yellowing Virus (BMYV)**: Another polerovirus spread by aphids, BMYV results in mild yellowing symptoms and can reduce sugar content in beets.
- **Beet Yellows Virus (BYV)**: A closterovirus known for causing severe yellowing and necrosis in sugar beet leaves. BYV is considered one of the most damaging viruses affecting sugar beet crops.
## Citation
Raw data is available upon request.
For more information on the viruses and their impact on sugar beet crops, refer to the following resources:
- [Virus Yellows - Bayer Crop Science UK](https://cropscience.bayer.co.uk/agronomy-id/diseases/sugar-beet-diseases/virus-yellows-beet)
- [Disease control: Learn about Virus Yellows - NFU](https://www.nfuonline.com/updates-and-information/disease-control-learn-about-virus-yellows/)
- [Beet yellows virus - Wikipedia](https://en.wikipedia.org/wiki/Beet_yellows_virus) |
adamezzaim/M3_mcqa_context | adamezzaim | 2025-06-03T14:39:08Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T14:38:54Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: dataset
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: train
num_bytes: 303770784
num_examples: 35460
download_size: 173517952
dataset_size: 303770784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
baogui123/test | baogui123 | 2025-06-03T14:36:40Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-03T14:36:40Z | null | ---
license: apache-2.0
---
|
CHOOSEIT/MCQA_small_alignment_1000 | CHOOSEIT | 2025-06-03T14:10:43Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T14:10:39Z | null | ---
dataset_info:
features:
- name: source_dataset
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: rationale
dtype: string
- name: split
dtype: string
- name: subject
dtype: string
splits:
- name: train
num_bytes: 3391469
num_examples: 4898
- name: test
num_bytes: 373739
num_examples: 1080
- name: validation
num_bytes: 133973
num_examples: 346
download_size: 2391571
dataset_size: 3899181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
anonloftune/insurance-10-facttune-mc | anonloftune | 2025-06-03T13:42:16Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T13:42:13Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 40642652
num_examples: 23371
- name: validation
num_bytes: 4778090
num_examples: 2866
download_size: 3915129
dataset_size: 45420742
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
while0628/vqasynth_sample_spatial_new_ttt | while0628 | 2025-06-03T13:39:00Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"vqasynth",
"remyx"
] | [] | 2025-06-03T13:38:54Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: messages
sequence: 'null'
splits:
- name: train
num_bytes: 1162100.0
num_examples: 8
download_size: 1163534
dataset_size: 1162100.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- vqasynth
- remyx
---
|
Kyleyee/train_data_Helpful_drdpo_preference | Kyleyee | 2025-06-03T13:36:05Z | 80 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-17T16:07:59Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: a_1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: a_2
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_preference
dtype: float64
- name: rejected_preference
dtype: float64
- name: a_1_preference
dtype: float64
- name: a_2_preference
dtype: float64
splits:
- name: train
num_bytes: 69438428
num_examples: 43835
- name: test
num_bytes: 3812201
num_examples: 2354
download_size: 42617495
dataset_size: 73250629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ManTang034/so101_test | ManTang034 | 2025-06-03T13:32:28Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-03T13:32:11Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 10,
"total_frames": 5960,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
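The `data_path` and `video_path` fields above are Python format strings; as a quick illustration (the episode index here is made up), resolving them looks like:

```python
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

# Episode 7 lives in chunk 0 (chunks_size = 1000, so chunk = episode // 1000).
print(data_path.format(episode_chunk=0, episode_index=7))
# data/chunk-000/episode_000007.parquet
print(video_path.format(episode_chunk=0, video_key="observation.images.wrist", episode_index=7))
# videos/chunk-000/observation.images.wrist/episode_000007.mp4
```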
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Tamnemtf/SGU_BOOK | Tamnemtf | 2025-06-03T13:23:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T13:23:06Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: description
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2632329
num_examples: 2252
download_size: 351845
dataset_size: 2632329
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
youssefbelghmi/MNLP_M3_mcqa_dataset_2 | youssefbelghmi | 2025-06-03T13:16:38Z | 44 | 0 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice"
] | 2025-06-03T11:12:01Z | null | ---
annotations_creators:
- expert-generated
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
pretty_name: MNLP M3 MCQA Dataset
---
# MNLP M3 MCQA Dataset
The **MNLP M3 MCQA Dataset** is a carefully curated collection of **Multiple-Choice Question Answering (MCQA)** examples, unified from several academic and benchmark datasets.
Developed as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025), this dataset is designed for training and evaluating models on multiple-choice QA tasks, particularly in the **STEM** and general knowledge domains.
## Key Features
- ~30,000 MCQA questions
- 6 diverse sources: `SciQ`, `OpenBookQA`, `MathQA`, `ARC-Easy`, `ARC-Challenge`, and `MedMCQA`
- Each question has exactly 4 options (A–D) and one correct answer
- Covers a wide range of topics: science, technology, engineering, mathematics, and general knowledge
## Dataset Structure
Each example is a dictionary with the following fields:
| Field | Type | Description |
|-----------|----------|---------------------------------------------------|
| `dataset` | `string` | Source dataset (`sciq`, `openbookqa`, etc.) |
| `id` | `string` | Unique identifier for the question |
| `question`| `string` | The question text |
| `choices` | `list` | List of 4 answer options (corresponding to A–D) |
| `answer` | `string` | The correct option, as a letter: `"A"`, `"B"`, `"C"`, or `"D"` |
Example:
```json
{
"dataset": "sciq",
"id": "sciq_01_00042",
"question": "What does a seismograph measure?",
"choices": ["Earthquakes", "Rainfall", "Sunlight", "Temperature"],
"answer": "A"
}
```
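Since `answer` is stored as a letter, recovering the answer text from a record like the one above is a small lookup (a sketch using the example record shown, not a live download):

```python
example = {
    "dataset": "sciq",
    "id": "sciq_01_00042",
    "question": "What does a seismograph measure?",
    "choices": ["Earthquakes", "Rainfall", "Sunlight", "Temperature"],
    "answer": "A",
}

def answer_text(record):
    # Map the letter A-D to an index into `choices`.
    return record["choices"][ord(record["answer"]) - ord("A")]

print(answer_text(example))
# Earthquakes
```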
## Source Datasets
This dataset combines multiple high-quality MCQA sources to support research and fine-tuning in STEM education and reasoning. The full corpus contains **29,870 multiple-choice questions** from the following sources:
| Source (Hugging Face) | Name | Size | Description & Role in the Dataset |
| ------------------------------------------- | ------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `allenai/sciq` | **SciQ** | 11,679 | **Science questions** (Physics, Chemistry, Biology, Earth science). Crowdsourced with 4 answer choices and optional supporting evidence. Used to provide **well-balanced, factual STEM questions** at a middle/high-school level. |
| `allenai/openbookqa` | **OpenBookQA** | 4,957 | Science exam-style questions requiring **multi-step reasoning** and use of **commonsense or external knowledge**. Contributes more **challenging** and **inference-based** questions. |
| `allenai/math_qa` | **MathQA** | 5,000 | Subsample of quantitative math word problems derived from AQuA-RAT, annotated with structured answer options. Introduces **numerical reasoning** and **problem-solving** components into the dataset. |
| `allenai/ai2_arc` (config: `ARC-Easy`) | **ARC-Easy** | 2,140 | Science questions at the middle school level. Useful for testing **basic STEM understanding** and **factual recall**. Filtered to retain only valid 4-choice entries. |
| `allenai/ai2_arc` (config: `ARC-Challenge`) | **ARC-Challenge** | 1,094 | More difficult science questions requiring **reasoning and inference**. Widely used as a benchmark for evaluating LLMs. Also filtered for clean MCQA format compatibility. |
| `openlifescienceai/medmcqa` | **MedMCQA** | 5,000 | A subsample of multiple-choice questions on **medical topics** from various exams, filtered for a single-choice format. Contains real-world and domain-specific **clinical reasoning** questions covering various medical disciplines. |
## Intended Applications and Structure
This dataset is split into three parts:
- `train` (~70%) — for training MCQA models
- `validation` (~15%) — for tuning and monitoring performance during training
- `test` (~15%) — for final evaluation on unseen questions
It is suitable for multiple-choice question answering tasks, especially in the **STEM** domain (Science, Technology, Engineering, Mathematics).
## Author
This dataset was created and published by [Youssef Belghmi](https://huggingface.co/youssefbelghmi) as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025).
|
ricdomolm/lawma-reasoning-qwen4b-v0 | ricdomolm | 2025-06-03T13:13:34Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T13:13:06Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 1555621301
num_examples: 272800
download_size: 504545897
dataset_size: 1555621301
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anfindsen/M3_fixed_ds | anfindsen | 2025-06-03T13:10:06Z | 78 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-30T13:09:27Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: openr1_source
dtype: string
- name: id
dtype: string
- name: dataset
dtype: string
- name: choices
sequence: string
splits:
- name: open_train
num_bytes: 261175677.3129466
num_examples: 209341
- name: open_eval
num_bytes: 29020628.687053423
num_examples: 23261
- name: train
num_bytes: 148520607.22336814
num_examples: 99920
- name: test
num_bytes: 16503445.77663187
num_examples: 11103
- name: final_train
num_bytes: 150518.10557768925
num_examples: 451
- name: final_test
num_bytes: 17020.894422310757
num_examples: 51
download_size: 252044861
dataset_size: 455387898.00000006
configs:
- config_name: default
data_files:
- split: open_train
path: data/open_train-*
- split: open_eval
path: data/open_eval-*
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: final_train
path: data/final_train-*
- split: final_test
path: data/final_test-*
---
|
interstellarninja/atropos_salesforce_apigen | interstellarninja | 2025-06-03T13:03:52Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T13:03:27Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 111375968
num_examples: 4574
download_size: 20591781
dataset_size: 111375968
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vietnhat/grandpa-interview-dataset | vietnhat | 2025-06-03T12:35:06Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T12:34:59Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 2356619.0
num_examples: 9
download_size: 2168827
dataset_size: 2356619.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
albertfares/NLP4Education_filtered | albertfares | 2025-06-03T12:04:36Z | 37 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-31T13:15:39Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: num_options
dtype: int64
splits:
- name: train
num_bytes: 978662
num_examples: 2656
download_size: 572520
dataset_size: 978662
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RainS/MEG-Multi-Exposure-Gradient-dataset | RainS | 2025-06-03T11:42:10Z | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | [] | 2025-06-03T08:39:45Z | null | ---
license: mit
---
This is a Multi-Exposure Gradient (MEG) dataset for low-light enhancement and overexposure recovery, containing about 1,000 groups of photos.
The contents cover a variety of scenes such as indoor, outdoor, sunny, rainy, cloudy, nighttime, city, rural, and campus.
Each group contains 5 photos with different exposure gradients, from low light to overexposure, labelled 01 to 05.
We provide two versions. One is continuous-shooting data, where the contents of each group may differ slightly because of scene motion, but the illumination and colour are more natural.
The other is post-processed data, where the contents of each group are strictly the same and the exposure is adjusted in Photoshop.
You can choose the version that fits your requirements: for example, the continuous-shooting data for unsupervised learning and the post-processed data for supervised learning. |
ljnlonoljpiljm/BIGstockimage-1.5M-scored-pt-one | ljnlonoljpiljm | 2025-06-03T11:19:37Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T10:58:10Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: similarity
dtype: float64
splits:
- name: train
num_bytes: 29323222699.0
num_examples: 750000
download_size: 29309357098
dataset_size: 29323222699.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
louisebrix/smk_only_paintings | louisebrix | 2025-06-03T11:13:11Z | 44 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T11:12:56Z | null | ---
dataset_info:
features:
- name: smk_id
dtype: string
- name: period
dtype: string
- name: start_year
dtype: int64
- name: title
dtype: string
- name: first_artist
dtype: string
- name: all_artists
sequence: string
- name: num_artists
dtype: int64
- name: main_type
dtype: string
- name: all_types
sequence: string
- name: image_thumbnail
dtype: string
- name: gender
sequence: string
- name: birth_death
sequence: string
- name: nationality
sequence: string
- name: history
sequence: string
- name: artist_roles
sequence: string
- name: creator_roles
sequence: string
- name: num_creators
dtype: int64
- name: techniques
sequence: string
- name: enrichment_url
dtype: string
- name: content_person
sequence: string
- name: has_text
dtype: bool
- name: colors
sequence: string
- name: geo_location
dtype: string
- name: entropy
dtype: float64
- name: tags_en
sequence: string
- name: image
dtype: image
- name: rgb
dtype: string
splits:
- name: train
num_bytes: 237193836.69
num_examples: 1687
download_size: 232341252
dataset_size: 237193836.69
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sumuks/openalex | sumuks | 2025-06-03T11:11:08Z | 0 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T23:57:53Z | null | ---
dataset_info:
- config_name: authors
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 782950242225
num_examples: 103480180
download_size: 128603695157
dataset_size: 782950242225
- config_name: concepts
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 434567730
num_examples: 65073
download_size: 149586112
dataset_size: 434567730
- config_name: domains
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 5899
num_examples: 4
download_size: 8723
dataset_size: 5899
- config_name: fields
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 56037
num_examples: 26
download_size: 21602
dataset_size: 56037
- config_name: funders
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 127287864
num_examples: 32437
download_size: 27402892
dataset_size: 127287864
- config_name: institutions
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 2247783574
num_examples: 114883
download_size: 391692914
dataset_size: 2247783574
- config_name: publishers
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 35165382
num_examples: 10741
download_size: 7180922
dataset_size: 35165382
- config_name: sources
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 4985902135
num_examples: 260798
download_size: 767043697
dataset_size: 4985902135
- config_name: subfields
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 986219
num_examples: 252
download_size: 245766
dataset_size: 986219
- config_name: topics
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 29540660
num_examples: 4516
download_size: 8240326
dataset_size: 29540660
- config_name: works
features:
- name: id
dtype: string
- name: data
dtype: string
- name: updated_date
dtype: string
splits:
- name: train
num_bytes: 47184276960
num_examples: 2322509
- name: updated_2025_05_28
num_bytes: 44222267882
num_examples: 2634576
- name: updated_2025_05_27
num_bytes: 33156479642
num_examples: 2099881
download_size: 31002445366
dataset_size: 124563024484
configs:
- config_name: authors
data_files:
- split: train
path: authors/train-*
- config_name: concepts
data_files:
- split: train
path: concepts/train-*
- config_name: domains
data_files:
- split: train
path: domains/train-*
- config_name: fields
data_files:
- split: train
path: fields/train-*
- config_name: funders
data_files:
- split: train
path: funders/train-*
- config_name: institutions
data_files:
- split: train
path: institutions/train-*
- config_name: publishers
data_files:
- split: train
path: publishers/train-*
- config_name: sources
data_files:
- split: train
path: sources/train-*
- config_name: subfields
data_files:
- split: train
path: subfields/train-*
- config_name: topics
data_files:
- split: train
path: topics/train-*
- config_name: works
data_files:
- split: train
path: works/train-*
- split: updated_2025_05_28
path: works/updated_2025_05_28-*
- split: updated_2025_05_27
path: works/updated_2025_05_27-*
---
|
jaeyong2/Reason-Qwen3-06B-En-3 | jaeyong2 | 2025-06-03T11:09:11Z | 203 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-25T05:46:08Z | null | ---
dataset_info:
features:
- name: content
dtype: string
- name: response
sequence: string
splits:
- name: train
num_bytes: 2235927107
num_examples: 18000
download_size: 738754934
dataset_size: 2235927107
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hassno/synth_cv_parser_faker | hassno | 2025-06-03T10:53:32Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T10:52:56Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 18326097.0
num_examples: 9000
- name: test
num_bytes: 2036233.0
num_examples: 1000
download_size: 10180009
dataset_size: 20362330.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
haraouikouceil/doc | haraouikouceil | 2025-06-03T10:33:27Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T10:33:22Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 31117380
num_examples: 71452
download_size: 2843773
dataset_size: 31117380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davanstrien/dataset_cards_with_metadata | davanstrien | 2025-06-03T10:20:08Z | 422 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-17T09:48:47Z | null | ---
dataset_info:
features:
- name: datasetId
dtype: large_string
- name: author
dtype: large_string
- name: last_modified
dtype: large_string
- name: downloads
dtype: int64
- name: likes
dtype: int64
- name: tags
large_list: large_string
- name: task_categories
large_list: large_string
- name: createdAt
dtype: large_string
- name: trending_score
dtype: float64
- name: card
dtype: large_string
splits:
- name: train
num_bytes: 110530629
num_examples: 32315
download_size: 30124925
dataset_size: 110530629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EQX55/test_voice2 | EQX55 | 2025-06-03T10:14:58Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T10:14:55Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 17464862.0
num_examples: 26
download_size: 13153100
dataset_size: 17464862.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gaianet/gaianet | gaianet | 2025-06-03T10:14:47Z | 13 | 1 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-05-08T04:01:05Z | null | ---
license: apache-2.0
---
|
daniel-dona/sparql-dataset-reasoning-test3 | daniel-dona | 2025-06-03T10:13:56Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T10:13:51Z | null | ---
dataset_info:
features:
- name: qid
dtype: string
- name: lang
dtype: string
- name: nlq
dtype: string
- name: classes
sequence: string
- name: properties
sequence: string
- name: features
sequence: string
- name: sparql
dtype: string
- name: reasoning
dtype: string
splits:
- name: train
num_bytes: 11712015
num_examples: 2500
download_size: 961054
dataset_size: 11712015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gisako/multiwoz-chat | gisako | 2025-06-03T09:28:19Z | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | 2025-06-03T09:23:31Z | null | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: multiwoz-chat-llama-gpt
size_categories:
- 1K<n<10K
--- |
burtenshaw/testing-dedup-in-space | burtenshaw | 2025-06-03T09:20:07Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T09:19:57Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 146629
num_examples: 2195
download_size: 72048
dataset_size: 146629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yycgreentea/so100_test_v2 | yycgreentea | 2025-06-03T09:17:26Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-03T06:57:34Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1491,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ustc-zyt/time-r1-data | ustc-zyt | 2025-06-03T09:13:46Z | 0 | 0 | [
"task_categories:time-series-forecasting",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"time-series-forecasting"
] | 2025-06-03T09:01:15Z | null | ---
license: apache-2.0
task_categories:
- time-series-forecasting
language:
- en
pretty_name: a
size_categories:
- 1K<n<10K
---
# 📊 Time-R1 RL Training Dataset
This dataset is used in the **Reinforcement Learning (RL)** phase of the paper:
**"Time Series Forecasting as Reasoning: A Slow-Thinking Approach with Reinforced LLMs"**.
---
## 📁 Data Format Overview
The dataset is stored in **Parquet** format. Each sample includes:
| Field | Type | Description |
| -------------- | ------------ | ---------------------------------------------------------------------------- |
| `prompt`       | `list[dict]` | Natural language instruction including a 96-step historical input sequence.   |
| `reward_model` | `dict` | Contains the `ground_truth` field – the target values for the next 96 steps. |
| `data_source` | `string` | Dataset name (e.g., `"ETTh1"`). |
| `ability` | `string` | Task type – here always `"TimeSeriesForecasting"`. |
| `extra_info` | `dict` | Metadata including sample `index` and data `split` (e.g., `"train"`). |
---
## 🧾 Example Sample
```json
{
"prompt": [
{
"content": "Here is the High Useful Load data of the transformer. (dataset is ETTh1)..."
}
],
"data_source": "ETTh1",
"ability": "TimeSeriesForecasting",
"reward_model": {
"ground_truth": "date HUFL\n2016-07-05 00:00:00 11.989\n2016-07-05 01:00:00 12.525\n..."
},
"extra_info": {
"index": 0,
"split": "train"
}
}
```
Each prompt contains structured temporal input (96 steps) in a language-style format.
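As a sketch, the tabular `ground_truth` string shown in the sample above can be parsed back into (timestamp, value) pairs with standard Python; the column name `HUFL` and the two rows come from the example (truncated `"..."` lines are skipped):

```python
def parse_ground_truth(gt: str):
    """Parse the tabular ground-truth string into (timestamp, value) pairs.

    The first line is a header (e.g. "date HUFL"); each following line holds
    a timestamp (date + time, two whitespace-separated tokens) and a float.
    """
    pairs = []
    for line in gt.strip().splitlines()[1:]:  # skip the header line
        parts = line.split()
        if len(parts) < 3:
            continue  # skip truncated lines such as "..."
        timestamp = " ".join(parts[:2])
        pairs.append((timestamp, float(parts[2])))
    return pairs

gt = ("date HUFL\n"
      "2016-07-05 00:00:00 11.989\n"
      "2016-07-05 01:00:00 12.525")
print(parse_ground_truth(gt))
# → [('2016-07-05 00:00:00', 11.989), ('2016-07-05 01:00:00', 12.525)]
```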
The `ground_truth` contains corresponding 96-step future targets with timestamps and values. |
pepijn223/record-test | pepijn223 | 2025-06-03T09:08:47Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-03T09:08:43Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 2,
"total_frames": 510,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
1080,
1920,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 1080,
"video.width": 1920,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
StonyBrook-CVLab/ZoomLDM-demo-dataset | StonyBrook-CVLab | 2025-06-03T09:07:27Z | 333 | 0 | [
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"region:us"
] | [] | 2025-05-19T10:57:35Z | null | ---
license: apache-2.0
language:
- en
size_categories:
- n<1K
---
Demo dataset for our CVPR 2025 paper "ZoomLDM: Latent Diffusion Model for multi-scale image generation". We extract patches from TCGA-BRCA whole-slide images.
## Usage
```python
import numpy as np
from datasets import load_dataset
ds = load_dataset("StonyBrook-CVLab/ZoomLDM-demo-dataset", name="5x", trust_remote_code=True, split='train')
print(np.array(ds[0]['ssl_feat']).shape)
>>> (1024, 16, 16)
```
## Citations
```bibtex
@InProceedings{Yellapragada_2025_CVPR,
author = {Yellapragada, Srikar and Graikos, Alexandros and Triaridis, Kostas and Prasanna, Prateek and Gupta, Rajarsi and Saltz, Joel and Samaras, Dimitris},
title = {ZoomLDM: Latent Diffusion Model for Multi-scale Image Generation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {23453-23463}
}
```
```
@article{lingle2016cancer,
title={The cancer genome atlas breast invasive carcinoma collection (TCGA-BRCA)},
author={Lingle, Wilma and Erickson, Bradley J and Zuley, Margarita L and Jarosz, Rose and Bonaccio, Ermelinda and Filippini, Joe and Net, Jose M and Levi, Len and Morris, Elizabeth A and Figler, Gloria G and others},
year={2016},
publisher={The Cancer Imaging Archive}
}
```
|
3sara/colpali_italian_documents | 3sara | 2025-06-03T09:06:53Z | 129 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-29T17:04:02Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: domanda1
dtype: string
- name: risposta1
dtype: string
- name: domanda2
dtype: string
- name: risposta2
dtype: string
- name: domanda3
dtype: string
- name: risposta3
dtype: string
- name: query_generica
dtype: string
- name: query_specifica
dtype: string
- name: query_visuale
dtype: string
- name: documento
dtype: string
- name: anno
dtype: string
splits:
- name: train
num_bytes: 1465913549.0
num_examples: 934
download_size: 1464198933
dataset_size: 1465913549.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imdatta0/aime | imdatta0 | 2025-06-03T09:05:30Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T09:05:19Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: solution
dtype: string
splits:
- name: aime_2025
num_bytes: 16342
num_examples: 30
- name: aime_2024
num_bytes: 136649
num_examples: 30
download_size: 93497
dataset_size: 152991
configs:
- config_name: default
data_files:
- split: aime_2025
path: data/aime_2025-*
- split: aime_2024
path: data/aime_2024-*
---
|
Nitish906099/dream11-eng-wi-_7 | Nitish906099 | 2025-06-03T08:58:46Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:58:44Z | null | ---
dataset_info:
features:
- name: Name
dtype: string
- name: Mat
dtype: int64
- name: Inns
dtype: int64
- name: 'NO'
dtype: float64
- name: Runs
dtype: int64
- name: Ball
dtype: int64
- name: Avg
dtype: float64
- name: SR
dtype: float64
- name: HS
dtype: int64
- name: 100s
dtype: float64
- name: 50s
dtype: float64
- name: 0s
dtype: float64
- name: 6s
dtype: float64
- name: 4s
dtype: float64
- name: SR.1
dtype: float64
- name: Dream Team
dtype: int64
- name: Tot Pts
dtype: int64
- name: Bat Pts
dtype: int64
- name: Bowl Pts
dtype: float64
- name: Field Pts
dtype: float64
- name: Pace Bowl
dtype: float64
- name: Spin Bowl
dtype: float64
- name: RHB
dtype: float64
- name: LHB
dtype: float64
- name: Match Type
dtype: string
splits:
- name: train
num_bytes: 1027
num_examples: 5
download_size: 10323
dataset_size: 1027
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dream11-eng-wi-_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nitish906099/dream11-eng-wi-___ | Nitish906099 | 2025-06-03T08:58:32Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:58:31Z | null | ---
dataset_info:
features:
- name: Player
dtype: string
- name: Avg Fpts
dtype: float64
- name: Runs
dtype: int64
- name: WK
dtype: int64
- name: RR1
dtype: int64
- name: RR2
dtype: int64
- name: RR3
dtype: int64
- name: RR4
dtype: int64
- name: RR5
dtype: int64
- name: RW1
dtype: int64
- name: RW2
dtype: int64
- name: RW3
dtype: int64
- name: RW4
dtype: int64
- name: RW5
dtype: int64
splits:
- name: train
num_bytes: 622
num_examples: 5
download_size: 5893
dataset_size: 622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dream11-eng-wi-___"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nitish906099/dream11-eng-wi-__ | Nitish906099 | 2025-06-03T08:58:30Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:58:29Z | null | ---
dataset_info:
features:
- name: Player Name
dtype: string
- name: Team
dtype: string
- name: Bowling Style
dtype: string
- name: Avg Fpts
dtype: int64
- name: Avg Fpts Bowling 1st
dtype: string
- name: Avg Fpts Bowling 2nd
dtype: string
- name: Avg Fpts vs Opposition
dtype: string
- name: Avg Fpts at Venue
dtype: string
- name: Wkts
dtype: int64
- name: PP Wkts
dtype: int64
- name: Death Wkts
dtype: int64
- name: Overs
dtype: float64
- name: Bowled PP
dtype: string
- name: Bowled Death
dtype: string
splits:
- name: train
num_bytes: 571
num_examples: 5
download_size: 6095
dataset_size: 571
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dream11-eng-wi-__"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EQUES/YakugakuQA | EQUES | 2025-06-03T08:46:36Z | 68 | 0 | [
"task_categories:question-answering",
"language:ja",
"license:cc-by-sa-4.0",
"arxiv:2505.16661",
"region:us"
] | [
"question-answering"
] | 2025-04-26T03:33:34Z | null | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- ja
viewer: true
columns:
- name: problem_id
type: string
- name: problem_text
type: string
- name: choices
type: list[string]
- name: text_only
type: bool
- name: answer
type: list[string]
- name: comment
type: string
- name: num_images
type: int
---
# YakugakuQA
<!-- Provide a quick summary of the dataset. -->
YakugakuQA is a question answering dataset, consisting of 13 years (2012-2024) of past questions and answers from the Japanese National License Examination for Pharmacists. It contains over 4K pairs of questions, answers, and commentaries.
**2025-5-29: Leaderboard added.**
**2025-2-17: Image data added.**
**2024-12-10: Dataset release.**
## Leaderboard
3-shot Accuracy (%)
|| [YakugakuQA](https://huggingface.co/datasets/EQUES/YakugakuQA/) | [IgakuQA](https://github.com/jungokasai/IgakuQA)|
| ---- | ---- | ---- |
| o1-preview | 87.9 | |
| GPT-4o | 83.6 | 86.6 |
| [pfnet/Preferred-MedLLM-Qwen-72B](https://huggingface.co/pfnet/Preferred-MedLLM-Qwen-72B) | 77.2 | |
| [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | 73.6 | |
| [google/medgemma-27b-text-it](https://huggingface.co/google/medgemma-27b-text-it) | 62.2 (*)| |
| [EQUES/JPharmatron-7B](https://huggingface.co/EQUES/JPharmatron-7B) | 62.0 | 64.7 |
| [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) (**) | 59.9 | |
(*) Several issues in instruction-following, e.g., it tends to think and reason so long that it reaches the token limit.
(**) enable_thinking=False for fair evaluation.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** EQUES Inc.
- **Funded by [optional]:** [GENIAC Project](https://www.meti.go.jp/policy/mono_info_service/geniac/index.html)
- **Shared by [optional]:**
- **Language(s) (NLP):** Japanese
- **License:** cc-by-sa-4.0
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
YakugakuQA is intended to be used as a benchmark for evaluating the knowledge of large language models (LLMs) in the field of pharmacy.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
Any usage except above.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
YakugakuQA consists of two files: `data.jsonl`, which contains the questions, answers, and commentaries, and `metadata.jsonl`, which holds supplementary information about the question categories and additional details related to the answers.
### data.jsonl
- "problem_id" : unique ID, represented by a six-digit integer. The higher three digits indicate the exam number, while the lower three digits represent the question number within that specific exam.
- "problem_text" : problem statement.
- "choices" : choices corresponding to each question. Note that the Japanese National License Examination for Pharmacists is a multiple-choice format examination.
- "text_only" : whether the question includes images or tables. The corresponding images or tables are not included in this dataset, even if `text_only` is marked as `false`.
- "answer" : list of indices of the correct choices. Note the following points:
- the choices are 1-indexed.
- multiple choices may be included, depending on the question format.
- "解なし" indicates there is no correct choice. The reason for this is documented in `metadata.jsonl` in most cases.
- "comment" : commentary text.
- "num_images" : number of images included in the question.
### metadata.jsonl
- "problem_id" : see above.
- "category" : question caterogy. One of the `["Physics", "Chemistry", "Biology", "Hygiene", "Pharmacology", "Pharmacy", "Pathology", "Law", "Practice"]`.
- "note" : additional information about the question.
### images
The image filenames follow the format:
`problem_id_{image_id}.png`
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
YakugakuQA aims to provide a Japanese-language evaluation benchmark for assessing the domain knowledge of LLMs.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
All questions, answers and commentaries for the target years have been collected. The parsing process has been performed automatically.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
All question, answers, and commentaries have been obtained from [yakugaku lab](https://yakugakulab.info/). All metadata has been obtained from the website of the Ministry of Health, Labour and Welfare. It should be noted that the original questions and answers are also sourced from materials published by the Ministry of Health, Labour and Welfare.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{sukeda2025japaneselanguagemodelnew,
title={A Japanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical NLP},
author={Issey Sukeda and Takuro Fujii and Kosei Buma and Shunsuke Sasaki and Shinnosuke Ono},
year={2025},
eprint={2505.16661},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.16661},
}
```
## Contributions
Thanks to [@shinnosukeono](https://github.com/shinnosukeono) for adding this dataset.
## Acknowledgement
This dataset is in part an outcome supported by the "GENIAC" project for strengthening generative AI development capabilities, run by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO). |
infinite-dataset-hub/LegalCasePrecedent | infinite-dataset-hub | 2025-06-03T08:44:58Z | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2025-06-03T08:44:57Z | null | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# LegalCasePrecedent
tags: legal, precedent, classification
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'LegalCasePrecedent' dataset contains a collection of legal case documents that have previously been adjudicated. Each case document has been labeled with a classification that represents the type of legal precedent it set. This dataset is aimed at helping machine learning practitioners train models to automatically classify legal cases based on their precedent value.
**CSV Content Preview:**
```
case_id,document_text,label
001,"In the case of Smith v. Jones, the court held that electronic communication can constitute a breach of contract.",ContractBreach
002,"In the landmark case of Brown v. Board of Education, the Supreme Court declared state laws establishing separate public schools for black and white students to be unconstitutional.",EducationRights
003,"In the matter of Doe v. City, the precedent was set that municipalities are not immune from lawsuits related to traffic violations.",TrafficLaw
004,"In the case of Roe v. Wade, the court recognized a woman's constitutional right to an abortion.",ReproductiveRights
005,"The ruling in Miller v. Alabama established that mandatory life sentences without parole for juveniles violate the Eighth Amendment.",JuvenileJustice
```
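A sketch of loading the preview rows above with the standard library `csv` module (only the first two rows are reproduced here; keeping `case_id` as a string preserves its leading zeros):

```python
import csv
import io

csv_preview = '''case_id,document_text,label
001,"In the case of Smith v. Jones, the court held that electronic communication can constitute a breach of contract.",ContractBreach
002,"In the landmark case of Brown v. Board of Education, the Supreme Court declared state laws establishing separate public schools for black and white students to be unconstitutional.",EducationRights
'''

# DictReader handles the quoted document_text fields containing commas.
rows = list(csv.DictReader(io.StringIO(csv_preview)))
print([r["label"] for r in rows])
# → ['ContractBreach', 'EducationRights']
```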
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'legal':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=legal&dataset=LegalCasePrecedent&tags=legal,+precedent,+classification
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
clairedhx/edu3-clinical-fr-mesh-4 | clairedhx | 2025-06-03T08:35:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:35:51Z | null | ---
dataset_info:
features:
- name: article_id
dtype: string
- name: article_text
dtype: string
- name: document_type
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float32
- name: detected_entities
list:
- name: label
dtype: string
- name: mesh_id
dtype: string
- name: term
dtype: string
- name: mesh_from_gliner
sequence: string
- name: pubmed_mesh
sequence: string
- name: mesh_clean
sequence: string
- name: icd10_codes
sequence: string
splits:
- name: train
num_bytes: 690563
num_examples: 309
download_size: 342375
dataset_size: 690563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Gusanidas/countdown-tasks-dataset-med-vl5 | Gusanidas | 2025-06-03T08:35:12Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:35:09Z | null | ---
dataset_info:
features:
- name: numbers
sequence: int64
- name: target
dtype: int64
- name: solution
dtype: string
- name: attempts
dtype: int64
- name: tag
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 28045
num_examples: 256
download_size: 11867
dataset_size: 28045
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ktzoras/shipping_features | Ktzoras | 2025-06-03T08:32:19Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:08:58Z | null | ---
dataset_info:
features:
- name: link
dtype: string
- name: date
dtype: timestamp[ns]
- name: title
dtype: string
- name: content
dtype: string
- name: led_summ
dtype: string
- name: bart_summ
dtype: string
- name: impact_idrbfe
dtype: int64
- name: sen_emb
sequence: float64
- name: sen_emb_mean
sequence: float64
- name: sen_emb_max
sequence: float64
- name: sen_emb_mix
sequence: float64
- name: sen_emb_sum
sequence: float64
- name: sen_emb_concat
sequence: float64
- name: sen_emb_mix2
sequence: float64
- name: full_emb
sequence: float64
- name: pr_en_vessel_type_fe
dtype: int64
- name: pr_en_size_of_vessel_idfe
dtype: int64
- name: pr_en_vessel_type_idrbfe
dtype: int64
- name: pr_en_vessel_type_rag
dtype: int64
- name: pr_en_vessel_type_idfe
dtype: int64
- name: pr_en_route_idrbfe
dtype: int64
- name: pr_en_route_fe
dtype: int64
- name: pr_en_size_of_vessel_rag
dtype: int64
- name: pr_en_size_of_vessel_fe
dtype: int64
- name: pr_en_size_of_vessel_idrbfe
dtype: int64
- name: pr_en_impact_idfe
dtype: int64
- name: pr_en_route_rag
dtype: int64
- name: pr_en_route_idfe
dtype: int64
- name: pr_en_scale_idfe
dtype: int64
- name: pr_en_duration_fe
dtype: int64
- name: pr_en_duration_idrbfe
dtype: int64
- name: pr_en_scale_fe
dtype: int64
- name: pr_en_duration_rag
dtype: int64
- name: pr_en_scale_idrbfe
dtype: int64
- name: pr_en_impact_rag
dtype: int64
- name: pr_en_scale_rag
dtype: int64
- name: pr_en_impact_fe
dtype: int64
- name: pr_en_impact_idrbfe
dtype: int64
- name: pr_en_duration_idfe
dtype: int64
- name: pr_en_impact_size_fe
dtype: int64
- name: pr_en_impact_size_idrbfe
dtype: int64
- name: pr_en_impact_size_idfe
dtype: int64
- name: pr_en_impact_size_rag
dtype: int64
splits:
- name: train
num_bytes: 2182756728
num_examples: 40013
download_size: 1719038846
dataset_size: 2182756728
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LLM360/guru_RL | LLM360 | 2025-06-03T08:26:44Z | 0 | 0 | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:question-answering",
"language:aa",
"license:cc-by-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"code",
"math",
"reasoning",
"logic",
"tabular"
] | [
"text2text-generation",
"text-generation",
"table-question-answering",
"question-answering"
] | 2025-06-03T04:39:38Z | null | ---
license: cc-by-2.0
task_categories:
- text2text-generation
- text-generation
- table-question-answering
- question-answering
language:
- aa
tags:
- code
- math
- reasoning
- logic
- tabular
pretty_name: >-
GURU: Incentivizing General Reasoning Skills with a Curated Open Reinforcement
Learning Dataset
size_categories:
- 10K<n<100K
---
# GURU: Incentivizing General Reasoning Skills with a Curated Open Reinforcement Learning Dataset
## Dataset Description
**GURU** is a meticulously curated cross-domain dataset specifically designed for training large language models on complex reasoning tasks. The dataset contains 91.9K high-quality samples spanning six diverse reasoning-intensive domains, processed through a comprehensive five-stage curation pipeline to ensure both domain diversity and reward verifiability.
### Dataset Summary
GURU addresses the critical need for robust cross-domain reasoning capabilities in LLMs by providing a carefully balanced collection of problems across mathematics, coding, science, logic, simulation, and tabular reasoning. Each sample has been filtered for quality and equipped with automated verification mechanisms, making it ideal for reinforcement learning applications.
### Key Features
- **Cross-Domain Coverage**: Six distinct reasoning domains ensuring comprehensive skill development
- **Quality Assurance**: Five-stage curation pipeline with deduplication and heuristic filtering
- **Automated Verification**: Domain-specific reward functions for reliable evaluation
- **Difficulty Calibration**: Samples filtered to maintain appropriate challenge levels
- **RL-Ready**: Binary reward system compatible with reinforcement learning frameworks
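The binary reward interface mentioned above can be illustrated with a toy verifier. This is a hedged sketch only — GURU's real verifiers are domain-specific (unit tests for code, symbolic checkers for math), and the whitespace/case normalization shown here is an assumption for illustration:

```python
def binary_reward(model_answer: str, reference: str) -> float:
    """Toy exact-match verifier: 1.0 if normalized answers agree, else 0.0.

    Sketches only the binary reward interface an RL trainer would consume;
    the actual GURU pipeline uses domain-specific verification.
    """
    def normalize(s: str) -> str:
        return " ".join(s.strip().lower().split())

    return 1.0 if normalize(model_answer) == normalize(reference) else 0.0

print(binary_reward("  42 ", "42"))  # 1.0 (match after normalization)
print(binary_reward("41", "42"))     # 0.0 (mismatch)
```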
## Dataset Structure
### Domains and Statistics
| Domain | Datasets Included | Final Sample Count | Key Focus Areas |
|--------|------------------|-------------------|-----------------|
| **Math** | OR1, DAPO, DeepScaler | 54.4K | Competition problems, symbolic reasoning |
| **Code** | LeetCode, TACO-Verified, PrimeIntellect, LiveCodeBench | 18.1K | Programming challenges, algorithm design |
| **Science** | WebInstruct-Verified | 3.6K | University/PhD-level physics, chemistry, biology |
| **Logic** | ARC-AGI, BARC, Custom puzzles | 6.3K | Symbolic reasoning, constraint satisfaction |
| **Simulation** | Code I/O (PyEdu) | 3.7K | Code behavior prediction without execution |
| **Tabular** | HiTab, MultiHierTT | 6.1K | Single and multi-table reasoning |
**Total Samples**: 91.9K (filtered from 684.3K raw samples)
## Citation
If you use this dataset in your research, please cite:
```bibtex
```
*This dataset card follows the Hugging Face dataset card template and provides comprehensive information about the GURU dataset structure, creation process, and intended use cases.* |
Multilingual-Multimodal-NLP/MMEval | Multilingual-Multimodal-NLP | 2025-06-03T08:26:06Z | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-21T08:26:41Z | null | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: lang
dtype: string
- name: task_id
dtype: string
- name: instruction
dtype: string
- name: image
dtype: image
- name: task
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: signature
dtype: string
- name: entry_point
dtype: string
splits:
- name: test
num_bytes: 23662583.0
num_examples: 300
download_size: 7097693
dataset_size: 23662583.0
---
|
willcb/V3-wordle | willcb | 2025-06-03T08:18:41Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T08:18:39Z | null | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: answer
dtype: string
- name: reward
dtype: float64
- name: task
dtype: string
splits:
- name: train
num_bytes: 6585001.5
num_examples: 1000
download_size: 1592223
dataset_size: 6585001.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
voidljc/cherry | voidljc | 2025-06-03T08:06:28Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-03T08:06:28Z | null | ---
license: apache-2.0
---
|
freococo/raw_1hr_myanmar_asr_audio | freococo | 2025-06-03T08:06:12Z | 0 | 0 | [
"task_categories:automatic-speech-recognition",
"language:my",
"license:mit",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"Myanmar",
"Burmese",
"Speech",
"RawAudio",
"PVTV",
"NUG",
"ASR"
] | [
"automatic-speech-recognition"
] | 2025-06-03T07:33:08Z | null | ---
license: mit
pretty_name: Raw 1-Hour Burmese ASR Audio Dataset
dataset_type: audio
task_categories:
- automatic-speech-recognition
language:
- my
tags:
- Myanmar
- Burmese
- Speech
- RawAudio
- PVTV
- NUG
- ASR
---
# 🇲🇲 Raw 1-Hour Burmese ASR Audio Dataset
A 1-hour dataset of Burmese (Myanmar language) spoken audio clips with transcripts, curated from official public-service media broadcasts by **PVTV Myanmar** — the media voice of Myanmar’s National Unity Government (NUG).
This dataset is intended for automatic speech recognition (ASR) and Burmese speech-processing research.
➡️ **Author**: [freococo](https://huggingface.co/freococo)
➡️ **License**: MIT
➡️ **Language**: Burmese (`my`)
---
## 📦 Dataset Summary
- **Duration**: ~1 hour
- **Chunks**: Short utterances (0.84s to 25.66s)
- **Total Samples**: 583
- **Audio Format**: `.mp3` mono files
- **Transcription Source**: Aligned manually using `.srt` files
- **Structure**: `file_name`, `transcript`, `duration_seconds`
The dataset was created entirely from public content with no modification or noise reduction applied.
---
## ⚠️ Data Quality Notes
- This dataset contains **raw speech audio** extracted from public media without denoising or filtering.
- Some chunks contain **background music**, instrumental intros/outros, or ambient reverb.
- Transcripts were manually aligned via subtitle files (`.srt`) and are mostly accurate.
- Estimated transcription error rate: **1–9%**, due to:
- Minor typos or spacing issues in Burmese script
- Occasional missing particles or honorifics
These conditions reflect real-world media audio and are left untouched to improve robustness in training and evaluation.
---
## 💬 Motivation
I created this dataset because I'm crazy about languages — especially Myanmar language technology.
I noticed a severe shortage of public, modern Burmese audio datasets for speech recognition and wanted to help fix that.
This project is fully self-initiated and unfunded — no grants, sponsorships, or institutional backing. Just passion, time, and a lot of cleaning 😄
If you find it helpful, let me know — I’d love to collaborate or help with related research!
---
## 🎙️ Source Acknowledgement
All audio was derived from **PVTV Myanmar** — a public voice media channel established by Myanmar’s National Unity Government (NUG).
Their mission is to amplify the people's voice in pursuit of freedom, justice, and federal democracy.
> ⚠️ This dataset contains raw audio, including background music or ambiance. It is **not denoised** or processed — intended to reflect real-world conditions.
The original public content remains available on [PVTV’s YouTube channel](https://www.youtube.com/@PVTVMyanmar).
---
## 🗂️ Dataset Structure
Each row in `metadata.csv` includes:
| Column | Description |
|-------------------|----------------------------------------|
| `file_name` | Relative path to audio file (e.g., `audio/my_audio_001.mp3`) |
| `transcript` | Burmese-language transcription |
| `duration_seconds`| Duration of the audio file in seconds |
The audio files are mono `.mp3` files stored in the `audio/` folder.
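A quick sanity check over a metadata table in this format can be done with the standard `csv` module. The rows below are invented to illustrate the schema — they are not taken from the dataset:

```python
import csv
import io

# Illustrative rows in the metadata.csv schema described above (invented values).
sample = """file_name,transcript,duration_seconds
audio/my_audio_001.mp3,မင်္ဂလာပါ,3.2
audio/my_audio_002.mp3,ကျေးဇူးတင်ပါတယ်,1.8
"""

rows = list(csv.DictReader(io.StringIO(sample)))
total = sum(float(r["duration_seconds"]) for r in rows)
print(len(rows), round(total, 1))  # 2 5.0
```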
---
## 🌟 In Honor of the Voices Behind the Revolution
This dataset would not exist without the tireless, fearless voices of **PVTV Myanmar** —
🎙️ the journalists who speak truth,
✍️ the editors who shape it,
📢 and the citizens who carry it forward.
They speak not from studios, but from shadows,
not for fame, but for freedom.
Their words echo through uncertainty,
yet land on ears yearning for light.
> **This dataset is only a shadow of their work —
> the real heroes are the ones who dare to speak when silence is safer.**
To the PVTV media team and all those risking safety to tell the truth:
**Your voice is our history. Your courage is our future.**
🇲🇲🕊️ *Long live the Spring Revolution.*
---
## 🔌 How to Load in Python
```python
from datasets import load_dataset, Audio
ds = load_dataset("freococo/raw_1hr_myanmar_asr_audio", split="train")
ds = ds.cast_column("file_name", Audio())
ds[0]

```
## 📚 Citation
If you use this dataset in your research or product, please cite it:
```
@dataset{freococo_myanmar_asr_2025,
title = {Raw 1-Hour Myanmar ASR Audio Dataset},
author = {freococo},
year = {2025},
url = {https://huggingface.co/datasets/freococo/raw_1hr_myanmar_asr_audio},
note = {Curated from PVTV Myanmar public media, licensed under MIT}
}
``` |
javierbarbaesparcia/spanish_legal_ner_non_annotated | javierbarbaesparcia | 2025-06-03T07:50:44Z | 59 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | [] | 2025-05-11T20:07:18Z | null | ---
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for spanish_legal_ner_non_annotated
This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.Dataset.from_hub("javierbarbaesparcia/spanish_legal_ner_non_annotated", settings="auto")
```
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
## Using this dataset with `datasets`
To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("javierbarbaesparcia/spanish_legal_ner_non_annotated")
```
This will only load the records of the dataset, but not the Argilla settings.
## Dataset Structure
This dataset repo contains:
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
### Fields
The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
| Field Name | Title | Type | Required |
| ---------- | ----- | ---- | -------- |
| law_cont | law_cont | text | True |
### Questions
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | label | span | True | N/A | ['PERSON', 'ORG', 'LOC', 'DATE', 'REF'] |
| label_multi | label_multi | multi_label_selection | True | N/A | ['RIGHT', 'DUTY', 'SANC'] |
### Metadata
The **metadata** is a dictionary that can be used to provide additional information about the dataset record.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| fecha_actualizacion | Update date | terms | - | True |
| identificador | Identifier | terms | - | True |
| ambito | Domain | terms | - | True |
| departamento | Department | terms | - | True |
| rango | Type of legislation | terms | - | True |
| fecha_disposicion | Provision date | terms | - | True |
| numero_oficial | Official number | terms | - | True |
| titulo | Title | terms | - | True |
| diario | Paper | terms | - | True |
| fecha_publicacion | Publication date | terms | - | True |
| diario_numero | Paper number | terms | - | True |
| fecha_vigencia | Validity date | terms | - | True |
| estatus_derogacion | Repeal status | terms | - | True |
| fecha_derogacion | Repeal date | terms | - | True |
| estatus_anulacion | Cancellation state | terms | - | True |
| vigencia_agotada | Validity exhausted | terms | - | True |
| estado_consolidacion | Consolidation state | terms | - | True |
| url_eli | Link | terms | - | True |
| url_html_consolidada | Html link | terms | - | True |
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
These are some guidelines.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
pch11/final01 | pch11 | 2025-06-03T07:50:43Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T07:15:37Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: file_name
dtype: string
- name: image
dtype: image
- name: caption_sdxl
dtype: string
splits:
- name: train
num_bytes: 7011818.0
num_examples: 47
download_size: 7008247
dataset_size: 7011818.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HusseinBashir/jelle | HusseinBashir | 2025-06-03T07:41:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T07:39:21Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 130238452.0
num_examples: 1000
download_size: 101795874
dataset_size: 130238452.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
OnnieNLP/InformationExtractionQA | OnnieNLP | 2025-06-03T07:36:25Z | 150 | 0 | [
"task_categories:question-answering",
"language:ro",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2025-05-15T18:54:34Z | null | ---
task_categories:
- question-answering
language:
- ro
--- |
dbaeka/indeed-ca-scraping-fin | dbaeka | 2025-06-03T07:31:39Z | 35 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T07:16:06Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: job_id
dtype: string
- name: job_title
dtype: string
- name: company
dtype: string
- name: location
dtype: string
- name: url
dtype: string
- name: pay
dtype: string
- name: job_type
dtype: string
- name: shift_and_schedule
dtype: string
- name: benefits
dtype: string
- name: description
dtype: string
- name: description_html
dtype: string
- name: match_score
dtype: int64
- name: match_reason
dtype: string
- name: date_scraped
dtype: string
- name: likelihood_score
dtype: int64
- name: last_synced
dtype: string
- name: date_updated
dtype: string
splits:
- name: train
num_bytes: 15904548
num_examples: 1156
download_size: 7414904
dataset_size: 15904548
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ioveeagle/s1K-1.1_mistral_tokenized_alpaca_format | ioveeagle | 2025-06-03T07:29:07Z | 64 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-26T08:49:49Z | null | ---
dataset_info:
features:
- name: solution
dtype: string
- name: question
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: gemini_thinking_trajectory
dtype: string
- name: gemini_attempt
dtype: string
- name: deepseek_thinking_trajectory
dtype: string
- name: deepseek_attempt
dtype: string
- name: gemini_grade
dtype: string
- name: gemini_grade_reason
dtype: string
- name: deepseek_grade
dtype: string
- name: deepseek_grade_reason
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 78107869
num_examples: 1000
download_size: 36679177
dataset_size: 78107869
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
winston1214/VENUS-5K | winston1214 | 2025-06-03T07:21:45Z | 755 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2506.00958",
"region:us"
] | [] | 2025-04-16T04:28:55Z | null | ---
dataset_info:
features:
- name: channel_id
dtype: string
- name: video_id
dtype: string
- name: segment_id
dtype: int32
- name: duration
dtype: string
- name: fps
dtype: int32
- name: conversation
sequence:
- name: utterance_id
dtype: int32
- name: speaker
dtype: int32
- name: text
dtype: string
- name: start_time
dtype: float32
- name: end_time
dtype: float32
- name: words
sequence:
- name: word
dtype: string
- name: start_time
dtype: float32
- name: end_time
dtype: float32
- name: facial_expression
sequence:
- name: utt_id
dtype: string
- name: frame
dtype: string
- name: features
sequence: float32
- name: body_language
sequence:
- name: utt_id
dtype: string
- name: frame
dtype: string
- name: features
sequence: float32
- name: harmful_utterance_id
sequence: int32
- name: speaker_bbox
list:
- name: bbox
sequence: int64
- name: frame_id
dtype: int64
splits:
- name: train
num_bytes: 70203413441
num_examples: 3923
- name: test
num_bytes: 18253160963
num_examples: 996
download_size: 84263010100
dataset_size: 88456574404
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
## Dataset Card for VENUS
### Dataset Summary
Data from: Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues
```
@article{Kim2025speaking,
title={Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues},
author={Youngmin Kim, Jiwan Chung, Jisoo Kim, Sunghyun Le, Sangkyu Lee, Junhyeok Ki, Cheoljong Yang, Youngjae Yu},
journal={arXiv preprint arXiv:2506.00958},
year={2025},
archivePrefix={arXiv},
eprint={2506.00958},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.00958}
}
```
We provide a multimodal large-scale video dataset based on nonverbal communication.
Please cite our work if you find our data helpful. (**We will update citation format.**)
### Dataset Statistic
| Split | Channels | Videos | Segments (10 min) | Frames (Nonverbal annotations) | Utterances | Words |
|:---------------:|:--------:|:---------:|:-------:|:-------:|:----------:|:----------:|
| Train | 41 | 1,210 | 3,923 | ~ | ~ | |
| Test | 10 | 331 | 996 | ~ | ~ | |
### Language
English
### Other Version
- **VENUS-1K**: <a href='https://huggingface.co/datasets/winston1214/VENUS-1K'>This link</a>
- **VENUS-10K**: <a href=''>This link</a>
- **VENUS-50K**: <a href=''>This link</a>
- **VENUS-100K** (Full): <a href=''>This link</a>
### Data Structure
Here's an overview of our dataset structure:
```
{
'channel_id': str, # YouTube channel ID
'video_id': str, # Video ID
'segment_id': int, # Segment ID within the video
'duration': str, # Total segment duration (e.g., '00:11:00 ~ 00:21:00')
'fps': int, # Frames per second
'conversation': [ # Conversation information (consisting of multiple utterances)
{
'utterance_id': int, # Utterance ID
'speaker': int, # Speaker ID (represented as an integer)
'text': str, # Full utterance text
'start_time': float, # Start time of the utterance (in seconds)
'end_time': float, # End time of the utterance (in seconds)
'words': [ # Word-level information
{
'word': str, # The word itself
'start_time': float, # Word-level start time
'end_time': float, # Word-level end time
}
]
}
],
'facial_expression': [ # Facial expression features
{
'utt_id': str, # ID of the utterance this expression is aligned to
'frame': str, # Frame identifier
'features': [float], # Facial feature vector (153-dimensional)
}
],
'body_language': [ # Body language features
{
'utt_id': str, # ID of the utterance this body language is aligned to
'frame': str, # Frame identifier
'features': [float], # Body movement feature vector (179-dimensional)
}
],
'harmful_utterance_id': [int], # List of utterance IDs identified as harmful
}
```
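As an illustration of traversing this schema, the sketch below builds a minimal toy sample and collects word-level timings per utterance. Field names follow the structure above; all values are invented:

```python
# Minimal toy sample following the VENUS schema above (values are invented).
sample = {
    "segment_id": 0,
    "conversation": [
        {
            "utterance_id": 1,
            "speaker": 0,
            "text": "hello there",
            "start_time": 0.0,
            "end_time": 1.2,
            "words": [
                {"word": "hello", "start_time": 0.0, "end_time": 0.5},
                {"word": "there", "start_time": 0.6, "end_time": 1.2},
            ],
        }
    ],
    "harmful_utterance_id": [],
}

def word_timings(sample):
    """Map utterance_id -> list of (word, start, end) tuples."""
    return {
        utt["utterance_id"]: [
            (w["word"], w["start_time"], w["end_time"]) for w in utt["words"]
        ]
        for utt in sample["conversation"]
    }

print(word_timings(sample)[1][0])  # ('hello', 0.0, 0.5)
```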
### Data Instances
See above
### Data Fields
See above
### Data Splits
Data splits can be accessed as:
```python
from datasets import load_dataset
train_dataset = load_dataset("winston1214/VENUS-5K", split = "train")
test_dataset = load_dataset("winston1214/VENUS-5K", split = "test")
```
### Curation Rationale
Full details are in the paper.
### Source Data
We retrieve natural videos from YouTube and annotate the FLAME and SMPL-X parameters from EMOCAv2 and OSX.
### Initial Data Collection
Full details are in the paper.
### Annotations
Full details are in the paper.
### Annotation Process
Full details are in the paper.
### Who are the annotators?
We used an automatic annotation method, and the primary annotator was Youngmin Kim, the first author of the paper.
For any questions regarding the dataset, please contact us by <a href='mailto:winston1214@yonsei.ac.kr'>e-mail</a>.
### Considerations for Using the Data
This dataset (VENUS) consists of 3D annotations of human subjects and text extracted from conversations in the videos.
Please note that the dialogues are sourced from online videos and may include informal or culturally nuanced expressions.
Use of this dataset should be done with care, especially in applications involving human-facing interactions.
### Licensing Information
The annotations we provide are licensed under CC-BY-4.0.
|
winston1214/VENUS-1K | winston1214 | 2025-06-03T07:20:52Z | 366 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2506.00958",
"region:us"
] | [] | 2025-04-14T12:58:53Z | null | ---
dataset_info:
features:
- name: channel_id
dtype: string
- name: video_id
dtype: string
- name: segment_id
dtype: int32
- name: duration
dtype: string
- name: fps
dtype: int32
- name: conversation
sequence:
- name: utterance_id
dtype: int32
- name: speaker
dtype: int32
- name: text
dtype: string
- name: start_time
dtype: float32
- name: end_time
dtype: float32
- name: words
sequence:
- name: word
dtype: string
- name: start_time
dtype: float32
- name: end_time
dtype: float32
- name: facial_expression
sequence:
- name: utt_id
dtype: string
- name: frame
dtype: string
- name: features
sequence: float32
- name: body_language
sequence:
- name: utt_id
dtype: string
- name: frame
dtype: string
- name: features
sequence: float32
- name: harmful_utterance_id
sequence: int32
- name: speaker_bbox
list:
- name: bbox
sequence: int64
- name: frame_id
dtype: int64
splits:
- name: train
num_bytes: 13538714571
num_examples: 800
- name: test
num_bytes: 3766543145
num_examples: 200
download_size: 16484082115
dataset_size: 17305257716
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
## Dataset Card for VENUS
### Dataset Summary
Data from: Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues
```
@article{Kim2025speaking,
title={Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues},
author={Youngmin Kim, Jiwan Chung, Jisoo Kim, Sunghyun Le, Sangkyu Lee, Junhyeok Ki, Cheoljong Yang, Youngjae Yu},
journal={arXiv preprint arXiv:2506.00958},
year={2025},
archivePrefix={arXiv},
eprint={2506.00958},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.00958}
}
```
We provide a multimodal large-scale video dataset based on nonverbal communication.
Please cite our work if you find our data helpful. (**We will update citation format.**)
### Dataset Statistic
| Split | Channels | Videos | Segments (10 min) | Frames (Nonverbal annotations) | Utterances | Words |
|:---------------:|:--------:|:---------:|:-------:|:-------:|:----------:|:----------:|
| Train | 12 | 293 | 800 | ~ | ~ | % |
| Test | 4 | 113 | 200 | ~ | ~ | % |
### Language
English
### Other Version
- **VENUS-5K**: <a href='https://huggingface.co/datasets/winston1214/VENUS-5K'>This link</a>
- **VENUS-10K**: <a href=''>This link</a>
- **VENUS-50K**: <a href=''>This link</a>
- **VENUS-100K** (Full): <a href=''>This link</a>
### Data Structure
Here's an overview of our dataset structure:
```
{
'channel_id': str, # YouTube channel ID
'video_id': str, # Video ID
'segment_id': int, # Segment ID within the video
'duration': str, # Total segment duration (e.g., '00:11:00 ~ 00:21:00')
'fps': int, # Frames per second
'conversation': [ # Conversation information (consisting of multiple utterances)
{
'utterance_id': int, # Utterance ID
'speaker': int, # Speaker ID (represented as an integer)
'text': str, # Full utterance text
'start_time': float, # Start time of the utterance (in seconds)
'end_time': float, # End time of the utterance (in seconds)
'words': [ # Word-level information
{
'word': str, # The word itself
'start_time': float, # Word-level start time
'end_time': float, # Word-level end time
}
]
}
],
'facial_expression': [ # Facial expression features
{
'utt_id': str, # ID of the utterance this expression is aligned to
'frame': str, # Frame identifier
'features': [float], # Facial feature vector (153-dimensional)
}
],
'body_language': [ # Body language features
{
'utt_id': str, # ID of the utterance this body language is aligned to
'frame': str, # Frame identifier
'features': [float], # Body movement feature vector (179-dimensional)
}
],
'harmful_utterance_id': [int], # List of utterance IDs identified as harmful
}
```
### Data Instances
See above
### Data Fields
See above
### Data Splits
Data splits can be accessed as:
```python
from datasets import load_dataset
train_dataset = load_dataset("winston1214/VENUS-1K", split = "train")
test_dataset = load_dataset("winston1214/VENUS-1K", split = "test")
```
### Curation Rationale
Full details are in the paper.
### Source Data
We retrieve natural videos from YouTube and annotate the FLAME and SMPL-X parameters from EMOCAv2 and OSX.
### Initial Data Collection
Full details are in the paper.
### Annotations
Full details are in the paper.
### Annotation Process
Full details are in the paper.
### Who are the annotators?
We used an automatic annotation method, and the primary annotator was Youngmin Kim, the first author of the paper.
For any questions regarding the dataset, please contact us by <a href='mailto:winston1214@yonsei.ac.kr'>e-mail</a>.
### Considerations for Using the Data
This dataset (VENUS) consists of 3D annotations of human subjects and text extracted from conversations in the videos.
Please note that the dialogues are sourced from online videos and may include informal or culturally nuanced expressions.
Use of this dataset should be done with care, especially in applications involving human-facing interactions.
### Licensing Information
The annotations we provide are licensed under CC-BY-4.0. |
AhinsaAI/Ahinsa-nli-triplet | AhinsaAI | 2025-06-03T07:08:49Z | 0 | 0 | [
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T07:03:58Z | null | ---
license: cc-by-nc-4.0
---
# 🇮🇳 Hindi Semantic Similarity Dataset for Multilingual MPNet Fine-tuning
## 🧠 Project Overview
This project focuses on **fine-tuning the `sentence-transformers/multilingual-mpnet-base-v2` model** using Hindi-language sentence triplets. The objective is to create a robust Hindi sentence embedding model capable of semantic similarity, clustering, and retrieval tasks.
---
## 🗂 Dataset Structure
The dataset is structured as **triplets** suitable for **triplet loss**:
| Column | Description |
| ---------- | ------------------------------------------------------------ |
| `anchor` | A Hindi sentence (reference) |
| `positive` | A semantically similar sentence to the anchor |
| `negative` | A semantically different or contradictory sentence to anchor |
### 🔍 Sample:
```csv
anchor,positive,negative
"एक बुजुर्ग आदमी बर्गर तल रहा है।","एक आदमी ग्रिल पर बर्गर बना रहा है।","एक बूढ़ा आदमी कच्चा बर्गर खा रहा है।"
```
---
## 🧪 Data Splits
The cleaned dataset is split into:
* `export/train.csv` – 80%
* `export/dev.csv` – 10%
* `export/test.csv` – 10%
Each file follows the same structure (`anchor`, `positive`, `negative`).
---
## 🤖 Model & Training
### 🏷 Base Model:
* [`sentence-transformers/multilingual-mpnet-base-v2`](https://huggingface.co/sentence-transformers/multilingual-mpnet-base-v2)
* Supports 50+ languages, including **Hindi**
### 🧪 Objective:
* **Triplet Loss** fine-tuning using sentence-transformers framework
* Input triplets: `(anchor, positive, negative)`
* Output: embeddings that place `anchor` closer to `positive` than to `negative`
### 🛠 Training Framework:
* [Sentence-Transformers](https://www.sbert.net/)
* Triplet loss with a margin (default: 0.5)
* Evaluation with cosine similarity and embedding ranking
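The triplet objective can be sketched in a few lines of NumPy: the loss is zero once the anchor is closer (in cosine distance) to the positive than to the negative by at least the margin. This is a schematic of the loss only, not the sentence-transformers training loop:

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(anchor, positive, negative, margin=0.5):
    """max(0, d(a, p) - d(a, n) + margin) with cosine distance."""
    return max(0.0, cosine_distance(anchor, positive)
                    - cosine_distance(anchor, negative) + margin)

a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])  # close to the anchor
n = np.array([0.0, 1.0])  # orthogonal to the anchor
print(triplet_loss(a, p, n))  # 0.0 — already separated by more than the margin
```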
---
## 📦 How to Load
You can load the dataset with Hugging Face:
```python
from datasets import load_dataset
dataset = load_dataset('csv', data_files={
'train': 'export/train.csv',
'validation': 'export/dev.csv',
'test': 'export/test.csv'
})
```
Or directly with pandas:
```python
import pandas as pd
train = pd.read_csv("export/train.csv")
```
---
## 💡 Use Cases
* Hindi semantic search
* Paraphrase mining and deduplication
* Hindi text clustering
* Text-based retrieval systems
---
## ⚠️ Limitations
* Relatively small dataset size; more data would likely improve performance.
* Sentence triplets are heuristic or manually generated, not crowdsourced.
* May not cover complex linguistic phenomena or dialectal variations.
---
## 📚 Citation
```
@dataset{,
title = {Hindi Triplet Dataset for Multilingual MPNet Fine-tuning},
author = {Your Name},
year = {2025},
url = {https://github.com/yourusername/hindi-triplet-mpnet}
}
```
---
## 📬 Contact
For questions, contributions, or feedback, contact:
📧 `your.email@example.com`
🐙 GitHub: [@yourusername](https://github.com/yourusername)
---
|
jingmingcn/PPI_dataset | jingmingcn | 2025-06-03T07:08:05Z | 66 | 0 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-31T13:34:46Z | null | ---
license: apache-2.0
configs:
- config_name: clinvar
data_files: "clinvar.csv"
- config_name: clinvar_sample
data_files: "clinvar_sample_10000.csv"
- config_name: gnomad
data_files: "gnomad.csv"
- config_name: ukb
data_files: "ukb.csv"
- config_name: proteome
data_files: "proteome.csv"
---
|
InfiniAILab/Kinetics-generations | InfiniAILab | 2025-06-03T07:05:12Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-03T06:57:25Z | null | ---
license: apache-2.0
---
|
metchee/sticker-queries | metchee | 2025-06-03T06:36:45Z | 74 | 0 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"license:mit",
"arxiv:2506.01668",
"region:us"
] | [
"text-generation"
] | 2025-05-31T08:59:32Z | null | ---
license: mit
task_categories:
- text-generation
language:
- zh
- en
---
# StickerQueries 🧷🗨️
**StickerQueries** is a multilingual dataset for sticker query generation and retrieval. It features human-annotated query-sticker pairs that capture the expressive, emotional, and cultural semantics embedded in stickers.
## Dataset Structure
- `stickers_queries_zh_released.csv`: Chinese sticker annotations.
- `stickers_queries_en_released.csv`: English sticker annotations.
- `stickers/`: Sticker images in `.gif`, `.png`, or `.webm` formats.
Each row in the CSV files includes:
- `sticker_id`: The file path to the corresponding sticker image.
- `labeled_queries`: A comma-separated list of sticker queries representing the intended emotion, tone, or expression.
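Since `labeled_queries` is comma-separated, a common preprocessing step is to explode it into one row per (sticker, query) pair; here is a sketch with pandas on hypothetical rows (the column names come from the description above, the sample content is invented):

```python
import io
import pandas as pd

# Hypothetical two-row sample mirroring stickers_queries_en_released.csv
csv_text = """sticker_id,labeled_queries
stickers/cat_wave.gif,"hello,good morning,hi there"
stickers/dog_cry.png,"so sad,crying"
"""

df = pd.read_csv(io.StringIO(csv_text))

# One row per (sticker, query) pair
pairs = (
    df.assign(query=df["labeled_queries"].str.split(","))
      .explode("query")[["sticker_id", "query"]]
)
print(len(pairs))  # → 5 query-sticker pairs
```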
## Annotation Process
- Each annotation was reviewed by at least **two people**.
- In total, **42 English** and **18 Chinese** annotators contributed, with **over 60 hours** spent ensuring high-quality and diverse expressions.
## Looking for a sticker query generator?
- 🈶 **Chinese Model**: [Sticker Query Generator ZH](https://huggingface.co/metchee/sticker-query-generator-zh)
- 🇬🇧 **English Model**: [Sticker Query Generator EN](https://huggingface.co/metchee/sticker-query-generator-en)
## Large-scale Sticker Dataset
Explore the broader dataset: [U-Sticker](https://huggingface.co/datasets/metchee/u-sticker)
---
## Citation
If you use StickerQueries in your work, please cite us as:
```bibtex
@misc{chee2025smallstickersbigmeanings,
title={Small Stickers, Big Meanings: A Multilingual Sticker Semantic Understanding Dataset with a Gamified Approach},
author={Heng Er Metilda Chee and Jiayin Wang and Zhiqiang Guo and Weizhi Ma and Min Zhang},
year={2025},
eprint={2506.01668},
archivePrefix={arXiv},
primaryClass={cs.MM},
url={https://arxiv.org/abs/2506.01668},
}
``` |
8wali8/example_dataset | 8wali8 | 2025-06-03T06:31:08Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-03T02:07:17Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# example_dataset
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
Thanarit/Thai-Voice-S1-S10-Test | Thanarit | 2025-06-03T06:25:21Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T06:24:23Z | null | ---
dataset_info:
features:
- name: ID
dtype: string
- name: speaker_id
dtype: string
- name: Language
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcript
dtype: string
- name: length
dtype: float32
- name: dataset_name
dtype: string
- name: confidence_score
dtype: float64
splits:
- name: train
num_examples: 20
download_size: 0
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train/*.parquet
---
# Thanarit/Thai-Voice
Combined Thai audio dataset from multiple sources
## Dataset Details
- **Total samples**: 20
- **Total duration**: 0.02 hours
- **Language**: Thai (th)
- **Audio format**: 16kHz mono WAV
- **Volume normalization**: -20dB
## Sources
Processed 1 dataset in streaming mode
## Source Datasets
1. **GigaSpeech2**: Large-scale multilingual speech corpus
## Usage
```python
from datasets import load_dataset
# Load with streaming to avoid downloading everything
dataset = load_dataset("Thanarit/Thai-Voice-S1-S10-Test", streaming=True)
# Iterate through samples
for sample in dataset['train']:
print(sample['ID'], sample['transcript'][:50])
# Process audio: sample['audio']
break
```
## Schema
- `ID`: Unique identifier (S1, S2, S3, ...)
- `speaker_id`: Speaker identifier (SPK_00001, SPK_00002, ...)
- `Language`: Language code (always "th" for Thai)
- `audio`: Audio data with 16kHz sampling rate
- `transcript`: Text transcript of the audio
- `length`: Duration in seconds
- `dataset_name`: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice")
- `confidence_score`: Confidence score of the transcript (0.0-1.0)
- 1.0: Original transcript from source dataset
- <1.0: STT-generated transcript
- 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT])
## Processing Details
This dataset was created using streaming processing to handle large-scale data without requiring full downloads.
Audio has been standardized to 16kHz mono with -20dB volume normalization.
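The `confidence_score` convention above makes it straightforward to keep only transcripts that came from the source dataset; here is a sketch of the filter on hypothetical records shaped like the schema (streaming `load_dataset` yields dicts of the same form):

```python
# Hypothetical samples following the schema above
samples = [
    {"ID": "S1", "transcript": "สวัสดี", "confidence_score": 1.0},
    {"ID": "S2", "transcript": "ขอบคุณ", "confidence_score": 0.85},
    {"ID": "S3", "transcript": "[NO_TRANSCRIPT]", "confidence_score": 0.0},
]

# Keep only transcripts taken verbatim from the source dataset (score == 1.0)
verified = [s for s in samples if s["confidence_score"] == 1.0]
print([s["ID"] for s in verified])  # → ['S1']
```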
|
CUAIStudents/Main-Dataset | CUAIStudents | 2025-06-03T06:23:31Z | 94 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-21T21:05:49Z | null | ---
dataset_info:
features:
- name: clean
dtype: string
splits:
- name: train
num_bytes: 2072326764
num_examples: 4158107
- name: test
num_bytes: 14034145
num_examples: 29059
- name: valid
num_bytes: 25016951
num_examples: 51765
download_size: 965862057
dataset_size: 2111377860
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
jinkhye/v5_markdown_mix | jinkhye | 2025-06-03T06:17:39Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T02:58:36Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
- name: images
list: image
splits:
- name: train
num_bytes: 1136622364.968
num_examples: 2504
download_size: 448708791
dataset_size: 1136622364.968
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
willcb/V3-wordle-test | willcb | 2025-06-03T06:16:39Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T06:16:38Z | null | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: answer
dtype: string
- name: reward
dtype: float64
- name: task
dtype: string
splits:
- name: train
num_bytes: 69107.5
num_examples: 10
download_size: 20842
dataset_size: 69107.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yangfengzzz/so101_test | yangfengzzz | 2025-06-03T06:11:07Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-06-03T06:10:43Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 2,
"total_frames": 1746,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
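The `data_path` template in `info.json` resolves to one parquet file per episode; here is a minimal sketch of expanding it (episode 1 falls in chunk 0 because `chunks_size` is 1000):

```python
# Path template copied from info.json above
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

# Episode index // chunks_size gives the chunk number
path = data_path.format(episode_chunk=0, episode_index=1)
print(path)  # → data/chunk-000/episode_000001.parquet
```

The resulting relative path can then be read with any parquet reader (e.g. `pandas.read_parquet`) from the dataset root.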
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
tinisoft/indicvoices-tamil-valid-subset | tinisoft | 2025-06-03T06:08:36Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T06:07:59Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: duration
dtype: float64
- name: lang
dtype: string
- name: samples
dtype: int64
- name: verbatim
dtype: string
- name: normalized
dtype: string
- name: speaker_id
dtype: string
- name: scenario
dtype: string
- name: task_name
dtype: string
- name: gender
dtype: string
- name: age_group
dtype: string
- name: job_type
dtype: string
- name: qualification
dtype: string
- name: area
dtype: string
- name: district
dtype: string
- name: state
dtype: string
- name: occupation
dtype: string
- name: verification_report
dtype: string
- name: unsanitized_verbatim
dtype: string
- name: unsanitized_normalized
dtype: string
splits:
- name: train
num_bytes: 162451871.2
num_examples: 800
- name: validation
num_bytes: 40612967.8
num_examples: 200
download_size: 197132715
dataset_size: 203064839.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Hitchwiki/dumpster_diving_spots | Hitchwiki | 2025-06-03T06:04:47Z | 3 | 1 | [
"language:en",
"language:de",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:geospatial",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"geospatial"
] | [] | 2025-06-01T17:42:43Z | null | ---
dataset_info:
features:
- name: Latitude
dtype: float64
- name: Longitude
dtype: float64
- name: dumpster_created
dtype: string
- name: voting
dtype: string
- name: comment
dtype: string
- name: voting_created
dtype: string
- name: name
dtype: string
splits:
- name: 2025.06.03
num_bytes: 1001
num_examples: 5
download_size: 4558
dataset_size: 1001
configs:
- config_name: default
data_files:
- split: 2025.06.03
path: data/2025.06.03-*
language:
- en
- de
tags:
- geospatial
---
Community-collected dumpster diving spots and their ratings from https://www.dumpstermap.org (or, more directly, https://dumpstermap.herokuapp.com/dumpsters).
Updated monthly using https://huggingface.co/spaces/Hitchwiki/fetch-dumpstermap.
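With the `Latitude`/`Longitude` columns, distances between spots can be computed with the haversine formula; here is a self-contained sketch (the coordinates are hypothetical examples, not rows from the dataset):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical coordinates for two spots (Berlin and Munich city centres)
print(round(haversine_km(52.52, 13.405, 48.137, 11.575), 1))  # roughly 504 km
```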
|
kanishka/babylm2-clean-spacy_no-multi-adj-strict | kanishka | 2025-06-03T06:01:32Z | 0 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T06:01:22Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 516559880
num_examples: 11802970
- name: validation
num_bytes: 58115371
num_examples: 1227839
download_size: 339072290
dataset_size: 574675251
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
ragunath-ravi/YouTube-VideoArchive-Queue-Volume1 | ragunath-ravi | 2025-06-03T05:57:17Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T05:57:13Z | null | ---
dataset_info:
features:
- name: video_id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: uploader
dtype: string
- name: upload_date
dtype: string
- name: duration
dtype: int64
- name: view_count
dtype: int64
- name: like_count
dtype: int64
- name: description
dtype: string
- name: categories
dtype: string
- name: tags
dtype: string
- name: thumbnail
dtype: string
- name: downloaded_at
dtype: string
- name: volume
dtype: int64
- name: video_file
dtype: string
- name: audio_file
dtype: string
- name: has_video
dtype: bool
- name: has_audio
dtype: bool
splits:
- name: train
num_bytes: 39179
num_examples: 21
download_size: 38923
dataset_size: 39179
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "YouTube-VideoArchive-Queue-Volume1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jiseop11892/gung | jiseop11892 | 2025-06-03T05:35:38Z | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T05:35:08Z | null | ---
license: mit
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 112998
num_examples: 127
download_size: 68370
dataset_size: 112998
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
h3llohihi/lao-asr-thesis-dataset | h3llohihi | 2025-06-03T05:13:18Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:25:29Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: duration
dtype: float32
- name: speaker_id
dtype: string
- name: accent
dtype: string
- name: gender
dtype: string
- name: sentence_id
dtype: string
splits:
- name: train
num_bytes: 1262919087.688
num_examples: 3848
- name: validation
num_bytes: 184987455.0
num_examples: 499
- name: test
num_bytes: 353702444.2
num_examples: 1200
- name: dev
num_bytes: 26699086.0
num_examples: 80
download_size: 1786334131
dataset_size: 1828308072.888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
---
|
hatch-sc/so100_test_v3 | hatch-sc | 2025-06-03T05:09:51Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-03T05:08:52Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1786,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
David-Chen-DynamoFL/longctx-inj-2500-compliant-internal-documents-prompt-injection-for-long-context-may22-10000-5494 | David-Chen-DynamoFL | 2025-06-03T04:57:43Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T04:57:35Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: injection_text
dtype: string
- name: position_bucket
dtype: int64
- name: insert_at_char
dtype: int64
- name: insert_fraction
dtype: float64
- name: long_context_idx
dtype: int64
- name: injection_idx
dtype: int64
splits:
- name: train
num_bytes: 296816677
num_examples: 10000
download_size: 108058956
dataset_size: 296816677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Thanarit/Thai-Voice-Test-Speaker-Fix | Thanarit | 2025-06-03T04:49:35Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T04:48:43Z | null | ---
dataset_info:
features:
- name: ID
dtype: string
- name: speaker_id
dtype: string
- name: Language
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcript
dtype: string
- name: length
dtype: float32
- name: dataset_name
dtype: string
- name: confidence_score
dtype: float64
splits:
- name: train
num_examples: 20
download_size: 0
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train/*.parquet
---
# Thanarit/Thai-Voice
Combined Thai audio dataset from multiple sources
## Dataset Details
- **Total samples**: 20
- **Total duration**: 0.02 hours
- **Language**: Thai (th)
- **Audio format**: 16kHz mono WAV
- **Volume normalization**: -20dB
## Sources
Processed 1 dataset in streaming mode
## Source Datasets
1. **GigaSpeech2**: Large-scale multilingual speech corpus
## Usage
```python
from datasets import load_dataset
# Load with streaming to avoid downloading everything
dataset = load_dataset("Thanarit/Thai-Voice-Test-Speaker-Fix", streaming=True)
# Iterate through samples
for sample in dataset['train']:
print(sample['ID'], sample['transcript'][:50])
# Process audio: sample['audio']
break
```
## Schema
- `ID`: Unique identifier (S1, S2, S3, ...)
- `speaker_id`: Speaker identifier (SPK_00001, SPK_00002, ...)
- `Language`: Language code (always "th" for Thai)
- `audio`: Audio data with 16kHz sampling rate
- `transcript`: Text transcript of the audio
- `length`: Duration in seconds
- `dataset_name`: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice")
- `confidence_score`: Confidence score of the transcript (0.0-1.0)
- 1.0: Original transcript from source dataset
- <1.0: STT-generated transcript
- 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT])
## Processing Details
This dataset was created using streaming processing to handle large-scale data without requiring full downloads.
Audio has been standardized to 16kHz mono with -20dB volume normalization.
|
AlignCoder/Data4AlignCoder | AlignCoder | 2025-06-03T04:42:08Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-03T04:35:42Z | null | ---
license: apache-2.0
---
|
cat-claws/trial | cat-claws | 2025-06-03T04:40:34Z | 983 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-14T14:01:17Z | null | ---
dataset_info:
- config_name: 01-simclr-train
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 115010021.0
num_examples: 50000
download_size: 119141133
dataset_size: 115010021.0
- config_name: 01-some-train
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 115063524.0
num_examples: 50000
download_size: 119191831
dataset_size: 115063524.0
- config_name: 01-some-train-logistic
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 115158757.0
num_examples: 50000
download_size: 119298218
dataset_size: 115158757.0
- config_name: resnet18-ad2-1
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 127551564.0
num_examples: 50000
download_size: 132324086
dataset_size: 127551564.0
- config_name: resnet18-ad2-3
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 116539015.0
num_examples: 50000
download_size: 120706833
dataset_size: 116539015.0
- config_name: resnet18-ad2-3-1
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 119061740.0
num_examples: 50000
download_size: 123509854
dataset_size: 119061740.0
- config_name: resnet18-ad2-3-2
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 119499725.0
num_examples: 50000
download_size: 124059316
dataset_size: 119499725.0
- config_name: resnet18-ad2-3-3
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 120428457.0
num_examples: 50000
download_size: 125053566
dataset_size: 120428457.0
- config_name: resnet18-ad2-3-4
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 118487076.0
num_examples: 50000
download_size: 122619326
dataset_size: 118487076.0
- config_name: resnet18-ad2-3-5
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 118035839.0
num_examples: 50000
download_size: 122385186
dataset_size: 118035839.0
- config_name: resnet18-ad2-3-6
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 119294931.0
num_examples: 50000
download_size: 123779321
dataset_size: 119294931.0
- config_name: resnet18-ad2-3-7sgd
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 116700604.0
num_examples: 50000
download_size: 120859494
dataset_size: 116700604.0
- config_name: resnet18-eps-4-iclr23
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 114497260.0
num_examples: 50000
download_size: 118548892
dataset_size: 114497260.0
- config_name: resnet18-erm
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 115159802.0
num_examples: 50000
download_size: 119309499
dataset_size: 115159802.0
- config_name: resnet18-erm-normalise
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 123028492.0
num_examples: 50000
download_size: 127716123
dataset_size: 123028492.0
- config_name: resnet18-retrain
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 119088361.0
num_examples: 50000
download_size: 123597988
dataset_size: 119088361.0
- config_name: resnet18-retrain-1
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 121989386.0
num_examples: 50000
download_size: 126676508
dataset_size: 121989386.0
- config_name: resnet18-retrain-2
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 122410729.0
num_examples: 50000
download_size: 127124679
dataset_size: 122410729.0
- config_name: resnet18-retrain-3
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 120728027.0
num_examples: 50000
download_size: 125359985
dataset_size: 120728027.0
- config_name: resnet18-some-train-85
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 115357061.0
num_examples: 50000
download_size: 119516504
dataset_size: 115357061.0
- config_name: wideresnet28-erm-normalise
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 123374597.0
num_examples: 50000
download_size: 128085356
dataset_size: 123374597.0
configs:
- config_name: 01-simclr-train
data_files:
- split: train
path: 01-simclr-train/train-*
- config_name: 01-some-train
data_files:
- split: train
path: 01-some-train/train-*
- config_name: 01-some-train-logistic
data_files:
- split: train
path: 01-some-train-logistic/train-*
- config_name: resnet18-ad2-1
data_files:
- split: train
path: resnet18-ad2-1/train-*
- config_name: resnet18-ad2-3
data_files:
- split: train
path: resnet18-ad2-3/train-*
- config_name: resnet18-ad2-3-1
data_files:
- split: train
path: resnet18-ad2-3-1/train-*
- config_name: resnet18-ad2-3-2
data_files:
- split: train
path: resnet18-ad2-3-2/train-*
- config_name: resnet18-ad2-3-3
data_files:
- split: train
path: resnet18-ad2-3-3/train-*
- config_name: resnet18-ad2-3-4
data_files:
- split: train
path: resnet18-ad2-3-4/train-*
- config_name: resnet18-ad2-3-5
data_files:
- split: train
path: resnet18-ad2-3-5/train-*
- config_name: resnet18-ad2-3-6
data_files:
- split: train
path: resnet18-ad2-3-6/train-*
- config_name: resnet18-ad2-3-7sgd
data_files:
- split: train
path: resnet18-ad2-3-7sgd/train-*
- config_name: resnet18-eps-4-iclr23
data_files:
- split: train
path: resnet18-eps-4-iclr23/train-*
- config_name: resnet18-erm
data_files:
- split: train
path: resnet18-erm/train-*
- config_name: resnet18-erm-normalise
data_files:
- split: train
path: resnet18-erm-normalise/train-*
- config_name: resnet18-retrain
data_files:
- split: train
path: resnet18-retrain/train-*
- config_name: resnet18-retrain-1
data_files:
- split: train
path: resnet18-retrain-1/train-*
- config_name: resnet18-retrain-2
data_files:
- split: train
path: resnet18-retrain-2/train-*
- config_name: resnet18-retrain-3
data_files:
- split: train
path: resnet18-retrain-3/train-*
- config_name: resnet18-some-train-85
data_files:
- split: train
path: resnet18-some-train-85/train-*
- config_name: wideresnet28-erm-normalise
data_files:
- split: train
path: wideresnet28-erm-normalise/train-*
---
|
blue01223/math_splits | blue01223 | 2025-06-03T04:28:04Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T04:18:41Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: generation1
dtype: string
splits:
- name: split_0
num_bytes: 101481462
num_examples: 6209
- name: split_1
num_bytes: 96316459
num_examples: 6209
- name: split_2
num_bytes: 100568237
num_examples: 6209
- name: split_3
num_bytes: 114766966
num_examples: 6209
- name: split_4
num_bytes: 118059166
num_examples: 6208
- name: split_5
num_bytes: 118073198
num_examples: 6208
- name: split_6
num_bytes: 66327312
num_examples: 6208
- name: split_7
num_bytes: 87800818
num_examples: 6208
download_size: 350526359
dataset_size: 803393618
configs:
- config_name: default
data_files:
- split: split_0
path: data/split_0-*
- split: split_1
path: data/split_1-*
- split: split_2
path: data/split_2-*
- split: split_3
path: data/split_3-*
- split: split_4
path: data/split_4-*
- split: split_5
path: data/split_5-*
- split: split_6
path: data/split_6-*
- split: split_7
path: data/split_7-*
---
|
ai4bharat/Indic-Rag-Suite | ai4bharat | 2025-06-03T04:24:07Z | 468 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"multilinguality:multilingual",
"source_datasets:original",
"language:as",
"language:bn",
"language:en",
"language:gu",
"language:hi",
"language:ks",
"language:mai",
"language:ml",
"language:mni",
"language:mr",
"language:ne",
"language:or",
"language:pa",
"language:sat",
"language:ta",
"language:te",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2506.01615",
"region:us",
"indian-languages",
"multilingual",
"indic",
"qa-dataset"
] | [
"question-answering",
"text-generation"
] | 2025-06-01T20:02:09Z | null | ---
license: mit
task_categories:
- question-answering
- text-generation
language:
- as
- bn
- en
- gu
- hi
- ks
- mai
- ml
- mni
- mr
- ne
- or
- pa
- sat
- ta
- te
multilinguality: multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- indian-languages
- multilingual
- indic
- qa-dataset
pretty_name: "Multilingual Indic RAG Suite"
configs:
- config_name: as
default: false
description: "Assamese language subset"
- config_name: bn
default: false
description: "Bengali language subset"
- config_name: en
default: false
description: "English language subset"
- config_name: gu
default: false
description: "Gujarati language subset"
- config_name: hi
default: false
description: "Hindi language subset"
- config_name: ks
default: false
description: "Kashmiri language subset"
- config_name: mai
default: false
description: "Maithili language subset"
- config_name: ml
default: false
description: "Malayalam language subset"
- config_name: mni
default: false
description: "Manipuri language subset"
- config_name: mr
default: false
description: "Marathi language subset"
- config_name: ne
default: false
description: "Nepali language subset"
- config_name: or
default: false
description: "Odia language subset"
- config_name: pa
default: false
description: "Punjabi language subset"
- config_name: sat
default: false
description: "Santali language subset"
- config_name: ta
default: false
description: "Tamil language subset"
- config_name: te
default: false
description: "Telugu language subset"
---
# 🌏 Multilingual Indic RAG Suite
## 🚀 Quick Start - Load Individual Languages (RECOMMENDED)
```python
from datasets import load_dataset
# Load ONLY Hindi data (fast and efficient!)
hindi_data = load_dataset("ai4bharat/Indic-Rag-Suite", name="hi")
print(f"Hindi samples: {len(hindi_data['train'])}")
# Load ONLY Bengali data
bengali_data = load_dataset("ai4bharat/Indic-Rag-Suite", name="bn")
print(f"Bengali samples: {len(bengali_data['train'])}")
# Access the data directly
for example in hindi_data['train'].select(range(3)):
print(f"Q: {example['question']}")
print(f"A: {example['answer']}")
```
## 📊 Available Languages (16 total)
| Code | Language | Load Command |
|------|----------|--------------|
| `as` | Assamese | `load_dataset('ai4bharat/Indic-Rag-Suite', name='as')` |
| `bn` | Bengali | `load_dataset('ai4bharat/Indic-Rag-Suite', name='bn')` |
| `en` | English | `load_dataset('ai4bharat/Indic-Rag-Suite', name='en')` |
| `gu` | Gujarati | `load_dataset('ai4bharat/Indic-Rag-Suite', name='gu')` |
| `hi` | Hindi | `load_dataset('ai4bharat/Indic-Rag-Suite', name='hi')` |
| `ks` | Kashmiri | `load_dataset('ai4bharat/Indic-Rag-Suite', name='ks')` |
| `mai` | Maithili | `load_dataset('ai4bharat/Indic-Rag-Suite', name='mai')` |
| `ml` | Malayalam | `load_dataset('ai4bharat/Indic-Rag-Suite', name='ml')` |
| `mni` | Manipuri | `load_dataset('ai4bharat/Indic-Rag-Suite', name='mni')` |
| `mr` | Marathi | `load_dataset('ai4bharat/Indic-Rag-Suite', name='mr')` |
| `ne` | Nepali | `load_dataset('ai4bharat/Indic-Rag-Suite', name='ne')` |
| `or` | Odia | `load_dataset('ai4bharat/Indic-Rag-Suite', name='or')` |
| `pa` | Punjabi | `load_dataset('ai4bharat/Indic-Rag-Suite', name='pa')` |
| `sat` | Santali | `load_dataset('ai4bharat/Indic-Rag-Suite', name='sat')` |
| `ta` | Tamil | `load_dataset('ai4bharat/Indic-Rag-Suite', name='ta')` |
| `te` | Telugu | `load_dataset('ai4bharat/Indic-Rag-Suite', name='te')` |
## 💡 Usage Examples
### Single Language Loading (Fastest)
```python
from datasets import load_dataset
# Method 1: Direct loading
dataset = load_dataset("ai4bharat/Indic-Rag-Suite", name="hi") # Only Hindi
train_data = dataset['train']
# Method 2: With streaming for large files
dataset = load_dataset("ai4bharat/Indic-Rag-Suite", name="hi", streaming=True)
for example in dataset['train']:
print(example['question'])
break
```
### Multiple Languages
```python
# Load specific languages you need
languages = ['hi', 'bn', 'ta', 'en']
datasets = {}
for lang in languages:
datasets[lang] = load_dataset("ai4bharat/Indic-Rag-Suite", name=lang)
print(f"{lang}: {len(datasets[lang]['train'])} samples")
```
### Data Processing
```python
# Load and process
dataset = load_dataset("ai4bharat/Indic-Rag-Suite", name="hi")
train_data = dataset['train']
# Convert to pandas for analysis
import pandas as pd
df = train_data.to_pandas()
print(df.head())
# Filter by criteria
long_questions = train_data.filter(lambda x: len(x['question']) > 100)
```
## 📋 Dataset Structure
```python
{
"text": "Question: भारत की राजधानी क्या है? | Answer: भारत की राजधानी नई दिल्ली है। | Reasoning: संविधान के अनुसार...",
"language": "hi",
"question": "भारत की राजधानी क्या है?",
"answer": "भारत की राजधानी नई दिल्ली है।",
"reasoning": "संविधान और प्रशासनिक तथ्यों के आधार पर...",
"paragraph": "विकिपीडिया से संदर्भ पैराग्राफ...",
"title": "भारत",
"wiki_id": "14533",
"url": "https://hi.wikipedia.org/wiki/भारत",
"source_lang": "hi",
"meta": "{\"model_name\": \"Meta-Llama-3.3-70B-Instruct\", ...}"
}
```
## ⚡ Performance & Tips
- **Always use `name` parameter**: Loads only specified language
- **Access train split**: Use `dataset['train']` to get the data
- **Use streaming**: `streaming=True` for memory efficiency
- **Partial loading**: `split="train[:1000]"` for testing
- **Batch processing**: Use `.map()` for efficient processing
## 🎯 Use Cases
- 🤖 **RAG Systems**: Retrieval-augmented generation
- ❓ **QA Training**: Question-answering model training
- 🔄 **Cross-lingual**: Transfer learning research
- 📚 **Language Models**: Fine-tuning multilingual models
## 📖 Citation
If you use IndicRAGSuite in your research, please cite:
```bibtex
@dataset{indic_ragsuite_2025,
  title={IndicRAGSuite: Large-Scale Datasets and a Benchmark for Indian Language RAG Systems},
  author={Pasunuti Prasanjith and Prathmesh B More and Anoop Kunchukuttan and Raj Dabre},
  year={2025},
  journal={arXiv preprint arXiv:2506.01615},
  url={https://huggingface.co/datasets/ai4bharat/IndicMSMARCO}
}
```
---
*Optimized for individual language loading • Built for multilingual NLP*
|
YongqiLi/PCogAlignBench | YongqiLi | 2025-06-03T03:45:15Z | 123 | 0 | [
"license:cc-by-4.0",
"modality:image",
"arxiv:2506.00930",
"region:us"
] | [] | 2025-05-31T02:22:11Z | null | ---
configs:
- config_name: default
data_files:
- split: HCMAS_train
path: "version_v4/HCMAS-train.json"
- split: HCMAS_test
path: "version_v4/HCMAS-test.json"
- split: HCSHR_train
path: "version_v4/HCSHR-train.json"
- split: HCSHR_test
path: "version_v4/HCSHR-test.json"
license: cc-by-4.0
---
# Aligning VLM Assistants with Personalized Situated Cognition (ACL 2025 main)
[](https://github.com/liyongqi2002/PCogAlign)
[](https://huggingface.co/datasets/YongqiLi/PCogAlignBench)
[](https://arxiv.org/abs/2506.00930)
This repository contains the constructed benchmark in our ACL 2025 main paper **"Aligning VLM Assistants with Personalized Situated Cognition"**.
> ⚠️ This project is for academic research only and not intended for commercial use.
## Abstract
Vision-language models (VLMs) aligned with general human objectives, such as being harmless and hallucination-free, have become valuable assistants to humans in managing visual tasks.
However, people with diverse backgrounds have different cognition even in the same situation. Consequently, they may hold personalized expectations for VLM assistants.
This highlights the urgent need to align VLM assistants with personalized situated cognition for real-world assistance.
To study this problem, we first simplify it by characterizing individuals based on the sociological concept of Role-Set. Then, we propose to evaluate the individuals' actions to examine whether the personalized alignment is achieved.
Further, we construct a benchmark named PCogAlignBench, which includes 18k instances and 20 individuals with different Role-Sets.
Finally, we present a framework called PCogAlign, which constructs a cognition-aware and action-based reward model for personalized alignment.
Experimental results and human evaluations demonstrate the reliability of the PCogAlignBench and the effectiveness of our proposed PCogAlign.
## 🙌 Acknowledgments
All datasets and models used are obtained through legal and ethical means. For detailed ethical considerations, please refer to our paper's Ethics Statement section.
## 📬 Contact
For any questions or feedback, feel free to reach out to us at [liyongqi@whu.edu.cn].
---
✨ Thank you for your interest in PCogAlign! Stay tuned for more updates.
|
danganhdat/mhardolov-exams | danganhdat | 2025-06-03T03:41:19Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:33:45Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: language
dtype: string
- name: subject
dtype: string
- name: grade
dtype: int64
- name: question
dtype: string
- name: choices
sequence: string
- name: answer_key
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 965308
num_examples: 1955
- name: test
num_bytes: 234050
num_examples: 488
download_size: 529107
dataset_size: 1199358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
> **Note**: I do not **own** this dataset. All credit goes to the original authors.
> If you use this dataset, please cite the original paper: https://aclanthology.org/2020.emnlp-main.438/
> Please see the original dataset and project at: https://github.com/mhardalov/exams-qa
> The original dataset on Hugging Face: https://huggingface.co/datasets/mhardalov/exams |
xwzagan/fortune-telling | xwzagan | 2025-06-03T03:37:11Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:36:49Z | null | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Response
dtype: string
- name: Complex_CoT
dtype: string
splits:
- name: train
num_bytes: 672845
num_examples: 207
download_size: 448452
dataset_size: 672845
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Trajes123Tip/DatasetTrajesTipicosParaguayos | Trajes123Tip | 2025-06-03T03:35:31Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-03T03:32:30Z | null | ---
license: apache-2.0
---
|
linrany/aime_2024_rep3 | linrany | 2025-06-03T03:32:17Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:32:11Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: origin_question
dtype: string
- name: correct_answer
dtype: string
splits:
- name: train
num_bytes: 31800
num_examples: 90
download_size: 10439
dataset_size: 31800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xwzagan/web-security-distill | xwzagan | 2025-06-03T03:30:21Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:30:13Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: cot
dtype: string
splits:
- name: train
num_bytes: 28402008
num_examples: 2876
download_size: 15372994
dataset_size: 28402008
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CohenQu/deepscalar_RL_hard_1_verl | CohenQu | 2025-06-03T03:21:42Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:21:40Z | null | ---
dataset_info:
features:
- name: data_source
dtype: 'null'
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: extra_info
struct:
- name: index
dtype: int64
- name: no_hint_prompt
dtype: bool
- name: problem
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 1567875
num_examples: 3000
- name: test
num_bytes: 191369
num_examples: 300
download_size: 151914
dataset_size: 1759244
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CohenQu/deepscalar_RL_hard_100_verl | CohenQu | 2025-06-03T03:21:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:21:32Z | null | ---
dataset_info:
features:
- name: data_source
dtype: 'null'
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: extra_info
struct:
- name: index
dtype: int64
- name: no_hint_prompt
dtype: bool
- name: problem
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 1711845
num_examples: 3000
- name: test
num_bytes: 191369
num_examples: 300
download_size: 626494
dataset_size: 1903214
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
r1v3r/auto1_results | r1v3r | 2025-06-03T03:10:34Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:10:20Z | null | ---
dataset_info:
features:
- name: repo
dtype: string
- name: pull_number
dtype: int64
- name: instance_id
dtype: string
- name: issue_numbers
sequence: string
- name: base_commit
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: version
dtype: string
- name: updated_at
dtype: string
- name: environment_setup_commit
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: FAIL_TO_FAIL
sequence: string
- name: PASS_TO_FAIL
sequence: 'null'
- name: source_dir
dtype: string
splits:
- name: train
num_bytes: 3815683
num_examples: 76
download_size: 847655
dataset_size: 3815683
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yejunliang23/3D-Alpaca | yejunliang23 | 2025-06-03T03:08:40Z | 6 | 1 | [
"task_categories:text2text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.01853",
"region:us",
"Multimodal-Large-Language-Model,mesh-generation"
] | [
"text2text-generation"
] | 2025-05-29T11:09:27Z | null | ---
license: mit
library_name: transformers
pipeline_tag: image-to-3d
tags:
- Multimodal-Large-Language-Model,mesh-generation
task_categories:
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# ShapeLLM-Omni: A Native Multimodal LLM for 3D Generation and Understanding
[**Paper**](https://arxiv.org/abs/2506.01853) | [**Project Page**](https://jamesyjl.github.io/ShapeLLM/) | [**Code**](https://github.com/JAMESYJL/ShapeLLM-Omni/)
A subset of the 3D-Alpaca dataset from ShapeLLM-Omni, a native multimodal LLM for 3D generation and understanding.
[Junliang Ye*](https://jamesyjl.github.io/), [Zhengyi Wang*](https://thuwzy.github.io/), [Ruowen Zhao*](https://zhaorw02.github.io/), [Shenghao Xie](), [Jun Zhu](https://ml.cs.tsinghua.edu.cn/~jun/index.shtml)
<p align="center">
<img src="assets/dataset.png"/>
</p>
Recently, the powerful text-to-image capabilities of GPT-4o have led to growing appreciation for native multimodal large language models. However, its multimodal capabilities remain confined to images and text. Yet beyond images, the ability to understand and generate 3D content is equally crucial. To address this gap, we propose ShapeLLM-Omni, a native 3D large language model capable of understanding and generating 3D assets and text in any sequence. First, we train a 3D vector-quantized variational autoencoder (VQ-VAE), which maps 3D objects into a discrete latent space to achieve efficient and accurate shape representation and reconstruction. Building upon the 3D-aware discrete tokens, we construct a large-scale continuous training dataset named 3D-Alpaca, encompassing generation, comprehension, and editing, thus providing rich resources for future research and training. Finally, we perform instruction-based training of the Qwen2.5-VL-7B-Instruct model on the 3D-Alpaca dataset. Our work provides an effective first step toward extending multimodal models with basic 3D capabilities, and contributes to future research in 3D-native AI.
<p align="center">
<img src="assets/head.jpg"/>
</p>
## Dataset Structure
```
# 3D-Alpaca
./
├── image_data.tar.gz
│ ├── 0
│ │ ├── original.png
│ │ └── after_edited.png
│ ├── 1
│ ├── 2
│ ├── [...]
│ ├── 62042
│ └── prompt_list.pt: {0:{"prompt":...,"editing_prompt":...},...}
└─── edit_data.json
``` |
youjiang97/test_dataset11 | youjiang97 | 2025-06-03T03:05:48Z | 44 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-30T06:28:45Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# test_dataset11
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
OpenDriveLab/MTGS | OpenDriveLab | 2025-06-03T03:04:38Z | 139 | 2 | [
"license:cc-by-nc-sa-4.0",
"arxiv:2503.12552",
"region:us"
] | [] | 2025-05-27T05:20:37Z | null | ---
license: cc-by-nc-sa-4.0
---
# MTGS: Multi-Traversal Gaussian Splatting
The data and checkpoints used in the paper *MTGS: Multi-Traversal Gaussian Splatting* (https://arxiv.org/abs/2503.12552).
The official code is released at https://github.com/OpenDriveLab/MTGS.
✒️ Tianyu Li\*, Yihang Qiu\*, Zhenhua Wu\*, Carl Lindström, Peng Su, Matthias Nießner, Hongyang Li
📧 Primary Contact: Tianyu Li (tianyu@opendrivelab.com)
💼 Joint effort by **Shanghai Innovation Institute (SII)** and **OpenDriveLab at The University of Hong Kong**.
|
BestWishYsh/OpenS2V-Eval | BestWishYsh | 2025-06-03T02:47:45Z | 427 | 2 | [
"task_categories:text-to-video",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.20292",
"region:us",
"subject-to-video",
"text-to-video",
"image-to-video",
"video-generation",
"large-scale",
"benchmark",
"evaluation"
] | [
"text-to-video"
] | 2025-05-14T14:30:05Z | null | ---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- text-to-video
tags:
- subject-to-video
- text-to-video
- image-to-video
- video-generation
- large-scale
- benchmark
- evaluation
configs:
- config_name: default
data_files:
- split: open_domain
path: Open-Domain_Eval.json
- split: human_domain
path: Human-Domain_Eval.json
- split: single_domain
path: Single-Domain_Eval.json
---
<div align=center>
<img src="https://github.com/PKU-YuanGroup/OpenS2V-Nexus/blob/main/__assets__/OpenS2V-Nexus_logo.png?raw=true" width="300px">
</div>
<h2 align="center"> <a href="https://pku-yuangroup.github.io/OpenS2V-Nexus/">OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest update. </h5>
## ✨ Summary
**OpenS2V-Eval** introduces 180 prompts from seven major categories of S2V, incorporating both real and synthetic test data. Furthermore, to accurately align human preferences with S2V benchmarks, we propose three automatic metrics: **NexusScore**, **NaturalScore**, and **GmeScore**, which separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on this, we conduct a comprehensive evaluation of 18 representative S2V models, highlighting their strengths and weaknesses across different types of content.
This benchmark is presented in the paper: [OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation](https://huggingface.co/papers/2505.20292)
## Evaluate Your Own Models
For instructions on evaluating your customized model using OpenS2V-Eval, please refer to [this guide](https://github.com/PKU-YuanGroup/OpenS2V-Nexus/tree/main/eval).
## Get Videos Generated by Different S2V models
For details on the videos generated by various S2V models, please refer to [this link](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval/tree/main/Results).
## Description
- **Repository:** [Code](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), [Page](https://pku-yuangroup.github.io/OpenS2V-Nexus/), [Dataset](https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M), [Benchmark](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval)
- **Paper:** [https://huggingface.co/papers/2505.20292](https://huggingface.co/papers/2505.20292)
- **Point of Contact:** [Shenghai Yuan](shyuan-cs@hotmail.com)
## Citation
If you find our paper and code useful in your research, please consider giving a star and citation.
```BibTeX
@article{yuan2025opens2v,
title={OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation},
author={Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Lin, Bin and Luo, Jiebo and Yuan, Li},
journal={arXiv preprint arXiv:2505.20292},
year={2025}
}
``` |
mothnaZl/seq_dis_T0.0-Qwen2.5-7B-best_of_n-VLLM-Skywork-o1-Open-PRM-Qwen-2.5-7B-completions | mothnaZl | 2025-06-03T02:42:21Z | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T08:16:56Z | null | ---
dataset_info:
config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
- name: pass@n
dtype: float64
- name: div_avg
dtype: float64
- name: div_sum
dtype: float64
- name: div_mean
dtype: float64
- name: Unigrams
dtype: float64
- name: Bigrams
dtype: float64
- name: Trigrams
dtype: float64
- name: Fourgrams
dtype: float64
- name: pass_tag
sequence: 'null'
- name: BM25
dtype: int64
- name: pred_entropy
dtype: float64
splits:
- name: train
num_bytes: 928
num_examples: 8
download_size: 7155
dataset_size: 928
configs:
- config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
data_files:
- split: train
path: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals/train-*
---
|
mothnaZl/seq_dis_T0.6-Qwen2.5-7B-best_of_n-VLLM-Skywork-o1-Open-PRM-Qwen-2.5-7B-completions | mothnaZl | 2025-06-03T02:41:33Z | 86 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T08:08:11Z | null | ---
dataset_info:
config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
- name: pass@n
dtype: float64
- name: div_avg
dtype: float64
- name: div_sum
dtype: float64
- name: div_mean
dtype: float64
- name: Unigrams
dtype: float64
- name: Bigrams
dtype: float64
- name: Trigrams
dtype: float64
- name: Fourgrams
dtype: float64
- name: pass_tag
sequence: 'null'
- name: BM25
dtype: int64
- name: pred_entropy
dtype: float64
splits:
- name: train
num_bytes: 928
num_examples: 8
download_size: 7123
dataset_size: 928
configs:
- config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
data_files:
- split: train
path: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals/train-*
---
|
mothnaZl/seq_dis_T0.8-Qwen2.5-7B-best_of_n-VLLM-Skywork-o1-Open-PRM-Qwen-2.5-7B-completions | mothnaZl | 2025-06-03T02:36:05Z | 62 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-15T20:20:59Z | null | ---
dataset_info:
config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
- name: pass@n
dtype: float64
- name: div_avg
dtype: float64
- name: div_sum
dtype: float64
- name: div_mean
dtype: float64
- name: Unigrams
dtype: float64
- name: Bigrams
dtype: float64
- name: Trigrams
dtype: float64
- name: Fourgrams
dtype: float64
- name: pass_tag
sequence: 'null'
- name: BM25
dtype: int64
- name: pred_entropy
dtype: float64
splits:
- name: train
num_bytes: 928
num_examples: 8
download_size: 7156
dataset_size: 928
configs:
- config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals
data_files:
- split: train
path: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals/train-*
---
|
withpi/aiewf_workshop_data_no_md_json | withpi | 2025-06-03T02:34:50Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T02:34:48Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Rating
dtype: string
- name: Raw meeting transcript
dtype: string
- name: Extracted output
dtype: string
splits:
- name: train
num_bytes: 550889
num_examples: 126
download_size: 183662
dataset_size: 550889
---
# Dataset Card for "aiewf_workshop_data_no_md_json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |