Dataset Viewer (auto-converted to Parquet)

| Column | Type | Min | Max |
|------------------|------------------------|---------------------|---------------------|
| datasetId | large_string (length) | 6 | 116 |
| author | large_string (length) | 2 | 42 |
| last_modified | large_string (date) | 2021-04-29 15:34:29 | 2025-06-04 04:15:01 |
| downloads | int64 | 0 | 3.97M |
| likes | int64 | 0 | 7.74k |
| tags | large_list (length) | 1 | 2.03k |
| task_categories | large_list (length) | 0 | 48 |
| createdAt | large_string (date) | 2022-03-02 23:29:22 | 2025-06-04 04:13:01 |
| trending_score | float64 | 0 | 36 |
| card | large_string (length) | 31 | 1.01M |
datasetId: ChavyvAkvar/synthetic-trades-XRP-batch-37
author: ChavyvAkvar
last_modified: 2025-06-03T22:05:30Z
downloads: 0
likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-03T22:04:33Z
trending_score: null
card:
--- dataset_info: features: - name: scenario_id dtype: string - name: final_pnl_ratio dtype: float64 - name: max_drawdown dtype: float64 - name: total_trades dtype: int64 - name: synthetic_ohlc_open sequence: float64 - name: synthetic_ohlc_high sequence: float64 - name: synthetic_ohlc_low sequence: float64 - name: synthetic_ohlc_close sequence: float64 - name: garch_params_used_for_sim_str dtype: string - name: strategy_params_str dtype: string - name: strategy_exit_rules_str dtype: string splits: - name: train num_bytes: 923448393 num_examples: 1000 download_size: 924485504 dataset_size: 923448393 configs: - config_name: default data_files: - split: train path: data/train-* ---
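The card above only carries the auto-generated `dataset_info` YAML; as a hedged illustration of how the listed OHLC sequence features could be pulled into arrays, here is a minimal sketch assuming the standard `datasets` API and the single `train` split declared in that YAML:

```python
from datasets import load_dataset
import numpy as np

# Assumption: the default config exposes one "train" split, as the card's YAML declares.
ds = load_dataset("ChavyvAkvar/synthetic-trades-XRP-batch-37", split="train")

row = ds[0]
# Each synthetic_ohlc_* field is a sequence of float64 values for one simulated scenario.
ohlc = np.stack(
    [
        row["synthetic_ohlc_open"],
        row["synthetic_ohlc_high"],
        row["synthetic_ohlc_low"],
        row["synthetic_ohlc_close"],
    ],
    axis=1,
)  # shape: (num_steps, 4)

print(row["scenario_id"], row["final_pnl_ratio"], ohlc.shape)
```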
datasetId: john-1111/x_dataset_060232
author: john-1111
last_modified: 2025-06-03T16:43:33Z
downloads: 1,046
likes: 0
tags: [ "task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "text-classification", "token-classification", "question-answering", "summarization", "text-generation" ]
createdAt: 2025-01-27T06:45:15Z
trending_score: null
card:
--- license: mit multilinguality: - multilingual source_datasets: - original task_categories: - text-classification - token-classification - question-answering - summarization - text-generation task_ids: - sentiment-analysis - topic-classification - named-entity-recognition - language-modeling - text-scoring - multi-class-classification - multi-label-classification - extractive-qa - news-articles-summarization --- # Bittensor Subnet 13 X (Twitter) Dataset <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> ## Dataset Description - **Repository:** john-1111/x_dataset_060232 - **Subnet:** Bittensor Subnet 13 - **Miner Hotkey:** 5CMJFHrRUZjZ3pD41DmrrLPXwqqre8RYVLHYiPTFUQaukL3a ### Miner Data Compliance Agreement In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md). ### Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe). ### Supported Tasks The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example: - Sentiment Analysis - Trend Detection - Content Analysis - User Behavior Modeling ### Languages Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation. ## Dataset Structure ### Data Instances Each instance represents a single tweet with the following fields: ### Data Fields - `text` (string): The main content of the tweet. - `label` (string): Sentiment or topic category of the tweet. - `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present. - `datetime` (string): The date when the tweet was posted. - `username_encoded` (string): An encoded version of the username to maintain user privacy. - `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present. ### Data Splits This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp. ## Dataset Creation ### Source Data Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines. ### Personal and Sensitive Information All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information. ## Considerations for Using the Data ### Social Impact and Biases Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. 
This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population. ### Limitations - Data quality may vary due to the decentralized nature of collection and preprocessing. - The dataset may contain noise, spam, or irrelevant content typical of social media platforms. - Temporal biases may exist due to real-time collection methods. - The dataset is limited to public tweets and does not include private accounts or direct messages. - Not all tweets contain hashtags or URLs. ## Additional Information ### Licensing Information The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use. ### Citation Information If you use this dataset in your research, please cite it as follows: ``` @misc{john-11112025datauniversex_dataset_060232, title={The Data Universe Datasets: The finest collection of social media data the web has to offer}, author={john-1111}, year={2025}, url={https://huggingface.co/datasets/john-1111/x_dataset_060232}, } ``` ### Contributions To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms. ## Dataset Statistics [This section is automatically updated] - **Total Instances:** 1300883 - **Date Range:** 2025-01-02T00:00:00Z to 2025-05-24T00:00:00Z - **Last Updated:** 2025-06-03T16:43:32Z ### Data Distribution - Tweets with hashtags: 18.29% - Tweets without hashtags: 81.71% ### Top 10 Hashtags For full statistics, please refer to the `stats.json` file in the repository. | Rank | Topic | Total Count | Percentage | |------|-------|-------------|-------------| | 1 | NULL | 1062956 | 81.71% | | 2 | #thenextprinceep4 | 10897 | 0.84% | | 3 | #箱根駅伝 | 8147 | 0.63% | | 4 | #tiktok | 8034 | 0.62% | | 5 | #thameposeriesep9 | 7605 | 0.58% | | 6 | #riyadh | 6755 | 0.52% | | 7 | #اااااعلانك_ترند_oち32ち9111ち | 5162 | 0.40% | | 8 | #zelena | 4878 | 0.37% | | 9 | #smackdown | 4844 | 0.37% | | 10 | #कबीर_परमेश्वर_निर्वाण_दिवस | 4843 | 0.37% | ## Update History | Date | New Instances | Total Instances | |------|---------------|-----------------| | 2025-01-27T06:45:45Z | 471976 | 471976 | | 2025-02-18T03:42:10Z | 506494 | 978470 | | 2025-06-03T16:43:32Z | 322413 | 1300883 |
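The card recommends building custom splits from the `datetime` field; a minimal sketch of that, assuming the standard `datasets` API, a default `train` split, and ISO-formatted datetime strings (so lexicographic comparison matches chronological order):

```python
from datasets import load_dataset

# Assumption: the default parquet config exposes a single "train" split.
ds = load_dataset("john-1111/x_dataset_060232", split="train")

# The card lists `datetime` as a string field; split on a cutoff date as suggested.
cutoff = "2025-04-01"
train = ds.filter(lambda row: row["datetime"] < cutoff)
holdout = ds.filter(lambda row: row["datetime"] >= cutoff)

print(len(train), len(holdout))
print(train[0]["text"], train[0]["tweet_hashtags"])
```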
datasetId: james-1111/x_dataset_0306116
author: james-1111
last_modified: 2025-06-03T16:12:06Z
downloads: 1,211
likes: 0
tags: [ "task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "text-classification", "token-classification", "question-answering", "summarization", "text-generation" ]
createdAt: 2025-01-25T07:09:54Z
trending_score: null
card:
--- license: mit multilinguality: - multilingual source_datasets: - original task_categories: - text-classification - token-classification - question-answering - summarization - text-generation task_ids: - sentiment-analysis - topic-classification - named-entity-recognition - language-modeling - text-scoring - multi-class-classification - multi-label-classification - extractive-qa - news-articles-summarization --- # Bittensor Subnet 13 X (Twitter) Dataset <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> ## Dataset Description - **Repository:** james-1111/x_dataset_0306116 - **Subnet:** Bittensor Subnet 13 - **Miner Hotkey:** 5CBBUJwfT1ygAPTFaoEQ35qTuqwM5LyHxR2sjZ8isW6B9njQ ### Miner Data Compliance Agreement In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md). ### Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe). ### Supported Tasks The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example: - Sentiment Analysis - Trend Detection - Content Analysis - User Behavior Modeling ### Languages Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation. ## Dataset Structure ### Data Instances Each instance represents a single tweet with the following fields: ### Data Fields - `text` (string): The main content of the tweet. - `label` (string): Sentiment or topic category of the tweet. - `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present. - `datetime` (string): The date when the tweet was posted. - `username_encoded` (string): An encoded version of the username to maintain user privacy. - `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present. ### Data Splits This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp. ## Dataset Creation ### Source Data Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines. ### Personal and Sensitive Information All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information. ## Considerations for Using the Data ### Social Impact and Biases Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. 
This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population. ### Limitations - Data quality may vary due to the decentralized nature of collection and preprocessing. - The dataset may contain noise, spam, or irrelevant content typical of social media platforms. - Temporal biases may exist due to real-time collection methods. - The dataset is limited to public tweets and does not include private accounts or direct messages. - Not all tweets contain hashtags or URLs. ## Additional Information ### Licensing Information The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use. ### Citation Information If you use this dataset in your research, please cite it as follows: ``` @misc{james-11112025datauniversex_dataset_0306116, title={The Data Universe Datasets: The finest collection of social media data the web has to offer}, author={james-1111}, year={2025}, url={https://huggingface.co/datasets/james-1111/x_dataset_0306116}, } ``` ### Contributions To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms. ## Dataset Statistics [This section is automatically updated] - **Total Instances:** 3933757 - **Date Range:** 2025-01-02T00:00:00Z to 2025-05-24T00:00:00Z - **Last Updated:** 2025-06-03T16:12:05Z ### Data Distribution - Tweets with hashtags: 4.31% - Tweets without hashtags: 95.69% ### Top 10 Hashtags For full statistics, please refer to the `stats.json` file in the repository. | Rank | Topic | Total Count | Percentage | |------|-------|-------------|-------------| | 1 | NULL | 1062956 | 86.24% | | 2 | #thenextprinceep4 | 10897 | 0.88% | | 3 | #箱根駅伝 | 8147 | 0.66% | | 4 | #tiktok | 8034 | 0.65% | | 5 | #riyadh | 7740 | 0.63% | | 6 | #thameposeriesep9 | 7605 | 0.62% | | 7 | #اااااعلانك_ترند_oち32ち9111ち | 5162 | 0.42% | | 8 | #zelena | 4878 | 0.40% | | 9 | #smackdown | 4844 | 0.39% | | 10 | #कबीर_परमेश्वर_निर्वाण_दिवस | 4843 | 0.39% | ## Update History | Date | New Instances | Total Instances | |------|---------------|-----------------| | 2025-01-25T07:07:31Z | 453526 | 453526 | | 2025-01-25T07:07:59Z | 453526 | 907052 | | 2025-01-25T07:08:28Z | 453526 | 1360578 | | 2025-01-25T07:08:56Z | 446896 | 1807474 | | 2025-01-25T07:09:24Z | 446896 | 2254370 | | 2025-01-25T07:09:52Z | 446896 | 2701266 | | 2025-01-25T07:10:21Z | 446896 | 3148162 | | 2025-02-18T03:39:54Z | 467290 | 3615452 | | 2025-06-03T16:12:05Z | 318305 | 3933757 |
datasetId: william-1111/x_dataset_010718
author: william-1111
last_modified: 2025-06-03T15:30:24Z
downloads: 1,062
likes: 0
tags: [ "task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "text-classification", "token-classification", "question-answering", "summarization", "text-generation" ]
createdAt: 2025-01-25T07:05:51Z
trending_score: null
card:
--- license: mit multilinguality: - multilingual source_datasets: - original task_categories: - text-classification - token-classification - question-answering - summarization - text-generation task_ids: - sentiment-analysis - topic-classification - named-entity-recognition - language-modeling - text-scoring - multi-class-classification - multi-label-classification - extractive-qa - news-articles-summarization --- # Bittensor Subnet 13 X (Twitter) Dataset <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> ## Dataset Description - **Repository:** william-1111/x_dataset_010718 - **Subnet:** Bittensor Subnet 13 - **Miner Hotkey:** 5EkBRcprHEuZXqNKYg3BaEfSeLqiEQGre1AhHWGtckeS8F36 ### Miner Data Compliance Agreement In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md). ### Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe). ### Supported Tasks The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example: - Sentiment Analysis - Trend Detection - Content Analysis - User Behavior Modeling ### Languages Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation. ## Dataset Structure ### Data Instances Each instance represents a single tweet with the following fields: ### Data Fields - `text` (string): The main content of the tweet. - `label` (string): Sentiment or topic category of the tweet. - `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present. - `datetime` (string): The date when the tweet was posted. - `username_encoded` (string): An encoded version of the username to maintain user privacy. - `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present. ### Data Splits This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp. ## Dataset Creation ### Source Data Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines. ### Personal and Sensitive Information All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information. ## Considerations for Using the Data ### Social Impact and Biases Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. 
This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population. ### Limitations - Data quality may vary due to the decentralized nature of collection and preprocessing. - The dataset may contain noise, spam, or irrelevant content typical of social media platforms. - Temporal biases may exist due to real-time collection methods. - The dataset is limited to public tweets and does not include private accounts or direct messages. - Not all tweets contain hashtags or URLs. ## Additional Information ### Licensing Information The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use. ### Citation Information If you use this dataset in your research, please cite it as follows: ``` @misc{william-11112025datauniversex_dataset_010718, title={The Data Universe Datasets: The finest collection of social media data the web has to offer}, author={william-1111}, year={2025}, url={https://huggingface.co/datasets/william-1111/x_dataset_010718}, } ``` ### Contributions To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms. ## Dataset Statistics [This section is automatically updated] - **Total Instances:** 2655106 - **Date Range:** 2025-01-02T00:00:00Z to 2025-05-24T00:00:00Z - **Last Updated:** 2025-06-03T15:30:23Z ### Data Distribution - Tweets with hashtags: 9.45% - Tweets without hashtags: 90.55% ### Top 10 Hashtags For full statistics, please refer to the `stats.json` file in the repository. | Rank | Topic | Total Count | Percentage | |------|-------|-------------|-------------| | 1 | NULL | 1063379 | 80.90% | | 2 | #sixtonesann | 44706 | 3.40% | | 3 | #thenextprinceep4 | 17391 | 1.32% | | 4 | #mono | 9336 | 0.71% | | 5 | #अध्यात्म_का_बेड़ा_गर्क_करदिया | 8897 | 0.68% | | 6 | #tiktok | 8173 | 0.62% | | 7 | #箱根駅伝 | 8147 | 0.62% | | 8 | #thameposeriesep9 | 7605 | 0.58% | | 9 | #ملعببببب_الإنماء | 7302 | 0.56% | | 10 | #ミッドナイト屋台 | 7210 | 0.55% | ## Update History | Date | New Instances | Total Instances | |------|---------------|-----------------| | 2025-01-25T07:04:53Z | 446896 | 446896 | | 2025-01-25T07:05:21Z | 446896 | 893792 | | 2025-01-25T07:05:50Z | 446896 | 1340688 | | 2025-01-25T07:06:18Z | 446896 | 1787584 | | 2025-02-18T03:35:39Z | 467290 | 2254874 | | 2025-06-03T15:30:23Z | 400232 | 2655106 |
datasetId: Kyleyee/train_data_Helpful_drdpo_preference
author: Kyleyee
last_modified: 2025-06-03T13:36:05Z
downloads: 80
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-03-17T16:07:59Z
trending_score: null
card:
--- dataset_info: features: - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string - name: prompt list: - name: content dtype: string - name: role dtype: string - name: a_1 list: - name: content dtype: string - name: role dtype: string - name: a_2 list: - name: content dtype: string - name: role dtype: string - name: chosen_preference dtype: float64 - name: rejected_preference dtype: float64 - name: a_1_preference dtype: float64 - name: a_2_preference dtype: float64 splits: - name: train num_bytes: 69438428 num_examples: 43835 - name: test num_bytes: 3812201 num_examples: 2354 download_size: 42617495 dataset_size: 73250629 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
datasetId: ManTang034/so101_test
author: ManTang034
last_modified: 2025-06-03T13:32:28Z
downloads: 0
likes: 0
tags: [ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so101", "tutorial" ]
task_categories: [ "robotics" ]
createdAt: 2025-06-03T13:32:11Z
trending_score: null
card:
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so101 - tutorial configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so101", "total_episodes": 10, "total_frames": 5960, "total_tasks": 1, "total_videos": 10, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
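The `meta/info.json` above fixes the parquet layout and feature names; a hedged sketch for inspecting the tabular frames directly with `datasets` follows (the video streams live in separate MP4 files and are not covered here; the `lerobot` package is the intended loader):

```python
from datasets import load_dataset

# Assumption: the card's `data_files: data/*/*.parquet` config loads as a single "train" split.
ds = load_dataset("ManTang034/so101_test", split="train")

frame = ds[0]
# Per meta/info.json, action and observation.state are 6-dim float32 vectors
# (shoulder pan/lift, elbow flex, wrist flex/roll, gripper).
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
print("action:", frame["action"])
print("state:", frame["observation.state"])
```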
datasetId: QuanHoangNgoc/cmp_dataset_dev
author: QuanHoangNgoc
last_modified: 2025-06-03T13:26:28Z
downloads: 0
likes: 0
tags: [ "region:us" ]
task_categories: []
createdAt: 2025-06-03T13:21:44Z
trending_score: null
card:
--- dataset_info: features: - name: text dtype: string - name: audio_file dtype: string - name: audio_array16 sequence: float32 splits: - name: dev num_bytes: 2365130410 num_examples: 1900 download_size: 2365317554 dataset_size: 2365130410 configs: - config_name: default data_files: - split: dev path: data/dev-* ---
datasetId: youssefbelghmi/MNLP_M3_mcqa_dataset_2
author: youssefbelghmi
last_modified: 2025-06-03T13:16:38Z
downloads: 44
likes: 0
tags: [ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:en", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "multiple-choice" ]
createdAt: 2025-06-03T11:12:01Z
trending_score: null
card:
--- annotations_creators: - expert-generated language: - en license: mit multilinguality: - monolingual size_categories: - 10K<n<100K task_categories: - multiple-choice task_ids: - multiple-choice-qa pretty_name: MNLP M3 MCQA Dataset --- # MNLP M3 MCQA Dataset The **MNLP M3 MCQA Dataset** is a carefully curated collection of **Multiple-Choice Question Answering (MCQA)** examples, unified from several academic and benchmark datasets. Developed as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025), this dataset is designed for training and evaluating models on multiple-choice QA tasks, particularly in the **STEM** and general knowledge domains. ## Key Features - ~30,000 MCQA questions - 6 diverse sources: `SciQ`, `OpenBookQA`, `MathQA`, `ARC-Easy`, `ARC-Challenge`, and `MedMCQA` - Each question has exactly 4 options (A–D) and one correct answer - Covers a wide range of topics: science, technology, engineering, mathematics, and general knowledge ## Dataset Structure Each example is a dictionary with the following fields: | Field | Type | Description | |-----------|----------|---------------------------------------------------| | `dataset` | `string` | Source dataset (`sciq`, `openbookqa`, etc.) | | `id` | `string` | Unique identifier for the question | | `question`| `string` | The question text | | `choices` | `list` | List of 4 answer options (corresponding to A–D) | | `answer` | `string` | The correct option, as a letter: `"A"`, `"B"`, `"C"`, or `"D"` | ```markdown Example: ```json { "dataset": "sciq", "id": "sciq_01_00042", "question": "What does a seismograph measure?", "choices": ["Earthquakes", "Rainfall", "Sunlight", "Temperature"], "answer": "A" } ``` ## Source Datasets This dataset combines multiple high-quality MCQA sources to support research and fine-tuning in STEM education and reasoning. The full corpus contains **29,870 multiple-choice questions** from the following sources: | Source (Hugging Face) | Name | Size | Description & Role in the Dataset | | ------------------------------------------- | ------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `allenai/sciq` | **SciQ** | 11,679 | **Science questions** (Physics, Chemistry, Biology, Earth science). Crowdsourced with 4 answer choices and optional supporting evidence. Used to provide **well-balanced, factual STEM questions** at a middle/high-school level. | | `allenai/openbookqa` | **OpenBookQA** | 4,957 | Science exam-style questions requiring **multi-step reasoning** and use of **commonsense or external knowledge**. Contributes more **challenging** and **inference-based** questions. | | `allenai/math_qa` | **MathQA** | 5,000 | Subsample of quantitative math word problems derived from AQuA-RAT, annotated with structured answer options. Introduces **numerical reasoning** and **problem-solving** components into the dataset. | | `allenai/ai2_arc` (config: `ARC-Easy`) | **ARC-Easy** | 2,140 | Science questions at the middle school level. Useful for testing **basic STEM understanding** and **factual recall**. Filtered to retain only valid 4-choice entries. | | `allenai/ai2_arc` (config: `ARC-Challenge`) | **ARC-Challenge** | 1,094 | More difficult science questions requiring **reasoning and inference**. Widely used as a benchmark for evaluating LLMs. Also filtered for clean MCQA format compatibility. 
| | `openlifescienceai/medmcqa` | **MedMCQA** | 5,000 | A subsample of multiple-choice questions on **medical topics** from various exams, filtered for a single-choice format. Contains real-world and domain-specific **clinical reasoning** questions covering various medical disciplines. | ## Intended Applications and Structure This dataset is split into three parts: - `train` (~70%) — for training MCQA models - `validation` (~15%) — for tuning and monitoring performance during training - `test` (~15%) — for final evaluation on unseen questions It is suitable for multiple-choice question answering tasks, especially in the **STEM** domain (Science, Technology, Engineering, Mathematics). ## Author This dataset was created and published by [Youssef Belghmi](https://huggingface.co/youssefbelghmi) as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025).
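Given the field layout and letter-keyed answers described in the card above, a minimal sketch (assuming the standard `datasets` API and the train/validation/test split names the card mentions) for mapping the answer letter to a choice index:

```python
from datasets import load_dataset

# Assumption: the three splits are exposed under the default config as train/validation/test.
ds = load_dataset("youssefbelghmi/MNLP_M3_mcqa_dataset_2")

def answer_index(example):
    # `answer` is a letter A-D; map it to the position in `choices`.
    example["label"] = "ABCD".index(example["answer"])
    return example

train = ds["train"].map(answer_index)
ex = train[0]
print(ex["question"])
for letter, choice in zip("ABCD", ex["choices"]):
    print(f"  {letter}. {choice}")
print("correct:", ex["answer"], "->", ex["label"])
```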
datasetId: haraouikouceil/n
author: haraouikouceil
last_modified: 2025-06-03T11:35:30Z
downloads: 0
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-03T11:35:27Z
trending_score: null
card:
--- dataset_info: features: - name: prompt dtype: string splits: - name: train num_bytes: 31117380 num_examples: 71452 download_size: 2843773 dataset_size: 31117380 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: chenxing1234567890/eval_koch_test3
author: chenxing1234567890
last_modified: 2025-06-03T10:49:37Z
downloads: 0
likes: 0
tags: [ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "tutorial" ]
task_categories: [ "robotics" ]
createdAt: 2025-06-03T10:48:21Z
trending_score: null
card:
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - tutorial configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "koch", "total_episodes": 10, "total_frames": 7066, "total_tasks": 1, "total_videos": 30, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.images.laptop": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.phone": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
datasetId: StormKing99/x_dataset_55139
author: StormKing99
last_modified: 2025-06-03T10:22:59Z
downloads: 1,094
likes: 0
tags: [ "task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "text-classification", "token-classification", "question-answering", "summarization", "text-generation" ]
createdAt: 2025-01-29T01:07:24Z
trending_score: null
card:
--- license: mit multilinguality: - multilingual source_datasets: - original task_categories: - text-classification - token-classification - question-answering - summarization - text-generation task_ids: - sentiment-analysis - topic-classification - named-entity-recognition - language-modeling - text-scoring - multi-class-classification - multi-label-classification - extractive-qa - news-articles-summarization --- # Bittensor Subnet 13 X (Twitter) Dataset <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> ## Dataset Description - **Repository:** StormKing99/x_dataset_55139 - **Subnet:** Bittensor Subnet 13 - **Miner Hotkey:** 5E4y9kJmMS6XaitQbdhfBRkUGEvCCD6rW32iwj3dm4NiQjbb ### Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe). ### Supported Tasks The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example: - Sentiment Analysis - Trend Detection - Content Analysis - User Behavior Modeling ### Languages Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation. ## Dataset Structure ### Data Instances Each instance represents a single tweet with the following fields: ### Data Fields - `text` (string): The main content of the tweet. - `label` (string): Sentiment or topic category of the tweet. - `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present. - `datetime` (string): The date when the tweet was posted. - `username_encoded` (string): An encoded version of the username to maintain user privacy. - `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present. ### Data Splits This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp. ## Dataset Creation ### Source Data Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines. ### Personal and Sensitive Information All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information. ## Considerations for Using the Data ### Social Impact and Biases Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population. ### Limitations - Data quality may vary due to the decentralized nature of collection and preprocessing. 
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms. - Temporal biases may exist due to real-time collection methods. - The dataset is limited to public tweets and does not include private accounts or direct messages. - Not all tweets contain hashtags or URLs. ## Additional Information ### Licensing Information The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use. ### Citation Information If you use this dataset in your research, please cite it as follows: ``` @misc{StormKing992025datauniversex_dataset_55139, title={The Data Universe Datasets: The finest collection of social media data the web has to offer}, author={StormKing99}, year={2025}, url={https://huggingface.co/datasets/StormKing99/x_dataset_55139}, } ``` ### Contributions To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms. ## Dataset Statistics [This section is automatically updated] - **Total Instances:** 45546252 - **Date Range:** 2025-01-23T00:00:00Z to 2025-02-12T00:00:00Z - **Last Updated:** 2025-02-18T21:02:59Z ### Data Distribution - Tweets with hashtags: 42.66% - Tweets without hashtags: 57.34% ### Top 10 Hashtags For full statistics, please refer to the `stats.json` file in the repository. | Rank | Topic | Total Count | Percentage | |------|-------|-------------|-------------| | 1 | NULL | 26117517 | 57.34% | | 2 | #riyadh | 286662 | 0.63% | | 3 | #zelena | 252537 | 0.55% | | 4 | #tiktok | 184851 | 0.41% | | 5 | #ad | 101282 | 0.22% | | 6 | #bbb25 | 99746 | 0.22% | | 7 | #theheartkillersep11 | 67143 | 0.15% | | 8 | #transferlerlebirliktezafere | 64621 | 0.14% | | 9 | #bbmzansi | 61074 | 0.13% | | 10 | #แจกจริง | 55533 | 0.12% | ## Update History | Date | New Instances | Total Instances | |------|---------------|-----------------| | 2025-01-29T01:07:37Z | 399713 | 399713 | | 2025-02-01T13:11:17Z | 11997555 | 12397268 | | 2025-02-05T01:15:13Z | 10941377 | 23338645 | | 2025-02-08T13:19:17Z | 10005707 | 33344352 | | 2025-02-12T01:24:07Z | 10705327 | 44049679 | | 2025-02-18T06:01:52Z | 696224 | 44745903 | | 2025-02-18T21:02:59Z | 800349 | 45546252 |
datasetId: davanstrien/dataset_cards_with_metadata
author: davanstrien
last_modified: 2025-06-03T10:20:08Z
downloads: 422
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-04-17T09:48:47Z
trending_score: null
card:
--- dataset_info: features: - name: datasetId dtype: large_string - name: author dtype: large_string - name: last_modified dtype: large_string - name: downloads dtype: int64 - name: likes dtype: int64 - name: tags large_list: large_string - name: task_categories large_list: large_string - name: createdAt dtype: large_string - name: trending_score dtype: float64 - name: card dtype: large_string splits: - name: train num_bytes: 110530629 num_examples: 32315 download_size: 30124925 dataset_size: 110530629 configs: - config_name: default data_files: - split: train path: data/train-* ---
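This is the dataset rendered by the viewer at the top of this page; a hedged sketch of loading it and filtering cards by tag, assuming the standard `datasets` API and the schema in the YAML above:

```python
from datasets import load_dataset

ds = load_dataset("davanstrien/dataset_cards_with_metadata", split="train")

# Keep only robotics-tagged datasets and sort by downloads (both columns appear in the schema above).
robotics = ds.filter(lambda row: "task_categories:robotics" in row["tags"])
robotics = robotics.sort("downloads", reverse=True)

for row in robotics.select(range(min(5, len(robotics)))):
    print(row["datasetId"], row["downloads"], row["last_modified"])
```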
datasetId: daniel-dona/sparql-dataset-reasoning-test3
author: daniel-dona
last_modified: 2025-06-03T10:13:56Z
downloads: 0
likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-03T10:13:51Z
trending_score: null
card:
--- dataset_info: features: - name: qid dtype: string - name: lang dtype: string - name: nlq dtype: string - name: classes sequence: string - name: properties sequence: string - name: features sequence: string - name: sparql dtype: string - name: reasoning dtype: string splits: - name: train num_bytes: 11712015 num_examples: 2500 download_size: 961054 dataset_size: 11712015 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: athrv/Embedded13
author: athrv
last_modified: 2025-06-03T10:11:56Z
downloads: 131
likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-05-22T09:43:34Z
trending_score: null
card:
--- dataset_info: features: - name: ID dtype: string - name: Language dtype: string - name: Repository Name dtype: string - name: Base File Name dtype: string - name: File Paths dtype: string - name: Code1 dtype: string - name: Unit Test (.cpp file) dtype: string - name: Category dtype: string - name: CMakeLists dtype: string - name: Total Lines dtype: int64 splits: - name: train num_bytes: 8504 num_examples: 1 download_size: 41534 dataset_size: 8504 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "Embedded13" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
datasetId: athrv/Embedded12
author: athrv
last_modified: 2025-06-03T10:10:47Z
downloads: 88
likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-05-22T09:17:39Z
trending_score: null
card:
--- dataset_info: features: - name: ID dtype: string - name: Language dtype: string - name: Repository Name dtype: string - name: Base File Name dtype: string - name: File Paths dtype: string - name: Code1 dtype: string - name: Unit Test (.cpp file) dtype: string - name: Category dtype: string - name: CMakeLists dtype: string - name: Total Lines dtype: int64 splits: - name: train num_bytes: 48719 num_examples: 1 download_size: 22445 dataset_size: 48719 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "Embedded12" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
datasetId: love3165303/so100_train1
author: love3165303
last_modified: 2025-06-03T09:39:10Z
downloads: 0
likes: 0
tags: [ "task_categories:robotics", "size_categories:n<1K", "modality:video", "library:datasets", "library:mlcroissant", "region:us", "phosphobot", "so100", "phospho-dk" ]
task_categories: [ "robotics" ]
createdAt: 2025-06-03T09:37:03Z
trending_score: null
card:
--- tags: - phosphobot - so100 - phospho-dk task_categories: - robotics --- # so100_train1 **This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).** This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
datasetId: gisako/multiwoz-chat
author: gisako
last_modified: 2025-06-03T09:28:19Z
downloads: 0
likes: 0
tags: [ "task_categories:text-generation", "language:en", "license:mit", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "region:us" ]
task_categories: [ "text-generation" ]
createdAt: 2025-06-03T09:23:31Z
trending_score: null
card:
--- license: mit task_categories: - text-generation language: - en pretty_name: multiwoz-chat-llama-gpt size_categories: - 1K<n<10K ---
datasetId: ustc-zyt/time-r1-data
author: ustc-zyt
last_modified: 2025-06-03T09:13:46Z
downloads: 0
likes: 0
tags: [ "task_categories:time-series-forecasting", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "time-series-forecasting" ]
createdAt: 2025-06-03T09:01:15Z
trending_score: null
card:
--- license: apache-2.0 task_categories: - time-series-forecasting language: - en pretty_name: a size_categories: - 1K<n<10K --- # 📊 Time-R1 RL Training Dataset This dataset is used in the **Reinforcement Learning (RL)** phase of the paper: **"Time Series Forecasting as Reasoning: A Slow-Thinking Approach with Reinforced LLMs"**. --- ## 📁 Data Format Overview The dataset is stored in **Parquet** format. Each sample includes: | Field | Type | Description | | -------------- | ------------ | ---------------------------------------------------------------------------- | | `prompt` | `list[dict]` | Natural language instruction including 96-step historical input sequence. | | `reward_model` | `dict` | Contains the `ground_truth` field – the target values for the next 96 steps. | | `data_source` | `string` | Dataset name (e.g., `"ETTh1"`). | | `ability` | `string` | Task type – here always `"TimeSeriesForecasting"`. | | `extra_info` | `dict` | Metadata including sample `index` and data `split` (e.g., `"train"`). | --- ## 🧾 Example Sample ```json { "prompt": [ { "content": "Here is the High Useful Load data of the transformer. (dataset is ETTh1)..." } ], "data_source": "ETTh1", "ability": "TimeSeriesForecasting", "reward_model": { "ground_truth": "date HUFL\n2016-07-05 00:00:00 11.989\n2016-07-05 01:00:00 12.525\n..." }, "extra_info": { "index": 0, "split": "train" } } ``` Each prompt contains structured temporal input (96 steps) in a language-style format. The `ground_truth` contains corresponding 96-step future targets with timestamps and values.
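Since `ground_truth` is stored as plain text inside the `reward_model` dict, here is a minimal parsing sketch, assuming the standard `datasets`/`pandas` APIs, a single `train` split, and the line format shown in the example sample above:

```python
import pandas as pd
from datasets import load_dataset

# Assumption: the default config exposes a single "train" split.
ds = load_dataset("ustc-zyt/time-r1-data", split="train")
sample = ds[0]

prompt_text = sample["prompt"][0]["content"]       # instruction plus the 96-step history
gt_text = sample["reward_model"]["ground_truth"]   # 96 future timestamped values, stored as text

# Assumption: the ground-truth block is a one-line header followed by
# "<timestamp> <value>" lines, as in the card's example sample.
lines = gt_text.strip().splitlines()
value_col = lines[0].split()[-1]                   # e.g. "HUFL"
rows = [line.rsplit(None, 1) for line in lines[1:]]
targets = pd.DataFrame(rows, columns=["date", value_col]).astype({value_col: float})

print(sample["data_source"], sample["ability"], len(targets))
print(targets.head())
```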
datasetId: Nitish906099/nitish
author: Nitish906099
last_modified: 2025-06-03T09:07:57Z
downloads: 0
likes: 0
tags: [ "license:intel-research", "region:us" ]
task_categories: []
createdAt: 2025-06-03T09:07:57Z
trending_score: null
card:
--- license: intel-research ---
datasetId: Nitish906099/dream11-eng-wi-___
author: Nitish906099
last_modified: 2025-06-03T08:58:32Z
downloads: 0
likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-03T08:58:31Z
trending_score: null
card:
--- dataset_info: features: - name: Player dtype: string - name: Avg Fpts dtype: float64 - name: Runs dtype: int64 - name: WK dtype: int64 - name: RR1 dtype: int64 - name: RR2 dtype: int64 - name: RR3 dtype: int64 - name: RR4 dtype: int64 - name: RR5 dtype: int64 - name: RW1 dtype: int64 - name: RW2 dtype: int64 - name: RW3 dtype: int64 - name: RW4 dtype: int64 - name: RW5 dtype: int64 splits: - name: train num_bytes: 622 num_examples: 5 download_size: 5893 dataset_size: 622 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "dream11-eng-wi-___" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
datasetId: EQX55/voice_test
author: EQX55
last_modified: 2025-06-03T08:28:55Z
downloads: 0
likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-03T08:28:51Z
trending_score: null
card:
--- dataset_info: features: - name: audio dtype: audio - name: text dtype: string splits: - name: train num_bytes: 19030645.0 num_examples: 61 download_size: 18992096 dataset_size: 19030645.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: LLM360/guru_RL
author: LLM360
last_modified: 2025-06-03T08:26:44Z
downloads: 0
likes: 0
tags: [ "task_categories:text2text-generation", "task_categories:text-generation", "task_categories:table-question-answering", "task_categories:question-answering", "language:aa", "license:cc-by-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "code", "math", "reasoning", "logic", "tabular" ]
task_categories: [ "text2text-generation", "text-generation", "table-question-answering", "question-answering" ]
createdAt: 2025-06-03T04:39:38Z
trending_score: null
card:
--- license: cc-by-2.0 task_categories: - text2text-generation - text-generation - table-question-answering - question-answering language: - aa tags: - code - math - reasoning - logic - tabular pretty_name: >- GURU: Incentivizing General Reasoning Skills with a Curated Open Reinforcement Learning Dataset size_categories: - 10K<n<100K --- # GURU: Incentivizing General Reasoning Skills with a Curated Open Reinforcement Learning Dataset ## Dataset Description **GURU** is a meticulously curated cross-domain dataset specifically designed for training large language models on complex reasoning tasks. The dataset contains 91.9K high-quality samples spanning six diverse reasoning-intensive domains, processed through a comprehensive five-stage curation pipeline to ensure both domain diversity and reward verifiability. ### Dataset Summary GURU addresses the critical need for robust cross-domain reasoning capabilities in LLMs by providing a carefully balanced collection of problems across mathematics, coding, science, logic, simulation, and tabular reasoning. Each sample has been filtered for quality and equipped with automated verification mechanisms, making it ideal for reinforcement learning applications. ### Key Features - **Cross-Domain Coverage**: Six distinct reasoning domains ensuring comprehensive skill development - **Quality Assurance**: Five-stage curation pipeline with deduplication and heuristic filtering - **Automated Verification**: Domain-specific reward functions for reliable evaluation - **Difficulty Calibration**: Samples filtered to maintain appropriate challenge levels - **RL-Ready**: Binary reward system compatible with reinforcement learning frameworks ## Dataset Structure ### Domains and Statistics | Domain | Datasets Included | Final Sample Count | Key Focus Areas | |--------|------------------|-------------------|-----------------| | **Math** | OR1, DAPO, DeepScaler | 54.4K | Competition problems, symbolic reasoning | | **Code** | LeetCode, TACO-Verified, PrimeIntellect, LiveCodeBench | 18.1K | Programming challenges, algorithm design | | **Science** | WebInstruct-Verified | 3.6K | University/PhD-level physics, chemistry, biology | | **Logic** | ARC-AGI, BARC, Custom puzzles | 6.3K | Symbolic reasoning, constraint satisfaction | | **Simulation** | Code I/O (PyEdu) | 3.7K | Code behavior prediction without execution | | **Tabular** | HiTab, MultiHierTT | 6.1K | Single and multi-table reasoning | **Total Samples**: 91.9K (filtered from 684.3K raw samples) ## Citation If you use this dataset in your research, please cite: ```bibtex ``` *This dataset card follows the Hugging Face dataset card template and provides comprehensive information about the GURU dataset structure, creation process, and intended use cases.*
datasetId: TessWOfficial/pixtral_finetune_ham10000
author: TessWOfficial
last_modified: 2025-06-03T08:15:12Z
downloads: 0
likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-02T12:37:44Z
trending_score: null
card:
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 205974104.0 num_examples: 725 download_size: 205962930 dataset_size: 205974104.0 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "pixtral_finetune_ham10000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
datasetId: pch11/final02
author: pch11
last_modified: 2025-06-03T08:03:25Z
downloads: 0
likes: 0
tags: [ "license:apache-2.0", "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-03T07:49:45Z
trending_score: null
card:
--- license: apache-2.0 dataset_info: features: - name: file_name dtype: string - name: image dtype: image - name: caption_flux dtype: string - name: caption_sd3 dtype: string - name: caption_sdxl dtype: string - name: caption_sd15 dtype: string splits: - name: train num_bytes: 8768080.0 num_examples: 47 download_size: 8745947 dataset_size: 8768080.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: mteb/results
author: mteb
last_modified: 2025-06-03T07:58:16Z
downloads: 10,416
likes: 1
tags: [ "benchmark:mteb", "region:us" ]
task_categories: []
createdAt: 2024-07-06T20:19:19Z
trending_score: null
card:
--- benchmark: mteb type: evaluation submission_name: MTEB --- > [!NOTE] > Previously, it was possible to submit model results to MTEB by adding them to the metadata of the model card on huggingface. However, this is no longer possible as we want to ensure that we can match the results with the model implementation. If you want to add your model, please follow the [guide](https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md) on how to do so. This repository contains the results of the embedding benchmark evaluated using the package `mteb`. | Reference | | | ------------------- | ---------------------------------------------------------------------------------------- | | 🦾 **[Leaderboard]** | An up to date leaderboard of embedding models | | 📚 **[mteb]** | Guides and instructions on how to use `mteb`, including running, submitting scores, etc. | | 🙋 **[Questions]** | Questions about the results | | 🙋 **[Issues]** | Issues or bugs you have found | [Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard [mteb]: https://github.com/embeddings-benchmark/mteb [Questions]: https://github.com/embeddings-benchmark/mteb/discussions [Issues]: https://github.com/embeddings-benchmark/mteb/issues
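The card above describes this repository as raw result files consumed by the `mteb` tooling rather than a conventional dataset; a hedged sketch for mirroring the files locally with `huggingface_hub` follows (the assumption that results are stored as per-model JSON files is not confirmed by the card):

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the raw result files; repo_type must be "dataset" for this repository.
local_dir = snapshot_download(repo_id="mteb/results", repo_type="dataset")

# Assumption: results are stored as JSON files; list a few of them here.
for path in sorted(Path(local_dir).rglob("*.json"))[:5]:
    print(path.relative_to(local_dir))
```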
datasetId: macrocosm-os/macrobench-bittensor-01
author: macrocosm-os
last_modified: 2025-06-03T07:05:31Z
downloads: 2,634
likes: 2
tags: [ "license:mit", "region:us" ]
task_categories: []
createdAt: 2025-02-05T11:18:22Z
trending_score: null
card:
--- configs: - config_name: '20241001' data_files: - path: 20241001/miner_evaluations.parquet split: '20241001' - config_name: '20241002' data_files: - path: 20241002/miner_evaluations.parquet split: '20241002' - config_name: '20241003' data_files: - path: 20241003/miner_evaluations.parquet split: '20241003' - config_name: '20241004' data_files: - path: 20241004/miner_evaluations.parquet split: '20241004' - config_name: '20241005' data_files: - path: 20241005/miner_evaluations.parquet split: '20241005' - config_name: '20241006' data_files: - path: 20241006/miner_evaluations.parquet split: '20241006' - config_name: '20241007' data_files: - path: 20241007/miner_evaluations.parquet split: '20241007' - config_name: '20241008' data_files: - path: 20241008/miner_evaluations.parquet split: '20241008' - config_name: '20241009' data_files: - path: 20241009/miner_evaluations.parquet split: '20241009' - config_name: '20241010' data_files: - path: 20241010/miner_evaluations.parquet split: '20241010' - config_name: '20241011' data_files: - path: 20241011/miner_evaluations.parquet split: '20241011' - config_name: '20241012' data_files: - path: 20241012/miner_evaluations.parquet split: '20241012' - config_name: '20241013' data_files: - path: 20241013/miner_evaluations.parquet split: '20241013' - config_name: '20241014' data_files: - path: 20241014/miner_evaluations.parquet split: '20241014' - config_name: '20241015' data_files: - path: 20241015/miner_evaluations.parquet split: '20241015' - config_name: '20241016' data_files: - path: 20241016/miner_evaluations.parquet split: '20241016' - config_name: '20241017' data_files: - path: 20241017/miner_evaluations.parquet split: '20241017' - config_name: '20241018' data_files: - path: 20241018/miner_evaluations.parquet split: '20241018' - config_name: '20241019' data_files: - path: 20241019/miner_evaluations.parquet split: '20241019' - config_name: '20241020' data_files: - path: 20241020/miner_evaluations.parquet split: '20241020' - config_name: '20241021' data_files: - path: 20241021/miner_evaluations.parquet split: '20241021' - config_name: '20241022' data_files: - path: 20241022/miner_evaluations.parquet split: '20241022' - config_name: '20241023' data_files: - path: 20241023/miner_evaluations.parquet split: '20241023' - config_name: '20241024' data_files: - path: 20241024/miner_evaluations.parquet split: '20241024' - config_name: '20241025' data_files: - path: 20241025/miner_evaluations.parquet split: '20241025' - config_name: '20241026' data_files: - path: 20241026/miner_evaluations.parquet split: '20241026' - config_name: '20241027' data_files: - path: 20241027/miner_evaluations.parquet split: '20241027' - config_name: '20241028' data_files: - path: 20241028/miner_evaluations.parquet split: '20241028' - config_name: '20241029' data_files: - path: 20241029/miner_evaluations.parquet split: '20241029' - config_name: '20241030' data_files: - path: 20241030/miner_evaluations.parquet split: '20241030' - config_name: '20241031' data_files: - path: 20241031/miner_evaluations.parquet split: '20241031' - config_name: '20241101' data_files: - path: 20241101/miner_evaluations.parquet split: '20241101' - config_name: '20241102' data_files: - path: 20241102/miner_evaluations.parquet split: '20241102' - config_name: '20241103' data_files: - path: 20241103/miner_evaluations.parquet split: '20241103' - config_name: '20241104' data_files: - path: 20241104/miner_evaluations.parquet split: '20241104' - config_name: '20241105' data_files: - path: 
20241105/miner_evaluations.parquet split: '20241105' - config_name: '20241106' data_files: - path: 20241106/miner_evaluations.parquet split: '20241106' - config_name: '20241107' data_files: - path: 20241107/miner_evaluations.parquet split: '20241107' - config_name: '20241108' data_files: - path: 20241108/miner_evaluations.parquet split: '20241108' - config_name: '20241109' data_files: - path: 20241109/miner_evaluations.parquet split: '20241109' - config_name: '20241110' data_files: - path: 20241110/miner_evaluations.parquet split: '20241110' - config_name: '20241111' data_files: - path: 20241111/miner_evaluations.parquet split: '20241111' - config_name: '20241112' data_files: - path: 20241112/miner_evaluations.parquet split: '20241112' - config_name: '20241113' data_files: - path: 20241113/miner_evaluations.parquet split: '20241113' - config_name: '20241114' data_files: - path: 20241114/miner_evaluations.parquet split: '20241114' - config_name: '20241115' data_files: - path: 20241115/miner_evaluations.parquet split: '20241115' - config_name: '20241116' data_files: - path: 20241116/miner_evaluations.parquet split: '20241116' - config_name: '20241117' data_files: - path: 20241117/miner_evaluations.parquet split: '20241117' - config_name: '20241118' data_files: - path: 20241118/miner_evaluations.parquet split: '20241118' - config_name: '20241119' data_files: - path: 20241119/miner_evaluations.parquet split: '20241119' - config_name: '20241120' data_files: - path: 20241120/miner_evaluations.parquet split: '20241120' - config_name: '20241121' data_files: - path: 20241121/miner_evaluations.parquet split: '20241121' - config_name: '20241122' data_files: - path: 20241122/miner_evaluations.parquet split: '20241122' - config_name: '20241123' data_files: - path: 20241123/miner_evaluations.parquet split: '20241123' - config_name: '20241124' data_files: - path: 20241124/miner_evaluations.parquet split: '20241124' - config_name: '20241125' data_files: - path: 20241125/miner_evaluations.parquet split: '20241125' - config_name: '20241126' data_files: - path: 20241126/miner_evaluations.parquet split: '20241126' - config_name: '20241127' data_files: - path: 20241127/miner_evaluations.parquet split: '20241127' - config_name: '20241128' data_files: - path: 20241128/miner_evaluations.parquet split: '20241128' - config_name: '20241129' data_files: - path: 20241129/miner_evaluations.parquet split: '20241129' - config_name: '20241130' data_files: - path: 20241130/miner_evaluations.parquet split: '20241130' - config_name: '20241201' data_files: - path: 20241201/miner_evaluations.parquet split: '20241201' - config_name: '20241202' data_files: - path: 20241202/miner_evaluations.parquet split: '20241202' - config_name: '20241203' data_files: - path: 20241203/miner_evaluations.parquet split: '20241203' - config_name: '20241204' data_files: - path: 20241204/miner_evaluations.parquet split: '20241204' - config_name: '20241205' data_files: - path: 20241205/miner_evaluations.parquet split: '20241205' - config_name: '20241206' data_files: - path: 20241206/miner_evaluations.parquet split: '20241206' - config_name: '20241207' data_files: - path: 20241207/miner_evaluations.parquet split: '20241207' - config_name: '20241208' data_files: - path: 20241208/miner_evaluations.parquet split: '20241208' - config_name: '20241209' data_files: - path: 20241209/miner_evaluations.parquet split: '20241209' - config_name: '20241210' data_files: - path: 20241210/miner_evaluations.parquet split: '20241210' - config_name: '20241211' 
data_files: - path: 20241211/miner_evaluations.parquet split: '20241211' - config_name: '20241212' data_files: - path: 20241212/miner_evaluations.parquet split: '20241212' - config_name: '20241213' data_files: - path: 20241213/miner_evaluations.parquet split: '20241213' - config_name: '20241214' data_files: - path: 20241214/miner_evaluations.parquet split: '20241214' - config_name: '20241215' data_files: - path: 20241215/miner_evaluations.parquet split: '20241215' - config_name: '20241216' data_files: - path: 20241216/miner_evaluations.parquet split: '20241216' - config_name: '20241217' data_files: - path: 20241217/miner_evaluations.parquet split: '20241217' - config_name: '20241218' data_files: - path: 20241218/miner_evaluations.parquet split: '20241218' - config_name: '20241219' data_files: - path: 20241219/miner_evaluations.parquet split: '20241219' - config_name: '20241220' data_files: - path: 20241220/miner_evaluations.parquet split: '20241220' - config_name: '20241221' data_files: - path: 20241221/miner_evaluations.parquet split: '20241221' - config_name: '20241222' data_files: - path: 20241222/miner_evaluations.parquet split: '20241222' - config_name: '20241223' data_files: - path: 20241223/miner_evaluations.parquet split: '20241223' - config_name: '20241224' data_files: - path: 20241224/miner_evaluations.parquet split: '20241224' - config_name: '20241225' data_files: - path: 20241225/miner_evaluations.parquet split: '20241225' - config_name: '20241226' data_files: - path: 20241226/miner_evaluations.parquet split: '20241226' - config_name: '20241227' data_files: - path: 20241227/miner_evaluations.parquet split: '20241227' - config_name: '20241228' data_files: - path: 20241228/miner_evaluations.parquet split: '20241228' - config_name: '20241229' data_files: - path: 20241229/miner_evaluations.parquet split: '20241229' - config_name: '20241230' data_files: - path: 20241230/miner_evaluations.parquet split: '20241230' - config_name: '20241231' data_files: - path: 20241231/miner_evaluations.parquet split: '20241231' - config_name: '20250101' data_files: - path: 20250101/miner_evaluations.parquet split: '20250101' - config_name: '20250102' data_files: - path: 20250102/miner_evaluations.parquet split: '20250102' - config_name: '20250103' data_files: - path: 20250103/miner_evaluations.parquet split: '20250103' - config_name: '20250104' data_files: - path: 20250104/miner_evaluations.parquet split: '20250104' - config_name: '20250105' data_files: - path: 20250105/miner_evaluations.parquet split: '20250105' - config_name: '20250106' data_files: - path: 20250106/miner_evaluations.parquet split: '20250106' - config_name: '20250107' data_files: - path: 20250107/miner_evaluations.parquet split: '20250107' - config_name: '20250108' data_files: - path: 20250108/miner_evaluations.parquet split: '20250108' - config_name: '20250109' data_files: - path: 20250109/miner_evaluations.parquet split: '20250109' - config_name: '20250110' data_files: - path: 20250110/miner_evaluations.parquet split: '20250110' - config_name: '20250111' data_files: - path: 20250111/miner_evaluations.parquet split: '20250111' - config_name: '20250112' data_files: - path: 20250112/miner_evaluations.parquet split: '20250112' - config_name: '20250113' data_files: - path: 20250113/miner_evaluations.parquet split: '20250113' - config_name: '20250114' data_files: - path: 20250114/miner_evaluations.parquet split: '20250114' - config_name: '20250115' data_files: - path: 20250115/miner_evaluations.parquet split: '20250115' - config_name: 
'20250116' data_files: - path: 20250116/miner_evaluations.parquet split: '20250116' - config_name: '20250117' data_files: - path: 20250117/miner_evaluations.parquet split: '20250117' - config_name: '20250118' data_files: - path: 20250118/miner_evaluations.parquet split: '20250118' - config_name: '20250119' data_files: - path: 20250119/miner_evaluations.parquet split: '20250119' - config_name: '20250120' data_files: - path: 20250120/miner_evaluations.parquet split: '20250120' - config_name: '20250121' data_files: - path: 20250121/miner_evaluations.parquet split: '20250121' - config_name: '20250122' data_files: - path: 20250122/miner_evaluations.parquet split: '20250122' - config_name: '20250123' data_files: - path: 20250123/miner_evaluations.parquet split: '20250123' - config_name: '20250124' data_files: - path: 20250124/miner_evaluations.parquet split: '20250124' - config_name: '20250125' data_files: - path: 20250125/miner_evaluations.parquet split: '20250125' - config_name: '20250126' data_files: - path: 20250126/miner_evaluations.parquet split: '20250126' - config_name: '20250127' data_files: - path: 20250127/miner_evaluations.parquet split: '20250127' - config_name: '20250128' data_files: - path: 20250128/miner_evaluations.parquet split: '20250128' - config_name: '20250129' data_files: - path: 20250129/miner_evaluations.parquet split: '20250129' - config_name: '20250130' data_files: - path: 20250130/miner_evaluations.parquet split: '20250130' - config_name: '20250131' data_files: - path: 20250131/miner_evaluations.parquet split: '20250131' - config_name: '20250201' data_files: - path: 20250201/miner_evaluations.parquet split: '20250201' - config_name: '20250202' data_files: - path: 20250202/miner_evaluations.parquet split: '20250202' - config_name: '20250203' data_files: - path: 20250203/miner_evaluations.parquet split: '20250203' - config_name: '20250204' data_files: - path: 20250204/miner_evaluations.parquet split: '20250204' - config_name: '20250205' data_files: - path: 20250205/miner_evaluations.parquet split: '20250205' - config_name: '20250206' data_files: - path: 20250206/miner_evaluations.parquet split: '20250206' - config_name: '20250207' data_files: - path: 20250207/miner_evaluations.parquet split: '20250207' - config_name: '20250208' data_files: - path: 20250208/miner_evaluations.parquet split: '20250208' - config_name: '20250209' data_files: - path: 20250209/miner_evaluations.parquet split: '20250209' - config_name: '20250210' data_files: - path: 20250210/miner_evaluations.parquet split: '20250210' - config_name: '20250211' data_files: - path: 20250211/miner_evaluations.parquet split: '20250211' - config_name: '20250212' data_files: - path: 20250212/miner_evaluations.parquet split: '20250212' - config_name: '20250213' data_files: - path: 20250213/miner_evaluations.parquet split: '20250213' - config_name: '20250214' data_files: - path: 20250214/miner_evaluations.parquet split: '20250214' - config_name: '20250215' data_files: - path: 20250215/miner_evaluations.parquet split: '20250215' - config_name: '20250216' data_files: - path: 20250216/miner_evaluations.parquet split: '20250216' - config_name: '20250217' data_files: - path: 20250217/miner_evaluations.parquet split: '20250217' - config_name: '20250218' data_files: - path: 20250218/miner_evaluations.parquet split: '20250218' - config_name: '20250219' data_files: - path: 20250219/miner_evaluations.parquet split: '20250219' - config_name: '20250220' data_files: - path: 20250220/miner_evaluations.parquet split: '20250220' - 
config_name: '20250221' data_files: - path: 20250221/miner_evaluations.parquet split: '20250221' - config_name: '20250222' data_files: - path: 20250222/miner_evaluations.parquet split: '20250222' - config_name: '20250223' data_files: - path: 20250223/miner_evaluations.parquet split: '20250223' - config_name: '20250224' data_files: - path: 20250224/miner_evaluations.parquet split: '20250224' - config_name: '20250225' data_files: - path: 20250225/miner_evaluations.parquet split: '20250225' - config_name: '20250226' data_files: - path: 20250226/miner_evaluations.parquet split: '20250226' - config_name: '20250227' data_files: - path: 20250227/miner_evaluations.parquet split: '20250227' - config_name: '20250228' data_files: - path: 20250228/miner_evaluations.parquet split: '20250228' - config_name: '20250301' data_files: - path: 20250301/miner_evaluations.parquet split: '20250301' - config_name: '20250302' data_files: - path: 20250302/miner_evaluations.parquet split: '20250302' - config_name: '20250303' data_files: - path: 20250303/miner_evaluations.parquet split: '20250303' - config_name: '20250304' data_files: - path: 20250304/miner_evaluations.parquet split: '20250304' - config_name: '20250305' data_files: - path: 20250305/miner_evaluations.parquet split: '20250305' - config_name: '20250306' data_files: - path: 20250306/miner_evaluations.parquet split: '20250306' - config_name: '20250307' data_files: - path: 20250307/miner_evaluations.parquet split: '20250307' - config_name: '20250308' data_files: - path: 20250308/miner_evaluations.parquet split: '20250308' - config_name: '20250309' data_files: - path: 20250309/miner_evaluations.parquet split: '20250309' - config_name: '20250310' data_files: - path: 20250310/miner_evaluations.parquet split: '20250310' - config_name: '20250311' data_files: - path: 20250311/miner_evaluations.parquet split: '20250311' - config_name: '20250312' data_files: - path: 20250312/miner_evaluations.parquet split: '20250312' - config_name: '20250313' data_files: - path: 20250313/miner_evaluations.parquet split: '20250313' - config_name: '20250314' data_files: - path: 20250314/miner_evaluations.parquet split: '20250314' - config_name: '20250315' data_files: - path: 20250315/miner_evaluations.parquet split: '20250315' - config_name: '20250316' data_files: - path: 20250316/miner_evaluations.parquet split: '20250316' - config_name: '20250317' data_files: - path: 20250317/miner_evaluations.parquet split: '20250317' - config_name: '20250318' data_files: - path: 20250318/miner_evaluations.parquet split: '20250318' - config_name: '20250319' data_files: - path: 20250319/miner_evaluations.parquet split: '20250319' - config_name: '20250320' data_files: - path: 20250320/miner_evaluations.parquet split: '20250320' - config_name: '20250321' data_files: - path: 20250321/miner_evaluations.parquet split: '20250321' - config_name: '20250322' data_files: - path: 20250322/miner_evaluations.parquet split: '20250322' - config_name: '20250323' data_files: - path: 20250323/miner_evaluations.parquet split: '20250323' - config_name: '20250324' data_files: - path: 20250324/miner_evaluations.parquet split: '20250324' - config_name: '20250325' data_files: - path: 20250325/miner_evaluations.parquet split: '20250325' - config_name: '20250326' data_files: - path: 20250326/miner_evaluations.parquet split: '20250326' - config_name: '20250327' data_files: - path: 20250327/miner_evaluations.parquet split: '20250327' - config_name: '20250328' data_files: - path: 20250328/miner_evaluations.parquet split: 
'20250328' - config_name: '20250329' data_files: - path: 20250329/miner_evaluations.parquet split: '20250329' - config_name: '20250330' data_files: - path: 20250330/miner_evaluations.parquet split: '20250330' - config_name: '20250331' data_files: - path: 20250331/miner_evaluations.parquet split: '20250331' - config_name: '20250401' data_files: - path: 20250401/miner_evaluations.parquet split: '20250401' - config_name: '20250402' data_files: - path: 20250402/miner_evaluations.parquet split: '20250402' - config_name: '20250403' data_files: - path: 20250403/miner_evaluations.parquet split: '20250403' - config_name: '20250404' data_files: - path: 20250404/miner_evaluations.parquet split: '20250404' - config_name: '20250405' data_files: - path: 20250405/miner_evaluations.parquet split: '20250405' - config_name: '20250406' data_files: - path: 20250406/miner_evaluations.parquet split: '20250406' - config_name: '20250407' data_files: - path: 20250407/miner_evaluations.parquet split: '20250407' - config_name: '20250408' data_files: - path: 20250408/miner_evaluations.parquet split: '20250408' - config_name: '20250409' data_files: - path: 20250409/miner_evaluations.parquet split: '20250409' - config_name: '20250410' data_files: - path: 20250410/miner_evaluations.parquet split: '20250410' - config_name: '20250411' data_files: - path: 20250411/miner_evaluations.parquet split: '20250411' - config_name: '20250412' data_files: - path: 20250412/miner_evaluations.parquet split: '20250412' - config_name: '20250413' data_files: - path: 20250413/miner_evaluations.parquet split: '20250413' - config_name: '20250414' data_files: - path: 20250414/miner_evaluations.parquet split: '20250414' - config_name: '20250415' data_files: - path: 20250415/miner_evaluations.parquet split: '20250415' - config_name: '20250416' data_files: - path: 20250416/miner_evaluations.parquet split: '20250416' - config_name: '20250417' data_files: - path: 20250417/miner_evaluations.parquet split: '20250417' - config_name: '20250418' data_files: - path: 20250418/miner_evaluations.parquet split: '20250418' - config_name: '20250419' data_files: - path: 20250419/miner_evaluations.parquet split: '20250419' - config_name: '20250420' data_files: - path: 20250420/miner_evaluations.parquet split: '20250420' - config_name: '20250421' data_files: - path: 20250421/miner_evaluations.parquet split: '20250421' - config_name: '20250422' data_files: - path: 20250422/miner_evaluations.parquet split: '20250422' - config_name: '20250423' data_files: - path: 20250423/miner_evaluations.parquet split: '20250423' - config_name: '20250424' data_files: - path: 20250424/miner_evaluations.parquet split: '20250424' - config_name: '20250425' data_files: - path: 20250425/miner_evaluations.parquet split: '20250425' - config_name: '20250426' data_files: - path: 20250426/miner_evaluations.parquet split: '20250426' - config_name: '20250427' data_files: - path: 20250427/miner_evaluations.parquet split: '20250427' - config_name: '20250428' data_files: - path: 20250428/miner_evaluations.parquet split: '20250428' - config_name: '20250429' data_files: - path: 20250429/miner_evaluations.parquet split: '20250429' - config_name: '20250430' data_files: - path: 20250430/miner_evaluations.parquet split: '20250430' - config_name: '20250501' data_files: - path: 20250501/miner_evaluations.parquet split: '20250501' - config_name: '20250502' data_files: - path: 20250502/miner_evaluations.parquet split: '20250502' - config_name: '20250503' data_files: - path: 
20250503/miner_evaluations.parquet split: '20250503' - config_name: '20250504' data_files: - path: 20250504/miner_evaluations.parquet split: '20250504' - config_name: '20250505' data_files: - path: 20250505/miner_evaluations.parquet split: '20250505' - config_name: '20250506' data_files: - path: 20250506/miner_evaluations.parquet split: '20250506' - config_name: '20250507' data_files: - path: 20250507/miner_evaluations.parquet split: '20250507' - config_name: '20250508' data_files: - path: 20250508/miner_evaluations.parquet split: '20250508' - config_name: '20250509' data_files: - path: 20250509/miner_evaluations.parquet split: '20250509' - config_name: '20250510' data_files: - path: 20250510/miner_evaluations.parquet split: '20250510' - config_name: '20250511' data_files: - path: 20250511/miner_evaluations.parquet split: '20250511' - config_name: '20250512' data_files: - path: 20250512/miner_evaluations.parquet split: '20250512' - config_name: '20250513' data_files: - path: 20250513/miner_evaluations.parquet split: '20250513' - config_name: '20250514' data_files: - path: 20250514/miner_evaluations.parquet split: '20250514' - config_name: '20250515' data_files: - path: 20250515/miner_evaluations.parquet split: '20250515' - config_name: '20250516' data_files: - path: 20250516/miner_evaluations.parquet split: '20250516' - config_name: '20250517' data_files: - path: 20250517/miner_evaluations.parquet split: '20250517' - config_name: '20250518' data_files: - path: 20250518/miner_evaluations.parquet split: '20250518' - config_name: '20250519' data_files: - path: 20250519/miner_evaluations.parquet split: '20250519' - config_name: '20250520' data_files: - path: 20250520/miner_evaluations.parquet split: '20250520' - config_name: '20250521' data_files: - path: 20250521/miner_evaluations.parquet split: '20250521' - config_name: '20250522' data_files: - path: 20250522/miner_evaluations.parquet split: '20250522' - config_name: '20250523' data_files: - path: 20250523/miner_evaluations.parquet split: '20250523' - config_name: '20250524' data_files: - path: 20250524/miner_evaluations.parquet split: '20250524' - config_name: '20250525' data_files: - path: 20250525/miner_evaluations.parquet split: '20250525' - config_name: '20250526' data_files: - path: 20250526/miner_evaluations.parquet split: '20250526' - config_name: '20250527' data_files: - path: 20250527/miner_evaluations.parquet split: '20250527' - config_name: '20250528' data_files: - path: 20250528/miner_evaluations.parquet split: '20250528' - config_name: '20250529' data_files: - path: 20250529/miner_evaluations.parquet split: '20250529' - config_name: '20250530' data_files: - path: 20250530/miner_evaluations.parquet split: '20250530' - config_name: '20250531' data_files: - path: 20250531/miner_evaluations.parquet split: '20250531' - config_name: '20250601' data_files: - path: 20250601/miner_evaluations.parquet split: '20250601' - config_name: '20250602' data_files: - path: 20250602/miner_evaluations.parquet split: '20250602' - config_name: '20250603' data_files: - path: 20250603/miner_evaluations.parquet split: '20250603' last_updated: '20250603' license: mit ---
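The frontmatter above registers one config per calendar day, each pairing a `YYYYMMDD` split with that day's `miner_evaluations.parquet`. A minimal sketch of pulling a single day with the `datasets` library is shown below; the repository id is a placeholder (the actual path is not part of this excerpt), and the date can be any config name listed above.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the dataset path this card belongs to.
REPO_ID = "some-org/miner-evaluations"

# Each daily config exposes a split of the same name, so one day's parquet file
# can be loaded without touching the rest of the date range.
day = load_dataset(REPO_ID, name="20250115", split="20250115")
print(day.num_rows, day.column_names)
```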
DIaac/m23k-subproblem-analysis_sieved_sft_ready-0603_1459
DIaac
2025-06-03T07:00:23Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T06:59:54Z
null
--- dataset_info: features: - name: answer_idx dtype: int64 - name: source dtype: string - name: metadata dtype: string - name: prompt dtype: string - name: answer_letter dtype: string - name: answer_string dtype: string - name: reasoning dtype: string - name: distilled_answer_string dtype: string - name: text dtype: string - name: decomposer_raw_output dtype: string - name: subproblems list: - name: critical dtype: bool - name: index dtype: int64 - name: subproblem dtype: string - name: analyzer_raw_output dtype: string - name: subproblem_analysis list: - name: explanation dtype: string - name: status dtype: string - name: subproblem_index dtype: int64 - name: evaluator_raw_output dtype: string - name: consistency_evaluation struct: - name: confidence_score dtype: float64 - name: consistent dtype: bool - name: explanation dtype: string splits: - name: train num_bytes: 105259738 num_examples: 5491 download_size: 48487014 dataset_size: 105259738 configs: - config_name: default data_files: - split: train path: data/train-* ---
willcb/V3-wordle-test
willcb
2025-06-03T06:16:39Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T06:16:38Z
null
--- dataset_info: features: - name: prompt list: - name: content dtype: string - name: role dtype: string - name: completion list: - name: content dtype: string - name: role dtype: string - name: answer dtype: string - name: reward dtype: float64 - name: task dtype: string splits: - name: train num_bytes: 69107.5 num_examples: 10 download_size: 20842 dataset_size: 69107.5 configs: - config_name: default data_files: - split: train path: data/train-* ---
kanishka/babylm2-clean-spacy_no-multi-adj-strict
kanishka
2025-06-03T06:01:32Z
0
0
[ "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T06:01:22Z
null
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 516559880 num_examples: 11802970 - name: validation num_bytes: 58115371 num_examples: 1227839 download_size: 339072290 dataset_size: 574675251 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
romban38/x_dataset_51
romban38
2025-06-03T05:52:22Z
760
0
[ "task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[ "text-classification", "token-classification", "question-answering", "summarization", "text-generation" ]
2025-04-29T16:27:55Z
null
--- license: mit multilinguality: - multilingual source_datasets: - original task_categories: - text-classification - token-classification - question-answering - summarization - text-generation task_ids: - sentiment-analysis - topic-classification - named-entity-recognition - language-modeling - text-scoring - multi-class-classification - multi-label-classification - extractive-qa - news-articles-summarization --- # Bittensor Subnet 13 X (Twitter) Dataset <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> ## Dataset Description - **Repository:** romban38/x_dataset_51 - **Subnet:** Bittensor Subnet 13 - **Miner Hotkey:** 5DyCJ6P43VwGTYC3gqYB2S7wEBSno5jrV4QbnyszXRwJpEqm ### Miner Data Compliance Agreement In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md). ### Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe). ### Supported Tasks The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example: - Sentiment Analysis - Trend Detection - Content Analysis - User Behavior Modeling ### Languages Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation. ## Dataset Structure ### Data Instances Each instance represents a single tweet with the following fields: ### Data Fields - `text` (string): The main content of the tweet. - `label` (string): Sentiment or topic category of the tweet. - `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present. - `datetime` (string): The date when the tweet was posted. - `username_encoded` (string): An encoded version of the username to maintain user privacy. - `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present. ### Data Splits This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp. ## Dataset Creation ### Source Data Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines. ### Personal and Sensitive Information All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information. ## Considerations for Using the Data ### Social Impact and Biases Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. 
This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population. ### Limitations - Data quality may vary due to the decentralized nature of collection and preprocessing. - The dataset may contain noise, spam, or irrelevant content typical of social media platforms. - Temporal biases may exist due to real-time collection methods. - The dataset is limited to public tweets and does not include private accounts or direct messages. - Not all tweets contain hashtags or URLs. ## Additional Information ### Licensing Information The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use. ### Citation Information If you use this dataset in your research, please cite it as follows: ``` @misc{romban382025datauniversex_dataset_51, title={The Data Universe Datasets: The finest collection of social media data the web has to offer}, author={romban38}, year={2025}, url={https://huggingface.co/datasets/romban38/x_dataset_51}, } ``` ### Contributions To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms. ## Dataset Statistics [This section is automatically updated] - **Total Instances:** 14628693350 - **Date Range:** 2025-04-05T00:00:00Z to 2025-05-23T00:00:00Z - **Last Updated:** 2025-06-03T05:52:18Z ### Data Distribution - Tweets with hashtags: 2.19% - Tweets without hashtags: 97.81% ### Top 10 Hashtags For full statistics, please refer to the `stats.json` file in the repository. | Rank | Topic | Total Count | Percentage | |------|-------|-------------|-------------| | 1 | NULL | 14308697202 | 97.81% | | 2 | #lolfanfest2025d1 | 30146934 | 0.21% | | 3 | #lolfanfest2025d2 | 22551791 | 0.15% | | 4 | #riyadh | 10694211 | 0.07% | | 5 | #goスト | 4159373 | 0.03% | | 6 | #eurovisionrtve | 1374060 | 0.01% | | 7 | #eurovision2025 | 1288784 | 0.01% | | 8 | #eurovision | 1259207 | 0.01% | | 9 | #cometobesiktasronaldo | 1100082 | 0.01% | | 10 | #mygoldenbloodep10 | 888209 | 0.01% | ## Update History | Date | New Instances | Total Instances | |------|---------------|-----------------| | 2025-04-29T16:29:24Z | 1 | 1 | | 2025-04-30T09:30:42Z | 1 | 2 | | 2025-05-01T02:32:13Z | 1 | 3 | | 2025-05-01T19:33:40Z | 1 | 4 | | 2025-05-02T12:35:02Z | 1 | 5 | | 2025-05-03T05:36:19Z | 1 | 6 | | 2025-05-03T22:37:26Z | 1 | 7 | | 2025-05-04T15:38:35Z | 1 | 8 | | 2025-05-05T08:41:01Z | 1 | 9 | | 2025-05-05T09:43:22Z | 1 | 10 | | 2025-05-06T02:44:35Z | 1 | 11 | | 2025-05-06T19:46:13Z | 1 | 12 | | 2025-05-07T12:47:19Z | 1 | 13 | | 2025-05-07T13:49:22Z | 1 | 14 | | 2025-05-08T06:50:45Z | 1 | 15 | | 2025-05-08T23:51:54Z | 1 | 16 | | 2025-05-09T16:53:11Z | 1 | 17 | | 2025-05-10T09:54:21Z | 1 | 18 | | 2025-05-11T02:55:26Z | 1 | 19 | | 2025-05-11T19:57:08Z | 1 | 20 | | 2025-05-11T20:59:25Z | 1 | 21 | | 2025-05-12T14:00:52Z | 1 | 22 | | 2025-05-13T07:02:41Z | 1 | 23 | | 2025-05-14T00:04:03Z | 1 | 24 | | 2025-05-14T17:05:39Z | 1 | 25 | | 2025-05-15T10:08:02Z | 1 | 26 | | 2025-05-31T19:26:08Z | 819137 | 819163 | | 2025-05-31T19:29:47Z | 1609813 | 2428976 | | 2025-05-31T19:33:29Z | 2401448 | 4830424 | | 2025-05-31T19:37:22Z | 3185279 | 8015703 | | 2025-05-31T19:48:36Z | 3960220 | 11975923 | | 2025-05-31T19:53:25Z | 4724018 | 16699941 | | 2025-05-31T20:00:33Z | 5479948 | 22179889 | | 2025-05-31T20:04:08Z | 6235521 | 28415410 | | 2025-05-31T20:07:52Z | 7002509 | 35417919 | | 2025-05-31T20:14:10Z | 7775558 | 43193477 | | 2025-05-31T20:26:32Z | 8555573 | 51749050 
| | 2025-05-31T20:32:31Z | 9344100 | 61093150 | | 2025-05-31T20:36:31Z | 10130317 | 71223467 | | 2025-05-31T20:41:53Z | 10898209 | 82121676 | | 2025-05-31T20:50:20Z | 11671337 | 93793013 | | 2025-05-31T20:54:01Z | 12444488 | 106237501 | | 2025-05-31T21:00:08Z | 13208624 | 119446125 | | 2025-05-31T21:07:04Z | 13963498 | 133409623 | | 2025-05-31T21:10:53Z | 14699155 | 148108778 | | 2025-05-31T21:15:28Z | 15447021 | 163555799 | | 2025-05-31T21:19:01Z | 16203507 | 179759306 | | 2025-05-31T21:22:54Z | 16969576 | 196728882 | | 2025-05-31T21:26:51Z | 17759968 | 214488850 | | 2025-05-31T21:33:44Z | 18538015 | 233026865 | | 2025-05-31T21:38:53Z | 19325385 | 252352250 | | 2025-05-31T21:48:23Z | 20106485 | 272458735 | | 2025-05-31T21:52:04Z | 20888119 | 293346854 | | 2025-05-31T21:56:46Z | 21680626 | 315027480 | | 2025-05-31T22:00:53Z | 22454225 | 337481705 | | 2025-05-31T22:04:38Z | 23218937 | 360700642 | | 2025-05-31T22:12:04Z | 23976440 | 384677082 | | 2025-05-31T22:18:24Z | 24733111 | 409410193 | | 2025-05-31T22:25:19Z | 25499977 | 434910170 | | 2025-05-31T22:29:26Z | 26284963 | 461195133 | | 2025-05-31T22:40:01Z | 27086771 | 488281904 | | 2025-05-31T22:44:07Z | 27894206 | 516176110 | | 2025-05-31T22:51:31Z | 28712300 | 544888410 | | 2025-05-31T22:55:29Z | 29503809 | 574392219 | | 2025-05-31T22:59:17Z | 30293616 | 604685835 | | 2025-05-31T23:07:09Z | 31066257 | 635752092 | | 2025-05-31T23:11:00Z | 31831464 | 667583556 | | 2025-05-31T23:16:14Z | 32588297 | 700171853 | | 2025-05-31T23:22:17Z | 33346613 | 733518466 | | 2025-05-31T23:26:10Z | 34106190 | 767624656 | | 2025-05-31T23:30:46Z | 34874999 | 802499655 | | 2025-05-31T23:34:39Z | 35649767 | 838149422 | | 2025-05-31T23:46:23Z | 36430220 | 874579642 | | 2025-05-31T23:50:07Z | 37216456 | 911796098 | | 2025-05-31T23:54:38Z | 38015149 | 949811247 | | 2025-06-01T00:00:58Z | 38833805 | 988645052 | | 2025-06-01T00:05:04Z | 39634133 | 1028279185 | | 2025-06-01T00:13:12Z | 40395737 | 1068674922 | | 2025-06-01T00:16:52Z | 41149114 | 1109824036 | | 2025-06-01T00:21:44Z | 41896170 | 1151720206 | | 2025-06-01T00:31:14Z | 42633936 | 1194354142 | | 2025-06-01T00:40:33Z | 43374794 | 1237728936 | | 2025-06-01T00:46:49Z | 44133534 | 1281862470 | | 2025-06-01T00:53:55Z | 44899792 | 1326762262 | | 2025-06-01T01:03:31Z | 45678925 | 1372441187 | | 2025-06-01T01:07:24Z | 46486639 | 1418927826 | | 2025-06-01T01:11:29Z | 47328802 | 1466256628 | | 2025-06-01T01:17:56Z | 48118154 | 1514374782 | | 2025-06-01T01:21:42Z | 48896689 | 1563271471 | | 2025-06-01T01:25:39Z | 49669203 | 1612940674 | | 2025-06-01T01:32:29Z | 50422070 | 1663362744 | | 2025-06-01T01:36:11Z | 51174925 | 1714537669 | | 2025-06-01T01:40:08Z | 51920571 | 1766458240 | | 2025-06-01T01:52:28Z | 52676569 | 1819134809 | | 2025-06-01T01:58:12Z | 53434237 | 1872569046 | | 2025-06-01T02:06:05Z | 54197152 | 1926766198 | | 2025-06-01T02:12:41Z | 54975028 | 1981741226 | | 2025-06-01T02:16:53Z | 55773522 | 2037514748 | | 2025-06-01T02:23:51Z | 56560985 | 2094075733 | | 2025-06-01T02:27:43Z | 57341387 | 2151417120 | | 2025-06-01T02:32:38Z | 58252437 | 2209669557 | | 2025-06-01T02:40:23Z | 59117168 | 2268786725 | | 2025-06-01T02:44:51Z | 59971152 | 2328757877 | | 2025-06-01T02:52:01Z | 60808453 | 2389566330 | | 2025-06-01T02:56:05Z | 61627940 | 2451194270 | | 2025-06-01T03:00:21Z | 62440982 | 2513635252 | | 2025-06-01T03:07:21Z | 63261054 | 2576896306 | | 2025-06-01T03:17:04Z | 64100639 | 2640996945 | | 2025-06-01T03:21:30Z | 65003673 | 2706000618 | | 2025-06-01T03:29:59Z | 65911819 | 2771912437 | | 
2025-06-01T03:34:53Z | 66819481 | 2838731918 | | 2025-06-01T03:41:22Z | 67702510 | 2906434428 | | 2025-06-01T03:48:06Z | 68594992 | 2975029420 | | 2025-06-01T03:52:26Z | 69459690 | 3044489110 | | 2025-06-01T04:03:45Z | 70315144 | 3114804254 | | 2025-06-01T04:09:17Z | 71151213 | 3185955467 | | 2025-06-01T04:13:29Z | 71968879 | 3257924346 | | 2025-06-01T04:20:39Z | 72790119 | 3330714465 | | 2025-06-01T04:30:10Z | 73620731 | 3404335196 | | 2025-06-01T04:37:36Z | 74457069 | 3478792265 | | 2025-06-01T04:41:53Z | 75325869 | 3554118134 | | 2025-06-01T04:46:32Z | 76212492 | 3630330626 | | 2025-06-01T04:55:13Z | 77097629 | 3707428255 | | 2025-06-01T05:01:46Z | 77958308 | 3785386563 | | 2025-06-01T05:06:30Z | 78813390 | 3864199953 | | 2025-06-01T05:11:23Z | 79658265 | 3943858218 | | 2025-06-01T05:16:05Z | 80503428 | 4024361646 | | 2025-06-01T05:24:08Z | 81332863 | 4105694509 | | 2025-06-01T05:28:21Z | 82164562 | 4187859071 | | 2025-06-01T05:35:48Z | 82996608 | 4270855679 | | 2025-06-01T05:40:51Z | 83806958 | 4354662637 | | 2025-06-01T05:50:18Z | 84643709 | 4439306346 | | 2025-06-01T05:56:07Z | 85516039 | 4524822385 | | 2025-06-01T06:00:36Z | 86421122 | 4611243507 | | 2025-06-01T06:05:15Z | 87328220 | 4698571727 | | 2025-06-01T06:10:34Z | 88218428 | 4786790155 | | 2025-06-01T06:16:54Z | 89092983 | 4875883138 | | 2025-06-01T06:21:20Z | 89950598 | 4965833736 | | 2025-06-01T06:25:54Z | 90818066 | 5056651802 | | 2025-06-01T06:31:40Z | 91658010 | 5148309812 | | 2025-06-01T06:40:25Z | 92487284 | 5240797096 | | 2025-06-01T06:47:44Z | 93308923 | 5334106019 | | 2025-06-01T06:51:46Z | 94120367 | 5428226386 | | 2025-06-01T06:56:10Z | 94981968 | 5523208354 | | 2025-06-01T07:06:00Z | 95866211 | 5619074565 | | 2025-06-01T07:10:30Z | 96762381 | 5715836946 | | 2025-06-01T07:15:04Z | 97669684 | 5813506630 | | 2025-06-01T07:19:25Z | 98558453 | 5912065083 | | 2025-06-01T07:23:48Z | 99438475 | 6011503558 | | 2025-06-01T07:28:08Z | 100280809 | 6111784367 | | 2025-06-01T07:37:32Z | 101124982 | 6212909349 | | 2025-06-01T07:42:55Z | 101950482 | 6314859831 | | 2025-06-01T07:47:06Z | 102781557 | 6417641388 | | 2025-06-01T07:51:15Z | 103604366 | 6521245754 | | 2025-06-01T07:55:34Z | 104425801 | 6625671555 | | 2025-06-01T08:04:19Z | 105263345 | 6730934900 | | 2025-06-01T08:08:50Z | 106130946 | 6837065846 | | 2025-06-01T08:13:26Z | 107024366 | 6944090212 | | 2025-06-01T08:17:51Z | 107921050 | 7052011262 | | 2025-06-01T08:22:10Z | 108800298 | 7160811560 | | 2025-06-01T08:26:57Z | 109680407 | 7270491967 | | 2025-06-01T08:33:20Z | 110529049 | 7381021016 | | 2025-06-01T08:37:32Z | 111370819 | 7492391835 | | 2025-06-01T08:41:40Z | 112186046 | 7604577881 | | 2025-06-01T08:45:44Z | 112994181 | 7717572062 | | 2025-06-01T08:49:44Z | 113804531 | 7831376593 | | 2025-06-01T08:54:20Z | 114618695 | 7945995288 | | 2025-06-01T09:02:54Z | 115456687 | 8061451975 | | 2025-06-01T09:07:21Z | 116326296 | 8177778271 | | 2025-06-01T09:11:44Z | 117231532 | 8295009803 | | 2025-06-01T09:16:17Z | 118127454 | 8413137257 | | 2025-06-01T09:20:51Z | 118987582 | 8532124839 | | 2025-06-01T09:29:11Z | 119863415 | 8651988254 | | 2025-06-01T09:34:37Z | 120725935 | 8772714189 | | 2025-06-01T09:38:49Z | 121577540 | 8894291729 | | 2025-06-01T09:42:55Z | 122422046 | 9016713775 | | 2025-06-01T09:48:46Z | 123243035 | 9139956810 | | 2025-06-01T09:55:10Z | 124072684 | 9264029494 | | 2025-06-01T09:59:15Z | 124902815 | 9388932309 | | 2025-06-01T10:03:29Z | 125738896 | 9514671205 | | 2025-06-01T10:09:43Z | 126609885 | 9641281090 | | 2025-06-01T10:14:12Z | 127524630 | 
9768805720 | | 2025-06-01T10:22:38Z | 128430143 | 9897235863 | | 2025-06-01T10:26:55Z | 129310475 | 10026546338 | | 2025-06-01T10:31:19Z | 130218521 | 10156764859 | | 2025-06-01T10:35:32Z | 131095948 | 10287860807 | | 2025-06-01T10:39:45Z | 131950389 | 10419811196 | | 2025-06-01T10:44:17Z | 132797300 | 10552608496 | | 2025-06-01T10:54:10Z | 133640653 | 10686249149 | | 2025-06-01T10:58:19Z | 134480700 | 10820729849 | | 2025-06-01T11:02:32Z | 135336165 | 10956066014 | | 2025-06-01T11:06:49Z | 136234632 | 11092300646 | | 2025-06-01T11:11:32Z | 137165840 | 11229466486 | | 2025-06-01T11:17:32Z | 138124317 | 11367590803 | | 2025-06-01T11:22:07Z | 139088025 | 11506678828 | | 2025-06-01T11:26:28Z | 140005124 | 11646683952 | | 2025-06-01T11:30:55Z | 140925335 | 11787609287 | | 2025-06-01T11:35:10Z | 141803316 | 11929412603 | | 2025-06-01T11:45:01Z | 142701228 | 12072113831 | | 2025-06-01T11:49:17Z | 143597883 | 12215711714 | | 2025-06-01T11:53:23Z | 144440412 | 12360152126 | | 2025-06-01T11:57:41Z | 145276617 | 12505428743 | | 2025-06-01T12:01:48Z | 146129107 | 12651557850 | | 2025-06-01T12:09:25Z | 147008048 | 12798565898 | | 2025-06-01T12:13:52Z | 147911543 | 12946477441 | | 2025-06-01T12:24:21Z | 933291 | 12947410732 | | 2025-06-01T12:26:50Z | 1875241 | 12949285973 | | 2025-06-01T12:28:59Z | 2785485 | 12952071458 | | 2025-06-01T12:31:02Z | 3681998 | 12955753456 | | 2025-06-01T12:33:02Z | 4589844 | 12960343300 | | 2025-06-01T12:35:05Z | 5495591 | 12965838891 | | 2025-06-01T12:37:19Z | 6430853 | 12972269744 | | 2025-06-01T12:38:48Z | 7034896 | 12979304640 | | 2025-06-01T12:40:50Z | 7940372 | 12987245012 | | 2025-06-01T12:42:58Z | 8863328 | 12996108340 | | 2025-06-01T12:45:07Z | 9799297 | 13005907637 | | 2025-06-01T12:47:25Z | 10741705 | 13016649342 | | 2025-06-01T12:49:39Z | 11690649 | 13028339991 | | 2025-06-01T12:51:22Z | 12384659 | 13040724650 | | 2025-06-01T12:52:38Z | 12875396 | 13053600046 | | 2025-06-01T12:54:19Z | 13611437 | 13067211483 | | 2025-06-01T12:55:30Z | 14098691 | 13081310174 | | 2025-06-01T12:57:47Z | 15026414 | 13096336588 | | 2025-06-01T12:59:54Z | 15913171 | 13112249759 | | 2025-06-01T13:02:08Z | 16798327 | 13129048086 | | 2025-06-01T13:04:20Z | 17683854 | 13146731940 | | 2025-06-01T13:06:33Z | 18615685 | 13165347625 | | 2025-06-01T13:09:30Z | 19529359 | 13184876984 | | 2025-06-01T13:12:01Z | 20493302 | 13205370286 | | 2025-06-01T13:14:17Z | 21432407 | 13226802693 | | 2025-06-01T13:16:32Z | 22374219 | 13249176912 | | 2025-06-01T13:18:49Z | 23310360 | 13272487272 | | 2025-06-01T13:21:06Z | 24252912 | 13296740184 | | 2025-06-01T13:23:23Z | 25192434 | 13321932618 | | 2025-06-01T13:25:28Z | 25969175 | 13347901793 | | 2025-06-01T13:27:20Z | 26587383 | 13374489176 | | 2025-06-01T13:29:01Z | 27138395 | 13401627571 | | 2025-06-01T13:30:45Z | 27686692 | 13429314263 | | 2025-06-01T13:32:30Z | 28208656 | 13457522919 | | 2025-06-01T13:34:13Z | 28740540 | 13486263459 | | 2025-06-01T13:36:02Z | 29293100 | 13515556559 | | 2025-06-01T13:37:52Z | 29873530 | 13545430089 | | 2025-06-01T13:39:40Z | 30491744 | 13575921833 | | 2025-06-01T13:41:30Z | 31142418 | 13607064251 | | 2025-06-01T13:43:29Z | 31812918 | 13638877169 | | 2025-06-01T13:45:15Z | 32448635 | 13671325804 | | 2025-06-01T13:47:17Z | 33067426 | 13704393230 | | 2025-06-01T13:49:03Z | 33682537 | 13738075767 | | 2025-06-01T13:50:36Z | 34243905 | 13772319672 | | 2025-06-01T13:52:11Z | 34791417 | 13807111089 | | 2025-06-01T13:53:59Z | 35315121 | 13842426210 | | 2025-06-01T13:55:46Z | 35847276 | 13878273486 | | 2025-06-01T13:57:22Z | 
36382923 | 13914656409 | | 2025-06-01T13:58:57Z | 36949273 | 13951605682 | | 2025-06-01T14:00:37Z | 37561219 | 13989166901 | | 2025-06-01T14:02:20Z | 38210180 | 14027377081 | | 2025-06-01T14:04:13Z | 38861358 | 14066238439 | | 2025-06-01T14:05:56Z | 39482546 | 14105720985 | | 2025-06-01T14:07:36Z | 40121069 | 14145842054 | | 2025-06-01T14:09:27Z | 40800385 | 14186642439 | | 2025-06-01T14:11:06Z | 41413656 | 14228056095 | | 2025-06-01T14:12:39Z | 42002001 | 14270058096 | | 2025-06-01T14:14:08Z | 42574099 | 14312632195 | | 2025-06-01T14:16:39Z | 43168704 | 14355800899 | | 2025-06-01T14:18:22Z | 43786835 | 14399587734 | | 2025-06-01T14:20:15Z | 44435173 | 14444022907 | | 2025-06-01T14:21:57Z | 45099800 | 14489122707 | | 2025-06-01T14:23:44Z | 45790918 | 14534913625 | | 2025-06-01T14:25:35Z | 46532582 | 14581446207 | | 2025-06-01T14:27:24Z | 47247104 | 14628693311 | | 2025-06-01T15:51:48Z | 1 | 14628693312 | | 2025-06-01T16:52:58Z | 1 | 14628693313 | | 2025-06-01T17:15:08Z | 1 | 14628693314 | | 2025-06-01T18:15:46Z | 1 | 14628693315 | | 2025-06-01T19:16:42Z | 1 | 14628693316 | | 2025-06-01T20:17:41Z | 1 | 14628693317 | | 2025-06-01T21:18:38Z | 1 | 14628693318 | | 2025-06-01T22:19:43Z | 1 | 14628693319 | | 2025-06-01T23:20:45Z | 1 | 14628693320 | | 2025-06-02T00:21:40Z | 1 | 14628693321 | | 2025-06-02T01:22:37Z | 1 | 14628693322 | | 2025-06-02T02:23:33Z | 1 | 14628693323 | | 2025-06-02T03:24:36Z | 1 | 14628693324 | | 2025-06-02T04:25:58Z | 1 | 14628693325 | | 2025-06-02T05:27:04Z | 1 | 14628693326 | | 2025-06-02T06:28:04Z | 1 | 14628693327 | | 2025-06-02T07:29:02Z | 1 | 14628693328 | | 2025-06-02T08:29:59Z | 1 | 14628693329 | | 2025-06-02T09:31:01Z | 1 | 14628693330 | | 2025-06-02T10:32:02Z | 1 | 14628693331 | | 2025-06-02T11:33:07Z | 1 | 14628693332 | | 2025-06-02T12:34:08Z | 1 | 14628693333 | | 2025-06-02T13:35:12Z | 1 | 14628693334 | | 2025-06-02T14:36:13Z | 1 | 14628693335 | | 2025-06-02T15:37:22Z | 1 | 14628693336 | | 2025-06-02T16:38:24Z | 1 | 14628693337 | | 2025-06-02T17:39:36Z | 1 | 14628693338 | | 2025-06-02T18:40:40Z | 1 | 14628693339 | | 2025-06-02T19:41:38Z | 1 | 14628693340 | | 2025-06-02T20:42:39Z | 1 | 14628693341 | | 2025-06-02T21:43:39Z | 1 | 14628693342 | | 2025-06-02T22:44:53Z | 1 | 14628693343 | | 2025-06-02T23:46:04Z | 1 | 14628693344 | | 2025-06-03T00:47:07Z | 1 | 14628693345 | | 2025-06-03T01:48:08Z | 1 | 14628693346 | | 2025-06-03T02:49:11Z | 1 | 14628693347 | | 2025-06-03T03:50:09Z | 1 | 14628693348 | | 2025-06-03T04:51:07Z | 1 | 14628693349 | | 2025-06-03T05:52:18Z | 1 | 14628693350 |
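Given the fields listed under Data Fields and the row counts reported in the statistics above, streaming access is the practical way to sample this corpus. A minimal sketch, assuming the default `train` split carries the schema described in this card:

```python
from datasets import load_dataset

# Stream rather than download; field names follow the "Data Fields" section above.
stream = load_dataset("romban38/x_dataset_51", split="train", streaming=True)

# Keep only tweets that carry at least one hashtag (roughly 2% of rows per the stats).
for i, row in enumerate(r for r in stream if r.get("tweet_hashtags")):
    print(row["datetime"], row["tweet_hashtags"][:3], row["text"][:80])
    if i == 4:
        break
```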
ixelszy/lena
ixelszy
2025-06-03T05:48:39Z
0
0
[ "license:creativeml-openrail-m", "region:us" ]
[]
2025-06-03T04:48:13Z
null
--- license: creativeml-openrail-m ---
allenai/sciriff-yesno
allenai
2025-06-03T04:54:09Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T04:50:10Z
null
--- dataset_info: features: - name: id dtype: string - name: context dtype: string - name: question dtype: string - name: answer dtype: string - name: metadata struct: - name: domains sequence: string - name: input_context dtype: string - name: output_context dtype: string - name: source_type dtype: string - name: task_family dtype: string splits: - name: train num_bytes: 3713449 num_examples: 1582 - name: validation num_bytes: 267088 num_examples: 130 - name: test num_bytes: 881291 num_examples: 531 download_size: 2326429 dataset_size: 4861828 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
Thanarit/Thai-Voice-Test-Main-Final
Thanarit
2025-06-03T04:52:09Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T04:27:12Z
null
--- dataset_info: features: - name: ID dtype: string - name: speaker_id dtype: string - name: Language dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: transcript dtype: string - name: length dtype: float32 - name: dataset_name dtype: string - name: confidence_score dtype: float64 splits: - name: train num_examples: 20 download_size: 0 dataset_size: 0 configs: - config_name: default data_files: - split: train path: data/train/*.parquet --- # Thanarit/Thai-Voice Combined Thai audio dataset from multiple sources ## Dataset Details - **Total samples**: 20 - **Total duration**: 0.02 hours - **Language**: Thai (th) - **Audio format**: 16kHz mono WAV - **Volume normalization**: -20dB ## Sources Processed 1 dataset in streaming mode ## Source Datasets 1. **GigaSpeech2**: Large-scale multilingual speech corpus ## Usage ```python from datasets import load_dataset # Load with streaming to avoid downloading everything dataset = load_dataset("Thanarit/Thai-Voice-Test-Main-Final", streaming=True) # Iterate through samples for sample in dataset['train']: print(sample['ID'], sample['transcript'][:50]) # Process audio: sample['audio'] break ``` ## Schema - `ID`: Unique identifier (S1, S2, S3, ...) - `speaker_id`: Speaker identifier (SPK_00001, SPK_00002, ...) - `Language`: Language code (always "th" for Thai) - `audio`: Audio data with 16kHz sampling rate - `transcript`: Text transcript of the audio - `length`: Duration in seconds - `dataset_name`: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice") - `confidence_score`: Confidence score of the transcript (0.0-1.0) - 1.0: Original transcript from source dataset - <1.0: STT-generated transcript - 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT]) ## Processing Details This dataset was created using streaming processing to handle large-scale data without requiring full downloads. Audio has been standardized to 16kHz mono with -20dB volume normalization.
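Building on the usage snippet above, the `confidence_score` field can be used to keep only samples whose transcripts came straight from the source corpus. A small sketch, assuming the schema listed in this card:

```python
from datasets import load_dataset

stream = load_dataset("Thanarit/Thai-Voice-Test-Main-Final", split="train", streaming=True)

# confidence_score == 1.0 marks original transcripts; lower values are STT output
# and 0.0 is the [NO_TRANSCRIPT] fallback, both skipped here.
for sample in stream:
    if sample["confidence_score"] == 1.0:
        audio = sample["audio"]  # dict with "array" and "sampling_rate" (16 kHz)
        print(sample["ID"], sample["dataset_name"], round(sample["length"], 2))
        break
```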
tatung/hybrid_gripper_paper_pickup
tatung
2025-06-03T04:14:10Z
0
0
[ "task_categories:robotics", "size_categories:n<1K", "modality:video", "library:datasets", "library:mlcroissant", "region:us", "phosphobot", "so100", "phospho-dk" ]
[ "robotics" ]
2025-06-03T03:34:19Z
null
--- tags: - phosphobot - so100 - phospho-dk task_categories: - robotics --- # hybrid_gripper_paper_pickup **This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).** This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
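Since the card states the recordings are LeRobot-compatible, a rough sketch of wrapping them in a training loader is shown below; it assumes a recent `lerobot` release exposes `LeRobotDataset` under this import path, so check the installed version's documentation if the module has moved.

```python
from torch.utils.data import DataLoader
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # assumed import path

dataset = LeRobotDataset("tatung/hybrid_gripper_paper_pickup")
loader = DataLoader(dataset, batch_size=32, shuffle=True)

batch = next(iter(loader))
# Expect action/state tensors plus one entry per camera stream.
print(sorted(batch.keys()))
```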
CohenQu/deepscalar_RL_hard_1_verl
CohenQu
2025-06-03T03:21:42Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T03:21:40Z
null
--- dataset_info: features: - name: data_source dtype: 'null' - name: prompt list: - name: content dtype: string - name: role dtype: string - name: ability dtype: string - name: reward_model struct: - name: ground_truth dtype: string - name: style dtype: string - name: extra_info struct: - name: index dtype: int64 - name: no_hint_prompt dtype: bool - name: problem dtype: string - name: split dtype: string splits: - name: train num_bytes: 1567875 num_examples: 3000 - name: test num_bytes: 191369 num_examples: 300 download_size: 151914 dataset_size: 1759244 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
RebecaLeyva/UNAM_ParraPostPartum_dataset
RebecaLeyva
2025-06-03T03:05:07Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T02:56:03Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string splits: - name: train num_bytes: 778221838.92 num_examples: 3024 - name: test num_bytes: 138775756.0 num_examples: 534 download_size: 937487644 dataset_size: 916997594.92 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
mothnaZl/seq_dis_T0.6-Qwen2.5-7B-best_of_n-VLLM-Skywork-o1-Open-PRM-Qwen-2.5-7B-completions
mothnaZl
2025-06-03T02:41:33Z
86
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-13T08:08:11Z
null
--- dataset_info: config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals features: - name: n dtype: int64 - name: acc_naive dtype: float64 - name: acc_weighted dtype: float64 - name: acc_maj dtype: float64 - name: pass@n dtype: float64 - name: div_avg dtype: float64 - name: div_sum dtype: float64 - name: div_mean dtype: float64 - name: Unigrams dtype: float64 - name: Bigrams dtype: float64 - name: Trigrams dtype: float64 - name: Fourgrams dtype: float64 - name: pass_tag sequence: 'null' - name: BM25 dtype: int64 - name: pred_entropy dtype: float64 splits: - name: train num_bytes: 928 num_examples: 8 download_size: 7123 dataset_size: 928 configs: - config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals data_files: - split: train path: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--merged--evals/train-* ---
ShuoHsuan/grasp_0603
ShuoHsuan
2025-06-03T02:40:54Z
0
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so100", "collect" ]
[ "robotics" ]
2025-06-03T02:40:32Z
null
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so100 - collect configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100", "total_episodes": 20, "total_frames": 5302, "total_tasks": 1, "total_videos": 40, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:20" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.images.laptop": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.phone": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
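The `meta/info.json` excerpt above is enough to derive basic episode statistics without downloading any video. A small sketch using `huggingface_hub`:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the metadata shown above and derive the average episode length.
path = hf_hub_download("ShuoHsuan/grasp_0603", "meta/info.json", repo_type="dataset")
with open(path) as f:
    info = json.load(f)

avg_frames = info["total_frames"] / info["total_episodes"]  # 5302 / 20 = 265.1
print(f"{avg_frames:.1f} frames per episode = {avg_frames / info['fps']:.1f} s at {info['fps']} fps")
```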
hamidkaloorazi/so100_bi_towel_hadi
hamidkaloorazi
2025-06-03T02:35:20Z
0
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so100_bi" ]
[ "robotics" ]
2025-06-03T02:21:46Z
null
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so100_bi configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_bimanual", "total_episodes": 101, "total_frames": 32489, "total_tasks": 1, "total_videos": 303, "total_chunks": 1, "chunks_size": 1000, "fps": 15, "splits": { "train": "0:101" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 12 ], "names": [ "left_shoulder_pan", "left_shoulder_lift", "left_elbow_flex", "left_wrist_flex", "left_wrist_roll", "left_gripper", "right_shoulder_pan", "right_shoulder_lift", "right_elbow_flex", "right_wrist_flex", "right_wrist_roll", "right_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 12 ], "names": [ "left_shoulder_pan", "left_shoulder_lift", "left_elbow_flex", "left_wrist_flex", "left_wrist_roll", "left_gripper", "right_shoulder_pan", "right_shoulder_lift", "right_elbow_flex", "right_wrist_flex", "right_wrist_roll", "right_gripper" ] }, "observation.images.right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.images.left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
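With twelve action dimensions ordered left arm then right arm (per the feature names above), downstream code usually wants the vector split back into named joints. A small helper sketch, assuming actions arrive as 12-D tensors in that order:

```python
import torch

# Joint order taken from the feature spec above: six left-arm dims, then six right-arm dims.
JOINTS = ["shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper"]

def split_bimanual(action: torch.Tensor) -> dict:
    """Map a 12-D action vector onto named left/right joint commands."""
    left, right = action[:6], action[6:]
    named = {f"left_{j}": v.item() for j, v in zip(JOINTS, left)}
    named.update({f"right_{j}": v.item() for j, v in zip(JOINTS, right)})
    return named

print(split_bimanual(torch.zeros(12)))
```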
DanqingZ/so100_test_3_cameras
DanqingZ
2025-06-03T02:21:48Z
0
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so100", "tutorial" ]
[ "robotics" ]
2025-06-03T02:21:36Z
null
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so100 - tutorial configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100", "total_episodes": 2, "total_frames": 1189, "total_tasks": 1, "total_videos": 6, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:2" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.images.logitech": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.on_robot": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.phone": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
Darkhn/WOFRP_V2_All_Good_Stories
Darkhn
2025-06-03T00:30:40Z
0
0
[ "license:apache-2.0", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-03T00:30:27Z
null
--- license: apache-2.0 ---
jlbaker361/clip-league_captioned_splash
jlbaker361
2025-06-03T00:08:18Z
122
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-23T15:07:58Z
null
--- dataset_info: features: - name: image dtype: image - name: embedding sequence: sequence: sequence: float32 - name: text sequence: sequence: sequence: float16 - name: prompt dtype: string - name: posterior sequence: sequence: sequence: float16 splits: - name: train num_bytes: 410219076.5 num_examples: 1804 download_size: 397404716 dataset_size: 410219076.5 configs: - config_name: default data_files: - split: train path: data/train-* ---
jlbaker361/clip-league_captioned_tile
jlbaker361
2025-06-03T00:07:46Z
124
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-23T15:07:56Z
null
--- dataset_info: features: - name: image dtype: image - name: embedding sequence: sequence: sequence: float32 - name: text sequence: sequence: sequence: float16 - name: prompt dtype: string - name: posterior sequence: sequence: sequence: float16 splits: - name: train num_bytes: 451302098.5 num_examples: 1804 download_size: 438463892 dataset_size: 451302098.5 configs: - config_name: default data_files: - split: train path: data/train-* ---
jlbaker361/dino-art_coco_captioned-50
jlbaker361
2025-06-02T23:57:12Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-02T23:57:10Z
null
--- dataset_info: features: - name: image dtype: image - name: embedding sequence: sequence: sequence: float16 - name: text sequence: sequence: sequence: float16 - name: prompt dtype: string - name: posterior sequence: sequence: sequence: float16 splits: - name: train num_bytes: 12863187.0 num_examples: 50 download_size: 12456942 dataset_size: 12863187.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
VisualSphinx/VisualSphinx-V1-Rules
VisualSphinx
2025-06-02T23:55:18Z
55
0
[ "language:en", "language:zh", "license:cc-by-nc-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2505.23977", "region:us" ]
[]
2025-05-12T21:27:53Z
null
--- license: cc-by-nc-4.0 dataset_info: features: - name: id dtype: int32 - name: rule_content sequence: string - name: generation dtype: int32 - name: parents sequence: int32 - name: mutated dtype: bool - name: question_type dtype: string - name: knowledge_point dtype: string - name: times_used dtype: int32 - name: creation_method dtype: string - name: format_score dtype: int32 - name: content_quality_score dtype: int32 - name: feasibility_score dtype: int32 splits: - name: synthetic_rules num_bytes: 60383953 num_examples: 60339 - name: rules_filted num_bytes: 40152007 num_examples: 41287 download_size: 48914349 dataset_size: 100535960 configs: - config_name: default data_files: - split: synthetic_rules path: data/synthetic_rules-* - split: rules_filted path: data/rules_filted-* language: - en - zh --- # 🦁 VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL VisualSphinx is the largest fully-synthetic open-source dataset providing vision logic puzzles. It consists of over **660K** automatically generated logical visual puzzles. Each logical puzzle is grounded with an interpretable rule and accompanied by both correct answers and plausible distractors. - 🌐 [Project Website](https://visualsphinx.github.io/) - Learn more about VisualSphinx - 📖 [Technical Report](https://arxiv.org/abs/2505.23977) - Discover the methodology and technical details behind VisualSphinx - 🔧 [Github Repo](https://github.com/VisualSphinx/VisualSphinx) - Access the complete pipeline used to produce VisualSphinx-V1 - 🤗 HF Datasets: - [VisualSphinx-V1 (Raw)](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-V1-Raw); - [VisualSphinx-V1 (For RL)](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-V1-RL-20K); - [VisualSphinx-V1 (Benchmark)](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-V1-Benchmark); - [VisualSphinx (Seeds)](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-Seeds); - [VisualSphinx (Rules)](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-V1-Rules). [📍| You are here!] ![VisualSphinx](https://visualsphinx.github.io/static/images/pipeline.jpg) ## 📊 Dataset Details ### 🎯 Purpose This dataset contains the **synthetic logical rules** that power the next step VisualSphinx generation. These rules represent the core logical patterns and constraints used to automatically generate thousands of coherent visual logic puzzles with interpretable reasoning paths. 
### 📈 Dataset Splits - **`synthetic_rules`**: Contains all generated synthetic rules with complete metadata - **`rules_filted`**: Contains only high-quality rules that passed filtering criteria ### 🏗️ Dataset Structure Each rule in the dataset contains the following fields: | Field | Type | Description | |-------|------|-------------| | `id` | `int32` | Unique identifier for each rule | | `rule_content` | `Sequence[string]` | List of logical statements defining the rule | | `generation` | `int32` | Generation number in the evolutionary process | | `parents` | `Sequence[int32]` | IDs of parent rules (for rule evolution tracking) | | `mutated` | `bool` | Whether this rule was created through mutation | | `question_type` | `string` | Category of questions this rule generates | | `knowledge_point` | `string` | Associated knowledge domain or concept | | `times_used` | `int32` | Number of times this rule was used to generate puzzles | | `creation_method` | `string` | Method used to create this rule (e.g., manual, genetic, hybrid) | | `format_score` | `int32` | Structural formatting quality score (1-10 scale) | | `content_quality_score` | `int32` | Logical coherence and clarity score (1-10 scale) | | `feasibility_score` | `int32` | Practical applicability score (1-10 scale) | ### 📏 Dataset Statistics - **Total Rules**: Comprehensive collection of synthetic logical rules - **Rule Evolution**: Multi-generational rule development with parent-child relationships - **Quality Control**: Triple-scored validation (format, content, feasibility) - **Usage Tracking**: Statistics on rule effectiveness and popularity ### 🧬 Rule Evolution Process The dataset captures a complete evolutionary process: - **Inheritance**: Child rules inherit characteristics from parent rules - **Mutation**: Systematic variations create new rule variants - **Selection**: Quality scores determine rule survival and usage - **Genealogy**: Full family trees of rule development preserved ## 🔧 Other Information **License**: Please follow [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en). **Contact**: Please contact [Yichen](mailto:yfeng42@uw.edu) by email. ## 📚 Citation If you find the data or code useful, please cite: ``` @misc{feng2025visualsphinx, title={VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL}, author={Yichen Feng and Zhangchen Xu and Fengqing Jiang and Yuetai Li and Bhaskar Ramasubramanian and Luyao Niu and Bill Yuchen Lin and Radha Poovendran}, year={2025}, eprint={2505.23977}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2505.23977}, } ```
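As a hedged illustration (not part of the card above) of how the splits and score fields described there might be used, here is a minimal sketch with the 🤗 `datasets` library; the threshold of 8 is an arbitrary example, not a value prescribed by the dataset authors.

```python
# Minimal sketch: load the rule splits and keep only high-scoring rules.
from datasets import load_dataset

rules = load_dataset("VisualSphinx/VisualSphinx-V1-Rules")
print(rules)  # DatasetDict with the synthetic_rules and rules_filted splits

# Example filter on the three quality scores (threshold chosen arbitrarily).
high_quality = rules["synthetic_rules"].filter(
    lambda r: r["format_score"] >= 8
    and r["content_quality_score"] >= 8
    and r["feasibility_score"] >= 8
)
print(len(high_quality), "rules pass the example threshold")
print(high_quality[0]["rule_content"])  # list of logical statements for one rule
```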
jlbaker361/clip-league_captioned_splash-50
jlbaker361
2025-06-02T23:50:50Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-02T19:48:58Z
null
--- dataset_info: features: - name: image dtype: image - name: embedding sequence: sequence: sequence: float32 - name: text sequence: sequence: sequence: float16 - name: prompt dtype: string - name: posterior sequence: sequence: sequence: float16 splits: - name: train num_bytes: 11397939.0 num_examples: 50 download_size: 11034144 dataset_size: 11397939.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
jlbaker361/siglip2-celeb_captioned-50
jlbaker361
2025-06-02T23:46:18Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-02T19:47:24Z
null
--- dataset_info: features: - name: image dtype: image - name: embedding sequence: sequence: sequence: float16 - name: text sequence: sequence: sequence: float16 - name: prompt dtype: string - name: posterior sequence: sequence: sequence: float16 splits: - name: train num_bytes: 11989461.0 num_examples: 50 download_size: 11610708 dataset_size: 11989461.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
jlbaker361/dino-league_captioned_splash-50
jlbaker361
2025-06-02T23:16:05Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-02T19:49:33Z
null
--- dataset_info: features: - name: image dtype: image - name: embedding sequence: sequence: sequence: float16 - name: text sequence: sequence: sequence: float16 - name: prompt dtype: string - name: posterior sequence: sequence: sequence: float16 splits: - name: train num_bytes: 11589939.0 num_examples: 50 download_size: 11127555 dataset_size: 11589939.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
kothasuhas/residual-teacher-6-2-iter-0-ctx16-12800000
kothasuhas
2025-06-02T23:11:55Z
0
0
[ "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-02T20:09:39Z
null
--- dataset_info: features: - name: text dtype: string - name: input_ids sequence: int32 splits: - name: train num_bytes: 1694264139 num_examples: 12800000 download_size: 1203132206 dataset_size: 1694264139 configs: - config_name: default data_files: - split: train path: data/train-* ---
daniel-dona/sparql-dataset-reasoning-test1
daniel-dona
2025-06-02T22:12:41Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-02T22:12:37Z
null
--- dataset_info: features: - name: qid dtype: string - name: lang dtype: string - name: nlq dtype: string - name: classes sequence: string - name: properties sequence: string - name: features sequence: string - name: sparql dtype: string - name: reasoning dtype: string splits: - name: train num_bytes: 205214 num_examples: 50 download_size: 95200 dataset_size: 205214 configs: - config_name: default data_files: - split: train path: data/train-* ---
kuzheren/geometry-dash-levels-tensors-2
kuzheren
2025-06-02T22:12:11Z
0
0
[ "license:apache-2.0", "region:us" ]
[]
2025-06-02T22:03:01Z
null
--- license: apache-2.0 --- # Geometry Dash Chunks HDF5 Dataset ## Description This dataset was built for training neural network models (autoencoder, DiT, diffusion, etc.) on real Geometry Dash levels. It is stored in HDF5 format (each file contains at most 5000 levels) and is prepared for fast, efficient work with sequences of level chunks. ## Data structure Each HDF5 file contains: - Dataset: chunk_data - tensors with level chunks. - Dataset: valid_mask - boolean mask of the valid chunks in each level. - Attribute: metadata_json_list - JSON list of level metadata from the original .jsonl files (excluding level_string and unimportant service fields). - Other attributes describe tensor dimensions and the meaning of the features. ### chunk_data Shape: ```num_levels, max_seq_len, chunk_h, chunk_w, num_block_features``` **Values: int32** - num_levels - number of levels in the file - max_seq_len - maximum number of chunks across all levels in this file - chunk_h - chunk height in grid "pixels" (e.g. 32) - chunk_w - chunk width (e.g. 128) - num_block_features - number of features per cell ### valid_mask Shape: ```num_levels, max_seq_len``` **Type: bool** - Indicates which chunks in each level contain real data (True) and which were added as padding (False). ### metadata_json_list (attribute) A JSON list with the metadata of every level in the file. Example of a single entry: ``` { "level_id": 123456, "level_name": "My Level", "difficulty_stars": 5, "length_code": 2, "downloads": 1234, "likes": 56, "num_chunks_generated": 12 } ``` level_string (and similar service fields) is not stored, to save space and speed up access. ## Contents of a single chunk - Each chunk is a `chunk_h x chunk_w` grid. - Each cell stores an array of `num_block_features` numbers: 1. block_id - integer, Geometry Dash block identifier (0 = empty) 2. x_rel - cell index (0 .. chunk_w-1) 3. y_rel - cell index (0 .. chunk_h-1) 4. rotation_index - 0-3 (corresponding to 0°/90°/180°/270°) 5. flip_combined - flip code: 0=none, 1=flip_y, 2=flip_x, 3=flip_x+flip_y An empty cell has block_id=0 and all other values zero. ## How to read the dataset Example in Python using the h5py and numpy libraries: ``` import json import h5py import numpy as np filename = "gd_dataset_chunked_part_1.h5" with h5py.File(filename, "r") as hf: chunk_data = hf["chunk_data"] # Shape: (num_levels, max_seq_len, chunk_h, chunk_w, num_block_features) valid_mask = hf["valid_mask"] # Shape: (num_levels, max_seq_len) meta_json = hf.attrs["metadata_json_list"] metadata = json.loads(meta_json) # Example: get all chunks of the first level: idx = 0 real_len = valid_mask[idx].sum() level_chunks = chunk_data[idx, :real_len] # (real_len, chunk_h, chunk_w, num_block_features) # Decode the first chunk of the level: chunk = level_chunks[0] # (chunk_h, chunk_w, num_block_features) block_ids = chunk[:,:,0] # block id map x_coords = chunk[:,:,1] y_coords = chunk[:,:,2] rotation_idxs = chunk[:,:,3] flip_combined = chunk[:,:,4] ``` ## How to use in a DataLoader - For training transformer/DiT models: build batches of levels (sequences of chunks) and use valid_mask for the attention mask and for masking in the loss. - For an autoencoder: take individual chunks, read the `chunk_h x chunk_w x num_block_features` tensors; empty blocks can be ignored or padded.
## Feature description | Index | Name | Description | |--------|----------------|---------------------------------------------------------------| | 0 | block_id | GD block ID. 0 = empty | | 1 | x_rel | X (column) within the chunk, 0 .. chunk_w-1 | | 2 | y_rel | Y (row) within the chunk, 0 .. chunk_h-1 | | 3 | rotation_index | Rotation index: 0=0°, 1=90°, 2=180°, 3=270° | | 4 | flip_combined | 0=none; 1=flip_y; 2=flip_x; 3=both | ## Example of visualizing a chunk ``` import matplotlib.pyplot as plt plt.imshow(block_ids, cmap="tab20") # or cmap="nipy_spectral" plt.title("Block ID map of the chunk") plt.show() ``` ## License and source - Level data: from open Geometry Dash (2013-2025). - Parser code and structure: Kuzheren (actually, not quite), 2025. - Use freely for ML research and gamedev prototyping! ## Feedback Questions, suggestions and bug reports: via HuggingFace Issues or at [github.com/kuzheren/gdparse](https://github.com/kuzheren/gdparse)
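As a hedged illustration (not part of the card above) of the DataLoader note, which suggests using `valid_mask` as an attention/loss mask for transformer or DiT training, here is a minimal sketch; it assumes PyTorch is installed and uses a stand-in loss purely for demonstration.

```python
# Minimal sketch: turn valid_mask into a mask that zeroes out padded chunks in a loss.
import h5py
import torch

with h5py.File("gd_dataset_chunked_part_1.h5", "r") as hf:
    chunks = torch.from_numpy(hf["chunk_data"][:4])  # (B, S, H, W, F), already padded
    mask = torch.from_numpy(hf["valid_mask"][:4])    # (B, S) bool, True = real chunk

tokens = chunks.flatten(start_dim=2).float()         # one flat "token" per chunk
per_chunk_loss = tokens.pow(2).mean(dim=-1)          # stand-in per-chunk loss, shape (B, S)
loss = (per_chunk_loss * mask).sum() / mask.sum()    # padded chunks do not contribute
print(loss)
```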

Dataset Card for Hugging Face Hub Dataset Cards

This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about the datasets they document. The dataset is updated on a daily basis and covers the publicly available datasets on the Hugging Face Hub.

This dataset is made available to support users who want to work with a large number of dataset cards from the Hub. We hope it will support research on dataset cards and their use, but its format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.

Dataset Details

Uses

There are a number of potential uses for this dataset including:

  • text mining to find common themes in dataset cards (see the sketch after this list)
  • analysis of the dataset card format/content
  • topic modelling of dataset cards
  • training language models on the dataset cards
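A minimal sketch of the text-mining use above, assuming the card body is stored in a column named `card` (check `column_names` if your copy differs):

```python
# Minimal sketch: load the dataset and count frequent terms across card texts.
from collections import Counter

from datasets import load_dataset

dsd = load_dataset("librarian-bots/dataset_cards_with_metadata")
ds = dsd[list(dsd.keys())[0]]     # the dataset has a single split
print(ds.column_names)            # verify the actual column names

counter = Counter()
for card_text in ds["card"]:      # "card" column name is an assumption; adjust if needed
    counter.update(word.lower() for word in card_text.split() if word.isalpha())

print(counter.most_common(20))    # a crude view of common themes across dataset cards
```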

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.

Dataset Creation

Curation Rationale

The dataset was created to assist people working with dataset cards, and in particular to support research in the area of dataset cards and their use. It is also possible to use the Hugging Face Hub API or client library to download dataset cards directly; that option may be preferable if you have a very specific use case or require a different format.
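For example, a minimal sketch of that alternative route with the `huggingface_hub` client, fetching the card of one of the repositories listed above (any dataset repo id works the same way):

```python
from huggingface_hub import DatasetCard, hf_hub_download

# Parse the card (YAML metadata + markdown body) of a single dataset repo:
card = DatasetCard.load("VisualSphinx/VisualSphinx-V1-Rules")
print(card.data)        # structured YAML front matter
print(card.text[:500])  # beginning of the markdown body

# Or simply download the raw README.md file:
readme_path = hf_hub_download(
    repo_id="VisualSphinx/VisualSphinx-V1-Rules",
    filename="README.md",
    repo_type="dataset",
)
```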

Source Data

The source data consists of the README.md files of datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be present in the dataset repository.

Data Collection and Processing

The data is downloaded using a CRON job on a daily basis.

Who are the source data producers?

The source data producers are the creators of the dataset cards on the Hugging Face Hub. They include a broad range of contributors, from large companies to individual researchers. We do not gather any information about who created a dataset card in this repository, although this information can be obtained from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the dataset card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, some may. Dataset cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Dataset cards are created by the community and we do not have any control over their content. We do not review the content of the dataset cards and make no claims about the accuracy of the information they contain. Some dataset cards discuss bias themselves, sometimes by providing examples of bias in the data they describe. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information is needed for further recommendations.

Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
