Dataset Viewer

url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[ms]) | updated_at (timestamp[ms]) | closed_at (timestamp[ms]) | author_association (string) | type (null) | active_lock_reason (null) | sub_issues_summary (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
https://api.github.com/repos/huggingface/datasets/issues/7590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7590/comments | https://api.github.com/repos/huggingface/datasets/issues/7590/events | https://github.com/huggingface/datasets/issues/7590 | 3,101,654,892 | I_kwDODunzps64339s | 7,590 | `ArrowNotImplementedError: Unsupported cast from list<item:struct<…>> to struct` when loading nested `Sequence(Features)` JSONL | {
"login": "AHS-uni",
"id": 183279820,
"node_id": "U_kgDOCuygzA",
"avatar_url": "https://avatars.githubusercontent.com/u/183279820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AHS-uni",
"html_url": "https://github.com/AHS-uni",
"followers_url": "https://api.github.com/users/AHS-uni/followers",
"following_url": "https://api.github.com/users/AHS-uni/following{/other_user}",
"gists_url": "https://api.github.com/users/AHS-uni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AHS-uni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AHS-uni/subscriptions",
"organizations_url": "https://api.github.com/users/AHS-uni/orgs",
"repos_url": "https://api.github.com/users/AHS-uni/repos",
"events_url": "https://api.github.com/users/AHS-uni/events{/privacy}",
"received_events_url": "https://api.github.com/users/AHS-uni/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the description, this seems like an inconsistency with expected behavior.\n\nIf confirmed, I’d be happy to take a shot at investigating and potentially submitting a fix.\n\nAlso looping in @AHS-uni — could you kindly share a minimal JSONL example that reproduces this?\n\nThanks!"
] | 2025-05-29T22:53:36 | 2025-05-30T09:02:12 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
When loading a JSONL dataset with a top-level `tags` field defined as a list of structs (via `Sequence(Features({...}))`), Hugging Face Datasets still infers `tags` as a single `struct<…>`, and then PyArrow throws:
```
ArrowNotImplementedError: Unsupported cast from list<item: struct<…>> to struct using function cast_struct
```
even when the full nested `Features` schema is passed to `load_dataset(..., features=…)` and `streaming=False`. This prevents loading any entries where `tags` has more than one element (or is empty).
### Steps to reproduce the bug
[Colab Link](https://colab.research.google.com/drive/1FZPQy6TP3jVd4B3mYKyfQaWNuOAvljUq#scrollTo=qkigsmEZLrnY)
### Expected behavior
The `tags` field should be recognized as `list<struct<name,target,comment>>`, and both empty lists (`[]`) and multi-element lists should load without casting errors.
### Environment info
* **datasets** version: `3.6.0`
* **pyarrow** version: `20.0.0`
* **Python**: 3.12.10
* **OS**: Ubuntu 24.04.02 LTS | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7590/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7589/comments | https://api.github.com/repos/huggingface/datasets/issues/7589/events | https://github.com/huggingface/datasets/pull/7589 | 3,101,119,704 | PR_kwDODunzps6YKiyL | 7,589 | feat: use content defined chunking | {
"login": "kszucs",
"id": 961747,
"node_id": "MDQ6VXNlcjk2MTc0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/961747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kszucs",
"html_url": "https://github.com/kszucs",
"followers_url": "https://api.github.com/users/kszucs/followers",
"following_url": "https://api.github.com/users/kszucs/following{/other_user}",
"gists_url": "https://api.github.com/users/kszucs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kszucs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kszucs/subscriptions",
"organizations_url": "https://api.github.com/users/kszucs/orgs",
"repos_url": "https://api.github.com/users/kszucs/repos",
"events_url": "https://api.github.com/users/kszucs/events{/privacy}",
"received_events_url": "https://api.github.com/users/kszucs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7589). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-29T18:19:41 | 2025-05-30T11:09:19 | null | COLLABORATOR | null | null | null | WIP:
- [x] set the parameters in `io.parquet.ParquetDatasetReader`
- [x] set the parameters in `arrow_writer.ParquetWriter`
It requires a new pyarrow pin (`>=21.0.0`), which is not yet released. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7589/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7589/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7589",
"html_url": "https://github.com/huggingface/datasets/pull/7589",
"diff_url": "https://github.com/huggingface/datasets/pull/7589.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7589.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7588/comments | https://api.github.com/repos/huggingface/datasets/issues/7588/events | https://github.com/huggingface/datasets/issues/7588 | 3,094,012,025 | I_kwDODunzps64auB5 | 7,588 | ValueError: Invalid pattern: '**' can only be an entire path component [Colab] | {
"login": "wkambale",
"id": 43061081,
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wkambale",
"html_url": "https://github.com/wkambale",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"repos_url": "https://api.github.com/users/wkambale/repos",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```",
"```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```",
"Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps",
"thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.",
"Very helpful, thank you!"
] | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that I've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
Now I changed a few hyperparameters to increase the number of tokens for the model, increase the Transformer layers, and so on.
However, when I try to load the dataset, this error keeps coming up. I have tried everything; I have re-written the code a hundred times, and it keeps coming up.
### Steps to reproduce the bug
Imports:
```bash
!pip install datasets huggingface_hub fsspec
```
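Per the resolution reported in the comments below, this error pattern comes from an outdated `datasets` (2.14.4 in this environment), and updating resolves it. A pinned install cell along those lines might look like this; the exact pins are illustrative, taken from the versions reported in this thread:

```shell
# Update datasets (>= 3.6.0 fixed it, per the comments) and keep a recent fsspec.
pip install -U "datasets>=3.6.0" "fsspec==2025.3.2" huggingface_hub
```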
Python code:
```python
from datasets import load_dataset
HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus"
# Load the dataset
try:
if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME":
raise ValueError(
"Please provide a valid Hugging Face dataset name."
)
dataset = load_dataset(HF_DATASET_NAME)
# Omitted code as the error happens on the line above
except ValueError as ve:
print(f"Configuration Error: {ve}")
raise
except Exception as e:
print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}")
raise e
```
now, i have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps
### Expected behavior
Loading the dataset successfully and performing splits (train, test, validation).
### Environment info
From the imports, I do not install specific versions of these libraries, so the latest available version is installed.
* `datasets` version: latest
* `Platform`: Google Colab
* `Hardware`: NVIDIA A100 GPU
* `Python` version: latest
* `huggingface_hub` version: latest
* `fsspec` version: latest | {
"login": "wkambale",
"id": 43061081,
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wkambale",
"html_url": "https://github.com/wkambale",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"repos_url": "https://api.github.com/users/wkambale/repos",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7588/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7587/comments | https://api.github.com/repos/huggingface/datasets/issues/7587/events | https://github.com/huggingface/datasets/pull/7587 | 3,091,834,987 | PR_kwDODunzps6XrB8F | 7,587 | load_dataset splits typing | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-26T18:28:40 | 2025-05-26T18:31:10 | 2025-05-26T18:29:57 | MEMBER | null | null | null | close https://github.com/huggingface/datasets/issues/7583 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7587/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7587",
"html_url": "https://github.com/huggingface/datasets/pull/7587",
"diff_url": "https://github.com/huggingface/datasets/pull/7587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7587.patch",
"merged_at": "2025-05-26T18:29:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7586/comments | https://api.github.com/repos/huggingface/datasets/issues/7586/events | https://github.com/huggingface/datasets/issues/7586 | 3,091,320,431 | I_kwDODunzps64Qc5v | 7,586 | help is appreciated | {
"login": "rajasekarnp1",
"id": 54931785,
"node_id": "MDQ6VXNlcjU0OTMxNzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/54931785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajasekarnp1",
"html_url": "https://github.com/rajasekarnp1",
"followers_url": "https://api.github.com/users/rajasekarnp1/followers",
"following_url": "https://api.github.com/users/rajasekarnp1/following{/other_user}",
"gists_url": "https://api.github.com/users/rajasekarnp1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajasekarnp1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajasekarnp1/subscriptions",
"organizations_url": "https://api.github.com/users/rajasekarnp1/orgs",
"repos_url": "https://api.github.com/users/rajasekarnp1/repos",
"events_url": "https://api.github.com/users/rajasekarnp1/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajasekarnp1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"how is this related to this repository ?"
] | 2025-05-26T14:00:42 | 2025-05-26T18:21:57 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main
### Motivation
AI model development and audio
### Your contribution
AI model development and audio | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7586/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7585/comments | https://api.github.com/repos/huggingface/datasets/issues/7585/events | https://github.com/huggingface/datasets/pull/7585 | 3,091,227,921 | PR_kwDODunzps6Xo-Tw | 7,585 | Avoid multiple default config names | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7585). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-26T13:27:59 | 2025-05-26T13:38:19 | null | MEMBER | null | null | null | Fix duplicating default config names.
Currently, when calling `push_to_hub(set_default=True)` with two different config names, both are set as default.
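The duplicate-default state can be sketched with a simplified pure-Python stand-in for the check; the real logic lives in `MetadataConfigs.get_default_config_name` (linked below), and the dict shape used here is illustrative, not the actual metadata schema.

```python
from typing import Optional

def get_default_config_name(metadata_configs: dict) -> Optional[str]:
    # Simplified stand-in: collect every config flagged as default and
    # reject the metadata if there is more than one.
    default_config_names = [
        name for name, cfg in metadata_configs.items() if cfg.get("default")
    ]
    if len(default_config_names) > 1:
        raise ValueError(
            f"There should be at most one default config, got {default_config_names}"
        )
    return default_config_names[0] if default_config_names else None

# After two push_to_hub calls with set_default=True and different config
# names, the metadata ends up with two defaults and the next push fails:
configs = {"first": {"default": True}, "second": {"default": True}}
try:
    get_default_config_name(configs)
except ValueError as err:
    print(err)
```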
Moreover, this will generate an error next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`:
https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/arrow_dataset.py#L5757
https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/utils/metadata.py#L186-L188 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7585/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7585",
"html_url": "https://github.com/huggingface/datasets/pull/7585",
"diff_url": "https://github.com/huggingface/datasets/pull/7585.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7585.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7584/comments | https://api.github.com/repos/huggingface/datasets/issues/7584/events | https://github.com/huggingface/datasets/issues/7584 | 3,090,255,023 | I_kwDODunzps64MYyv | 7,584 | Add LMDB format support | {
"login": "trotsky1997",
"id": 30512160,
"node_id": "MDQ6VXNlcjMwNTEyMTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/30512160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trotsky1997",
"html_url": "https://github.com/trotsky1997",
"followers_url": "https://api.github.com/users/trotsky1997/followers",
"following_url": "https://api.github.com/users/trotsky1997/following{/other_user}",
"gists_url": "https://api.github.com/users/trotsky1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trotsky1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trotsky1997/subscriptions",
"organizations_url": "https://api.github.com/users/trotsky1997/orgs",
"repos_url": "https://api.github.com/users/trotsky1997/repos",
"events_url": "https://api.github.com/users/trotsky1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/trotsky1997/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?"
] | 2025-05-26T07:10:13 | 2025-05-26T18:23:37 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
Add LMDB format support for large memory-mapped files
### Motivation
Add LMDB format support for large memory-mapped files
### Your contribution
I'm trying to add it | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7584/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7583/comments | https://api.github.com/repos/huggingface/datasets/issues/7583/events | https://github.com/huggingface/datasets/issues/7583 | 3,088,987,757 | I_kwDODunzps64HjZt | 7,583 | load_dataset type stubs reject List[str] for split parameter, but runtime supports it | {
"login": "hierr",
"id": 25069969,
"node_id": "MDQ6VXNlcjI1MDY5OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hierr",
"html_url": "https://github.com/hierr",
"followers_url": "https://api.github.com/users/hierr/followers",
"following_url": "https://api.github.com/users/hierr/following{/other_user}",
"gists_url": "https://api.github.com/users/hierr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hierr/subscriptions",
"organizations_url": "https://api.github.com/users/hierr/orgs",
"repos_url": "https://api.github.com/users/hierr/repos",
"events_url": "https://api.github.com/users/hierr/events{/privacy}",
"received_events_url": "https://api.github.com/users/hierr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime; however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime.
### Steps to reproduce the bug
1. Use `load_dataset` with multiple splits, e.g.:
```
from datasets import load_dataset
ds_train, ds_val, ds_test = load_dataset(
"Silly-Machine/TuPyE-Dataset",
"binary",
split=["train[:75%]", "train[75%:]", "test"]
)
```
2. Observe that the code executes correctly at runtime while Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"`
### Expected behavior
The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior.
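The suggested widening can be sketched as follows. This is a toy model, not the actual `datasets` source: `Split` is a stand-in class to keep the snippet self-contained, and `load_dataset_stub` only imitates the fan-out behavior of the real function.

```python
from typing import List, Optional, Union

class Split(str):
    """Stand-in for datasets.Split, just to make the sketch self-contained."""

# The widened annotation proposed above; the real fix belongs in the
# load_dataset signature in the datasets source tree.
SplitSpec = Optional[Union[str, Split, List[str]]]

def load_dataset_stub(path: str, name: Optional[str] = None, split: SplitSpec = None):
    # Imitates the runtime behavior: a list of split specs returns one
    # dataset placeholder per spec, a single spec returns one placeholder.
    if isinstance(split, list):
        return [f"{path}:{spec}" for spec in split]
    return f"{path}:{split}"

# With the widened annotation, the call from the reproduction type-checks:
train, val, test = load_dataset_stub(
    "Silly-Machine/TuPyE-Dataset",
    "binary",
    split=["train[:75%]", "train[75%:]", "test"],
)
```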
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.32.0
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7583/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7582/comments | https://api.github.com/repos/huggingface/datasets/issues/7582/events | https://github.com/huggingface/datasets/pull/7582 | 3,083,515,643 | PR_kwDODunzps6XPIt7 | 7,582 | fix: Add embed_storage in Pdf feature | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-22T14:06:29 | 2025-05-22T14:17:38 | 2025-05-22T14:17:36 | CONTRIBUTOR | null | null | null | Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image) | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7582/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7582",
"html_url": "https://github.com/huggingface/datasets/pull/7582",
"diff_url": "https://github.com/huggingface/datasets/pull/7582.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7582.patch",
"merged_at": "2025-05-22T14:17:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7581/comments | https://api.github.com/repos/huggingface/datasets/issues/7581/events | https://github.com/huggingface/datasets/pull/7581 | 3,083,080,413 | PR_kwDODunzps6XNpm0 | 7,581 | Add missing property on `RepeatExamplesIterable` | {
"login": "SilvanCodes",
"id": 42788329,
"node_id": "MDQ6VXNlcjQyNzg4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/42788329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SilvanCodes",
"html_url": "https://github.com/SilvanCodes",
"followers_url": "https://api.github.com/users/SilvanCodes/followers",
"following_url": "https://api.github.com/users/SilvanCodes/following{/other_user}",
"gists_url": "https://api.github.com/users/SilvanCodes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SilvanCodes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SilvanCodes/subscriptions",
"organizations_url": "https://api.github.com/users/SilvanCodes/orgs",
"repos_url": "https://api.github.com/users/SilvanCodes/repos",
"events_url": "https://api.github.com/users/SilvanCodes/events{/privacy}",
"received_events_url": "https://api.github.com/users/SilvanCodes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-22T11:41:07 | 2025-05-22T11:41:07 | null | NONE | null | null | null | Fixes #7561 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7581/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7581",
"html_url": "https://github.com/huggingface/datasets/pull/7581",
"diff_url": "https://github.com/huggingface/datasets/pull/7581.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7581.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7580/comments | https://api.github.com/repos/huggingface/datasets/issues/7580/events | https://github.com/huggingface/datasets/issues/7580 | 3,082,993,027 | I_kwDODunzps63wr2D | 7,580 | Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False. | {
"login": "s3pi",
"id": 48768216,
"node_id": "MDQ6VXNlcjQ4NzY4MjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/48768216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s3pi",
"html_url": "https://github.com/s3pi",
"followers_url": "https://api.github.com/users/s3pi/followers",
"following_url": "https://api.github.com/users/s3pi/following{/other_user}",
"gists_url": "https://api.github.com/users/s3pi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s3pi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s3pi/subscriptions",
"organizations_url": "https://api.github.com/users/s3pi/orgs",
"repos_url": "https://api.github.com/users/s3pi/repos",
"events_url": "https://api.github.com/users/s3pi/events{/privacy}",
"received_events_url": "https://api.github.com/users/s3pi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !"
] | 2025-05-22T11:08:16 | 2025-05-26T18:40:31 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call.
This behavior leads to unnecessary bandwidth usage and longer download times, especially for large datasets, even if the user only intends to use a single split.
### Steps to reproduce the bug
```python
dataset_name = "skbose/indian-english-nptel-v0"
dataset = load_dataset(dataset_name, token=hf_token, split="test")
```
### Expected behavior
Optimize the download logic so that only the required split is downloaded when streaming=False and a specific split is provided.
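Until such an optimization lands, one workaround consistent with this report is to stream the split instead, which skips `download_and_prepare()` and only reads the files backing the requested split. A minimal sketch (the dataset name below is the one from this report; error handling omitted):

```python
def iter_split_streaming(dataset_name, split):
    """Lazily yield examples from one split without materializing the others."""
    from datasets import load_dataset  # imported lazily so the sketch stays self-contained

    ds = load_dataset(dataset_name, split=split, streaming=True)
    for example in ds:
        yield example

# Usage (nothing is fetched until the generator is iterated):
# rows = iter_split_streaming("skbose/indian-english-nptel-v0", "test")
```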
### Environment info
- Dataset: skbose/indian-english-nptel-v0
- Platform: M1 Apple Silicon
- Python version: 3.12.9
- datasets>=3.5.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7580/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7579/comments | https://api.github.com/repos/huggingface/datasets/issues/7579/events | https://github.com/huggingface/datasets/pull/7579 | 3,081,849,022 | PR_kwDODunzps6XJerX | 7,579 | Fix typos in PDF and Video documentation | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7579). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-22T02:27:40 | 2025-05-22T12:53:49 | 2025-05-22T12:53:47 | CONTRIBUTOR | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7579/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7579",
"html_url": "https://github.com/huggingface/datasets/pull/7579",
"diff_url": "https://github.com/huggingface/datasets/pull/7579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7579.patch",
"merged_at": "2025-05-22T12:53:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7577/comments | https://api.github.com/repos/huggingface/datasets/issues/7577/events | https://github.com/huggingface/datasets/issues/7577 | 3,080,833,740 | I_kwDODunzps63ocrM | 7,577 | arrow_schema is not compatible with list | {
"login": "jonathanshen-upwork",
"id": 164412025,
"node_id": "U_kgDOCcy6eQ",
"avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanshen-upwork",
"html_url": "https://github.com/jonathanshen-upwork",
"followers_url": "https://api.github.com/users/jonathanshen-upwork/followers",
"following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs",
"repos_url": "https://api.github.com/users/jonathanshen-upwork/repos",
"events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, I'll look into it",
"Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind",
"Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks."
] | 2025-05-21T16:37:01 | 2025-05-26T18:49:51 | 2025-05-26T18:32:55 | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
^^^^^^^^^
File "datasets/features/features.py", line 1815, in type
return get_nested_type(self)
^^^^^^^^^^^^^^^^^^^^^
File "datasets/features/features.py", line 1252, in get_nested_type
return pa.struct(
^^^^^^^^^^
File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct
File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field
File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type
TypeError: DataType expected, got <class 'list'>
```
The following works
```
f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))})
```
### Expected behavior
according to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765 python list should be a valid type specification for features
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7577/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7576/comments | https://api.github.com/repos/huggingface/datasets/issues/7576/events | https://github.com/huggingface/datasets/pull/7576 | 3,080,450,538 | PR_kwDODunzps6XEuMz | 7,576 | Fix regex library warnings | {
"login": "emmanuel-ferdman",
"id": 35470921,
"node_id": "MDQ6VXNlcjM1NDcwOTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/35470921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emmanuel-ferdman",
"html_url": "https://github.com/emmanuel-ferdman",
"followers_url": "https://api.github.com/users/emmanuel-ferdman/followers",
"following_url": "https://api.github.com/users/emmanuel-ferdman/following{/other_user}",
"gists_url": "https://api.github.com/users/emmanuel-ferdman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emmanuel-ferdman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emmanuel-ferdman/subscriptions",
"organizations_url": "https://api.github.com/users/emmanuel-ferdman/orgs",
"repos_url": "https://api.github.com/users/emmanuel-ferdman/repos",
"events_url": "https://api.github.com/users/emmanuel-ferdman/events{/privacy}",
"received_events_url": "https://api.github.com/users/emmanuel-ferdman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-21T14:31:58 | 2025-05-21T14:31:58 | null | NONE | null | null | null | # PR Summary
This small PR resolves the regex library warnings that appear starting with Python 3.11:
```python
DeprecationWarning: 'count' is passed as positional argument
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7576/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7576",
"html_url": "https://github.com/huggingface/datasets/pull/7576",
"diff_url": "https://github.com/huggingface/datasets/pull/7576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7576.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7575/comments | https://api.github.com/repos/huggingface/datasets/issues/7575/events | https://github.com/huggingface/datasets/pull/7575 | 3,080,228,718 | PR_kwDODunzps6XD9gM | 7,575 | [MINOR:TYPO] Update save_to_disk docstring | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-21T13:22:24 | 2025-05-21T13:22:24 | null | CONTRIBUTOR | null | null | null | r/hub/filesystem in save_to_disk | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7575/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7575",
"html_url": "https://github.com/huggingface/datasets/pull/7575",
"diff_url": "https://github.com/huggingface/datasets/pull/7575.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7575.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7574/comments | https://api.github.com/repos/huggingface/datasets/issues/7574/events | https://github.com/huggingface/datasets/issues/7574 | 3,079,641,072 | I_kwDODunzps63j5fw | 7,574 | Missing multilingual directions in IWSLT2017 dataset's processing script | {
"login": "andy-joy-25",
"id": 79297451,
"node_id": "MDQ6VXNlcjc5Mjk3NDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79297451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andy-joy-25",
"html_url": "https://github.com/andy-joy-25",
"followers_url": "https://api.github.com/users/andy-joy-25/followers",
"following_url": "https://api.github.com/users/andy-joy-25/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-joy-25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andy-joy-25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-joy-25/subscriptions",
"organizations_url": "https://api.github.com/users/andy-joy-25/orgs",
"repos_url": "https://api.github.com/users/andy-joy-25/repos",
"events_url": "https://api.github.com/users/andy-joy-25/events{/privacy}",
"received_events_url": "https://api.github.com/users/andy-joy-25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue",
"cool ! I pinged the owners of the dataset on HF to merge your PRs :)"
] | 2025-05-21T09:53:17 | 2025-05-26T18:36:38 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Hi,
When using `iwslt2017.py` from `IWSLT/iwslt2017` on the Hub to load the datasets, I am unable to obtain the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de`. These 6 pairs do not show up when using `get_dataset_config_names()` to list all the configs present in `IWSLT/iwslt2017`. This should not be the case: as mentioned in the original paper (see https://aclanthology.org/2017.iwslt-1.1.pdf), the authors specify that "_this year we proposed the multilingual translation between any pair of languages from {Dutch, English, German, Italian, Romanian}..._", and these datasets are indeed present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`.
Best Regards,
Anand
### Steps to reproduce the bug
Check the output of `get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)`: only 24 language pairs are present and the following 6 config names are absent: `iwslt2017-de-it`, `iwslt2017-de-ro`, `iwslt2017-de-nl`, `iwslt2017-it-de`, `iwslt2017-nl-de`, and `iwslt2017-ro-de`.
### Expected behavior
The aforementioned 6 language pairs should also be present and hence, all these 6 language pairs' IWSLT2017 datasets must also be available for further use.
I would suggest removing `de` from the `BI_LANGUAGES` list and moving it over to the `MULTI_LANGUAGES` list instead in `iwslt2017.py` to account for all 6 missing language pairs. The same `de-en` dataset is present in both `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip` and `data/2017-01-trnted/texts/de/en/de-en.zip`, but the `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` datasets are only present in the former, so it's unclear why the comment _`# XXX: Artificially removed DE from here, as it also exists within bilingual data`_ was added as `L71` in `iwslt2017.py`. The `README.md` file in `IWSLT/iwslt2017` must then be re-created using `datasets-cli test path/to/iwslt2017.py --save_info --all_configs` to pass all split size verification checks for the 6 new language pairs, which were previously non-existent.
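The six directions in question can be derived mechanically from the five multilingual languages, which makes a quick sanity check possible (pure standard library; the `iwslt2017-` config-name prefix is dropped for brevity):

```python
from itertools import permutations

multilingual = ["de", "en", "it", "nl", "ro"]
expected = {f"{a}-{b}" for a, b in permutations(multilingual, 2)}  # 20 directed pairs

# Directions reported missing: every pair involving "de" except de-en / en-de,
# which also ships as a separate bilingual archive.
present_with_de = {"de-en", "en-de"}
missing = {p for p in expected if "de" in p.split("-")} - present_with_de

print(sorted(missing))  # ['de-it', 'de-nl', 'de-ro', 'it-de', 'nl-de', 'ro-de']
```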
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.30.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7574/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7573/comments | https://api.github.com/repos/huggingface/datasets/issues/7573/events | https://github.com/huggingface/datasets/issues/7573 | 3,076,415,382 | I_kwDODunzps63Xl-W | 7,573 | No Samsum dataset | {
"login": "IgorKasianenko",
"id": 17688220,
"node_id": "MDQ6VXNlcjE3Njg4MjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IgorKasianenko",
"html_url": "https://github.com/IgorKasianenko",
"followers_url": "https://api.github.com/users/IgorKasianenko/followers",
"following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}",
"gists_url": "https://api.github.com/users/IgorKasianenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IgorKasianenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IgorKasianenko/subscriptions",
"organizations_url": "https://api.github.com/users/IgorKasianenko/orgs",
"repos_url": "https://api.github.com/users/IgorKasianenko/repos",
"events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/IgorKasianenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-20T09:54:35 | 2025-05-20T09:54:35 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
The https://huggingface.co/datasets/Samsung/samsum dataset is not found (error 404).
Originated from https://github.com/meta-llama/llama-cookbook/issues/948
### Steps to reproduce the bug
Go to https://huggingface.co/datasets/Samsung/samsum and see the error.
Downloading it with Python also throws:
```
Couldn't find 'Samsung/samsum' on the Hugging Face Hub either: FileNotFoundError: Samsung/samsum@f00baf5a7d4abfec6820415493bcb52c587788e6/samsum.py (repository not found)
```
### Expected behavior
Dataset exists
### Environment info
```
- `datasets` version: 3.2.0
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.2
- `huggingface_hub` version: 0.26.5
- PyArrow version: 16.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7573/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7572/comments | https://api.github.com/repos/huggingface/datasets/issues/7572/events | https://github.com/huggingface/datasets/pull/7572 | 3,074,529,251 | PR_kwDODunzps6WwsZB | 7,572 | Fixed typos | {
"login": "TopCoder2K",
"id": 47208659,
"node_id": "MDQ6VXNlcjQ3MjA4NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TopCoder2K",
"html_url": "https://github.com/TopCoder2K",
"followers_url": "https://api.github.com/users/TopCoder2K/followers",
"following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}",
"gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions",
"organizations_url": "https://api.github.com/users/TopCoder2K/orgs",
"repos_url": "https://api.github.com/users/TopCoder2K/repos",
"events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}",
"received_events_url": "https://api.github.com/users/TopCoder2K/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq, mentioning in case you haven't seen this PR. The contribution is very small and easy to check :)"
] | 2025-05-19T17:16:59 | 2025-05-22T18:10:26 | null | CONTRIBUTOR | null | null | null | More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781). | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7572/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7572",
"html_url": "https://github.com/huggingface/datasets/pull/7572",
"diff_url": "https://github.com/huggingface/datasets/pull/7572.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7572.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7571/comments | https://api.github.com/repos/huggingface/datasets/issues/7571/events | https://github.com/huggingface/datasets/pull/7571 | 3,074,116,942 | PR_kwDODunzps6WvRqi | 7,571 | fix string_to_dict test | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-19T14:49:23 | 2025-05-19T14:52:24 | 2025-05-19T14:49:28 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7571/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7571",
"html_url": "https://github.com/huggingface/datasets/pull/7571",
"diff_url": "https://github.com/huggingface/datasets/pull/7571.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7571.patch",
"merged_at": "2025-05-19T14:49:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7570/comments | https://api.github.com/repos/huggingface/datasets/issues/7570/events | https://github.com/huggingface/datasets/issues/7570 | 3,065,966,529 | I_kwDODunzps62vu_B | 7,570 | Dataset lib seems to be broken after fsspec lib update | {
"login": "sleepingcat4",
"id": 81933585,
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sleepingcat4",
"html_url": "https://github.com/sleepingcat4",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-15T11:45:06 | 2025-05-15T11:45:06 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Since today I am facing an issue where HF's `datasets` lib is acting weird and in some instances fails to recognise a valid dataset entirely. I think it is happening due to a recent change in the `fsspec` lib, as running this command fixed it for me one time: `!pip install -U datasets huggingface_hub fsspec`
### Steps to reproduce the bug
```python
from datasets import load_dataset

def download_hf():
    dataset_name = input("Enter the dataset name: ")
    subset_name = input("Enter subset name: ")
    ds = load_dataset(dataset_name, name=subset_name)
    for split in ds:
        ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)

download_hf()
```
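For triage of version-mismatch reports like this one, a small stdlib-only snippet (an illustrative addition, not part of the original report) can print the installed versions of the three packages involved:

```python
import importlib.metadata as md

def installed_versions(pkgs):
    # Report each package's installed version, or None if it is missing.
    out = {}
    for pkg in pkgs:
        try:
            out[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            out[pkg] = None
    return out

print(installed_versions(["datasets", "huggingface_hub", "fsspec"]))
```

Including this output in a bug report makes it easier to tell whether an `fsspec` upgrade is the culprit.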
### Expected behavior
```
Downloading readme: 100%
1.55k/1.55k [00:00<00:00, 121kB/s]
Downloading data files: 100%
1/1 [00:00<00:00, 2.06it/s]
Downloading data: 0%| | 0.00/54.2k [00:00<?, ?B/s]
Downloading data: 100%|██████████| 54.2k/54.2k [00:00<00:00, 121kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 35.17it/s]
Generating test split:
140/0 [00:00<00:00, 2628.62 examples/s]
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-2-12ab305b0e77>](https://localhost:8080/#) in <cell line: 0>()
8 ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)
9
---> 10 download_hf()
2 frames
[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1171 is_local = not is_remote_filesystem(self._fs)
1172 if not is_local:
-> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
1174 if not os.path.exists(self._output_dir):
1175 raise FileNotFoundError(
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
```
OR
```
Traceback (most recent call last):
File "e:\Fuck\download-data\mcq_dataset.py", line 10, in <module>
download_hf()
File "e:\Fuck\download-data\mcq_dataset.py", line 6, in download_hf
ds = load_dataset(dataset_name, name=subset_name)
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1917, in dataset_module_factory
raise e1 from None
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1867, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'dataset repo_id' doesn't exist on the Hub or cannot be accessed.
```
### Environment info
colab and 3.10 local system | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7570/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7569/comments | https://api.github.com/repos/huggingface/datasets/issues/7569/events | https://github.com/huggingface/datasets/issues/7569 | 3,061,234,054 | I_kwDODunzps62drmG | 7,569 | Dataset creation is broken if nesting a dict inside a dict inside a list | {
"login": "TimSchneider42",
"id": 25732590,
"node_id": "MDQ6VXNlcjI1NzMyNTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/25732590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimSchneider42",
"html_url": "https://github.com/TimSchneider42",
"followers_url": "https://api.github.com/users/TimSchneider42/followers",
"following_url": "https://api.github.com/users/TimSchneider42/following{/other_user}",
"gists_url": "https://api.github.com/users/TimSchneider42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimSchneider42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimSchneider42/subscriptions",
"organizations_url": "https://api.github.com/users/TimSchneider42/orgs",
"repos_url": "https://api.github.com/users/TimSchneider42/repos",
"events_url": "https://api.github.com/users/TimSchneider42/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimSchneider42/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! That's because Séquence is a type that comes from tensorflow datasets and inverts lists and focus when doing Séquence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```",
"Hi,\n\nThanks for the swift reply! Could you quickly clarify a couple of points?\n\n1. Is there any benefit in using Sequence over normal lists? Especially for longer lists (in my case, up to 256 entries)\n2. When exactly can I use Sequence? If there is a maximum of one level of dictionaries inside, then it's always fine?\n3. When creating the data in the generator, do I need to swap lists and dicts manually, or does that happen automatically?\n\nAlso, the documentation does not seem to mention this limitation of the Sequence type anywhere and encourages users to use it [here](https://huggingface.co/docs/datasets/en/about_dataset_features). In fact, I did not even know that just using a Python list was an option. Maybe the documentation can be improved to mention the limitations of Sequence and highlight that lists can be used instead.\n\nThanks a lot in advance!\n\nBest,\nTim"
] | 2025-05-13T21:06:45 | 2025-05-20T19:25:15 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Hey,
I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.
Best,
Tim
### Steps to reproduce the bug
Running this code:
```python
from datasets import Dataset, Features, Sequence, Value
def generator():
yield {
"a": [{"b": {"c": 0}}],
}
features = Features(
{
"a": Sequence(
feature={
"b": {
"c": Value("int32"),
},
},
length=1,
)
}
)
dataset = Dataset.from_generator(generator, features=features)
```
leads to
```
Generating train split: 1 examples [00:00, 540.85 examples/s]
Traceback (most recent call last):
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1635, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 657, in finalize
self.write_examples_on_file()
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 510, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 629, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 4851, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1608, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 399, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 1004, in pyarrow.lib.Array.cast
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/pyarrow/compute.py", line 405, in cast
return call_function("cast", [arr], options, memory_pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/_compute.pyx", line 598, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 393, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from fixed_size_list<item: struct<c: int32>>[1] to struct using function cast_struct
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/test/tools/hf_test2.py", line 23, in <module>
dataset = Dataset.from_generator(generator, features=features)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1114, in from_generator
).read()
^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/io/generator.py", line 49, in read
self.builder.download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1487, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
Process finished with exit code 1
```
### Expected behavior
I expected this code not to lead to an error.
I have done some digging and figured out that the problem seems to be the `get_nested_type` function in `features.py`, which, for whatever reason, flips Sequences and dicts whenever it encounters a dict inside of a sequence. This seems to be necessary, as disabling that flip leads to another error. However, by keeping that flip enabled for the highest level and disabling it for all subsequent levels, I was able to work around this problem. Specifically, by patching `get_nested_type` as follows, it works on the given example (emphasis on the `level` parameter I added):
```python
def get_nested_type(schema: FeatureType, level=0) -> pa.DataType:
"""
get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of
generate_from_arrow_type().
It performs double-duty as the implementation of Features.type and handles the conversion of
datasets.Feature->pa.struct
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, Features):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # Features is subclass of dict, and dict order is deterministic since Python 3.6
elif isinstance(schema, dict):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # however don't sort on struct types since the order matters
elif isinstance(schema, (list, tuple)):
if len(schema) != 1:
raise ValueError("When defining list feature, you should just provide one example of the inner type")
value_type = get_nested_type(schema[0], level = level + 1)
return pa.list_(value_type)
elif isinstance(schema, LargeList):
value_type = get_nested_type(schema.feature, level = level + 1)
return pa.large_list(value_type)
elif isinstance(schema, Sequence):
value_type = get_nested_type(schema.feature, level = level + 1)
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict) and level == 1:
data_type = pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type})
else:
data_type = pa.list_(value_type, schema.length)
return data_type
# Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods)
return schema()
```
I have honestly no idea what I am doing here, so this might produce other issues for different inputs.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
Also tested it with 3.5.0, same result. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7569/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7568/comments | https://api.github.com/repos/huggingface/datasets/issues/7568/events | https://github.com/huggingface/datasets/issues/7568 | 3,060,515,257 | I_kwDODunzps62a8G5 | 7,568 | `IterableDatasetDict.map()` call removes `column_names` (in fact info.features) | {
"login": "mombip",
"id": 7893763,
"node_id": "MDQ6VXNlcjc4OTM3NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7893763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mombip",
"html_url": "https://github.com/mombip",
"followers_url": "https://api.github.com/users/mombip/followers",
"following_url": "https://api.github.com/users/mombip/following{/other_user}",
"gists_url": "https://api.github.com/users/mombip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mombip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mombip/subscriptions",
"organizations_url": "https://api.github.com/users/mombip/orgs",
"repos_url": "https://api.github.com/users/mombip/repos",
"events_url": "https://api.github.com/users/mombip/events{/privacy}",
"received_events_url": "https://api.github.com/users/mombip/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the first rows to infer the dataset features.",
"Thank you. I understand that “IterableDataset doesn't know what's the output of the function”—that’s true, but:\n\nUnfortunately, the workaround you proposed **doesn’t solve** the problem. `ds.map()` is called multiple times by third-party code (i.e. `SFTTrainer`). To apply your approach, I would have to modify external library code. That’s why I decided to patch the _class_ rather than update `dataset` _objects_ (in fact, updating the object after `map()` was my initial approach, but then I realized I’m not the only one mapping an already-mapped dataset.)\n\nAs a user, I expected that after mapping I would get a new dataset with the correct column names. If, for some reason, that can’t be the default behavior, I would expect an argument—i.e. `auto_resolve_features: bool = False` — to control how my dataset is mapped if following mapping operation are called.\n\nIt’s also problematic that `column_names` are tied to `features`, which is even more confusing and forces you to inspect the source code to understand what’s going on.\n\n**New version of workaround:**\n```python\ndef patch_iterable_dataset_map():\n _orig_map = IterableDataset.map\n\n def _patched_map(self, *args, **kwargs):\n ds = _orig_map(self, *args, **kwargs)\n return ds._resolve_features()\n\n IterableDataset.map = _patched_map\n```",
"I see, maybe `.resolve_features()` should be called by default in this case in the SFTTrainer ? (or pass `features=` if the data processing always output the same features)\n\nWe can even support a new parameter `features=\"infer\"` if it would be comfortable to not use internal methods in SFTTrainer",
"I think most straightforward solution would be to reinitialize `features` from data after mapping if `feature` argument is not passed. I hink it is more intuitive behavior than just cleaning features. There is also problem in usage `.resolve_features()` in this context. I observed that it leads to `_head()` method execution and it then causes that 5 batches from dataset are iterated (`_head()` defaults to 5 batches). \nI'm not sure how it influences whole process. Are those 5 batches (in my case it's 5000 rows) used only to find `features`. Does final training/eval process \"see\" this items? How it affects IterableDataset state (current position)?",
"I checked the source code and while it indeed iterates on the first 5 rows. As a normal iteration, it does record the state in case you call `.state_dict()`, but it doesn't change the starting state. The starting state is always the beginning of the dataset, unless it is explicitly set with `.load_state_dict()`. To be clear, if you iterate on the dataset after `._resolve_features()`, it will start from the beginning of the dataset (or from a state you manually pass using `.load_state_dict()`)"
] | 2025-05-13T15:45:42 | 2025-05-19T12:09:48 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relies on `info.features`, it ends up broken (`None`).
**Reproduction**
1. Define an IterableDatasetDict with a non-None features schema.
2. `my_iterable_dataset_dict` contains a "text" column.
3. Call:
```Python
new_dict = my_iterable_dataset_dict.map(
function=my_fn,
with_indices=False,
batched=True,
batch_size=16,
)
```
4. Observe
```Python
new_dict["train"].info.features # {'text': Value(dtype='string', id=None)}
new_dict["train"].column_names # ['text']
```
5. Call:
```Python
new_dict = my_iterable_dataset_dict.map(
function=my_fn,
with_indices=False,
batched=True,
batch_size=16,
remove_columns=["foo"]
)
```
6. Observe:
```Python
new_dict["train"].info.features # → None
new_dict["train"].column_names # → None
```
7. Internally, in dataset_dict.py this loop omits features ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/dataset_dict.py#L2047C5-L2056C14)):
```Python
for split, dataset in self.items():
dataset_dict[split] = dataset.map(
function=function,
with_indices=with_indices,
input_columns=input_columns,
batched=batched,
batch_size=batch_size,
drop_last_batch=drop_last_batch,
remove_columns=remove_columns,
fn_kwargs=fn_kwargs,
# features omitted → defaults to None
)
```
8. Then inside `IterableDataset.map()` ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2619C1-L2622C37)) the correct `info.features` is replaced by `features`, which is None here:
```Python
info = self.info.copy()
info.features = features # features is None here
return IterableDataset(..., info=info, ...)
```
**Suggestion**
It looks like this replacement was added intentionally but maybe should be done only if `features` is `not None`.
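A self-contained sketch of that suggested guard, using stand-in classes rather than the real `datasets` internals (all names here are illustrative):

```python
# Stand-in for DatasetInfo; only the `features` attribute matters here.
class Info:
    def __init__(self, features):
        self.features = features

    def copy(self):
        return Info(self.features)

def map_info(info, features=None):
    # Proposed behavior: only overwrite the stored schema when the caller
    # actually supplies one, instead of unconditionally assigning None.
    new_info = info.copy()
    if features is not None:
        new_info.features = features
    return new_info

orig = Info({"text": "string"})
print(map_info(orig).features)                  # schema preserved when features=None
print(map_info(orig, {"x": "int32"}).features)  # explicit schema still wins
```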
**Workaround:**
`SFTTrainer` calls `dataset.map()` several times and then fails on `NoneType` when iterating `dataset.column_names`.
I decided to write this patch; it works for me.
```python
def patch_iterable_dataset_map():
_orig_map = IterableDataset.map
def _patched_map(self, *args, **kwargs):
if "features" not in kwargs or kwargs["features"] is None:
kwargs["features"] = self.info.features
return _orig_map(self, *args, **kwargs)
IterableDataset.map = _patched_map
```
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7568/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7567/comments | https://api.github.com/repos/huggingface/datasets/issues/7567/events | https://github.com/huggingface/datasets/issues/7567 | 3,058,308,538 | I_kwDODunzps62ShW6 | 7,567 | interleave_datasets seed with multiple workers | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?",
"here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_info()\n for i in range(10):\n yield {'value': i, 'worker_id': worker_info.id}\n\n\ndef main():\n ds = datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8))})\n ds = ds.shuffle(buffer_size=100, seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 8, 'worker_id': 0}\n1 {'value': 8, 'worker_id': 1}\n2 {'value': 8, 'worker_id': 2}\n3 {'value': 8, 'worker_id': 3}\n4 {'value': 8, 'worker_id': 4}\n5 {'value': 8, 'worker_id': 5}\n6 {'value': 8, 'worker_id': 6}\n7 {'value': 8, 'worker_id': 7}\n8 {'value': 9, 'worker_id': 0}\n9 {'value': 9, 'worker_id': 1}\n10 {'value': 9, 'worker_id': 2}\n11 {'value': 9, 'worker_id': 3}\n12 {'value': 9, 'worker_id': 4}\n13 {'value': 9, 'worker_id': 5}\n14 {'value': 9, 'worker_id': 6}\n15 {'value': 9, 'worker_id': 7}\n16 {'value': 5, 'worker_id': 0}\n17 {'value': 5, 'worker_id': 1}\n18 {'value': 5, 'worker_id': 2}\n19 {'value': 5, 'worker_id': 3}\n```",
"With `interleave_datasets`\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard, value):\n while True:\n yield {'value': value}\n\n\ndef main():\n ds = [\n datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8)), 'value': i})\n for i in range(10)\n ]\n ds = datasets.interleave_datasets(ds, probabilities=[1 / len(ds)] * len(ds), seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 9}\n1 {'value': 9}\n2 {'value': 9}\n3 {'value': 9}\n4 {'value': 9}\n5 {'value': 9}\n6 {'value': 9}\n7 {'value': 9}\n8 {'value': 3}\n9 {'value': 3}\n10 {'value': 3}\n11 {'value': 3}\n12 {'value': 3}\n13 {'value': 3}\n14 {'value': 3}\n15 {'value': 3}\n16 {'value': 9}\n17 {'value': 9}\n18 {'value': 9}\n19 {'value': 9}\n20 {'value': 9}\n21 {'value': 9}\n22 {'value': 9}\n23 {'value': 9}\n```",
"Same results after updating to datasets 3.6.0.",
"Ah my bad, `shuffle()` uses a global effective seed which is something like `seed + epoch`, which is used to do the same shards shuffle in each worker so that each worker have a non-overlapping set of shards:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2102-L2111\n\nI think we should take into account the `worker_id` in a local seed for the buffer right after this line:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2151-L2153\n\nlike adding a new step that would propagate in the examples iterables or something like that:\n\n```python\nex_iterable = ex_iterable.shift_rngs(value=worker_id)\n```\n\nis this something you'd like to explore ? contributions on this subject are very welcome",
"Potentially, but busy. If anyone wants to take this up please feel free to, otherwise I may or may not revisit when I have free time.\n\nFor what it's worth I got around this with\n\n```\n\nclass SeedGeneratorWithWorkerIterable(iterable_dataset._BaseExamplesIterable):\n \"\"\"ExamplesIterable that seeds the rng with worker id.\"\"\"\n\n def __init__(\n self,\n ex_iterable: iterable_dataset._BaseExamplesIterable,\n generator: np.random.Generator,\n rank: int = 0,\n ):\n \"\"\"Constructor.\"\"\"\n super().__init__()\n self.ex_iterable = ex_iterable\n self.generator = generator\n self.rank = rank\n\n def _init_state_dict(self) -> dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n return self._state_dict\n\n def __iter__(self):\n \"\"\"Data iterator.\"\"\"\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - self.rank\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\n generator = np.random.default_rng(effective_seed)\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n if self._state_dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n yield from iter(self.ex_iterable)\n\n def shuffle_data_sources(self, generator):\n \"\"\"Shuffle data sources.\"\"\"\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=generator, rank=self.rank)\n\n def shard_data_sources(self, num_shards: int, index: int, contiguous=True): # noqa: FBT002\n \"\"\"Shard data sources.\"\"\"\n ex_iterable = self.ex_iterable.shard_data_sources(num_shards, index, contiguous=contiguous)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=self.generator, rank=index)\n\n @property\n def is_typed(self):\n return self.ex_iterable.is_typed\n\n @property\n def features(self):\n return self.ex_iterable.features\n\n @property\n def num_shards(self) -> int:\n \"\"\"Number of shards.\"\"\"\n return 
self.ex_iterable.num_shards\n```"
] | 2025-05-12T22:38:27 | 2025-05-15T20:39:37 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers.
Should the seed be modulated with the worker id?
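A hedged sketch of what "modulating the seed with the worker id" could look like, using only the standard library (illustrative only; this is not the actual `datasets` implementation):

```python
import random

def worker_rng(base_seed: int, worker_id: int) -> random.Random:
    # Offset the global seed by the worker id so each dataloader worker
    # gets an independent but still reproducible sampling stream.
    return random.Random(base_seed + worker_id)

order_w0 = worker_rng(1234, 0).sample(range(10), 10)
order_w1 = worker_rng(1234, 1).sample(range(10), 10)
print(order_w0)
print(order_w1)
```

With a scheme like this, re-running a job with the same seed reproduces each worker's order, while different workers draw from distinct streams.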
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7567/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7566/comments | https://api.github.com/repos/huggingface/datasets/issues/7566/events | https://github.com/huggingface/datasets/issues/7566 | 3,055,279,344 | I_kwDODunzps62G9zw | 7,566 | terminate called without an active exception; Aborted (core dumped) | {
"login": "alexey-milovidov",
"id": 18581488,
"node_id": "MDQ6VXNlcjE4NTgxNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/18581488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexey-milovidov",
"html_url": "https://github.com/alexey-milovidov",
"followers_url": "https://api.github.com/users/alexey-milovidov/followers",
"following_url": "https://api.github.com/users/alexey-milovidov/following{/other_user}",
"gists_url": "https://api.github.com/users/alexey-milovidov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexey-milovidov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexey-milovidov/subscriptions",
"organizations_url": "https://api.github.com/users/alexey-milovidov/orgs",
"repos_url": "https://api.github.com/users/alexey-milovidov/repos",
"events_url": "https://api.github.com/users/alexey-milovidov/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexey-milovidov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-11T23:05:54 | 2025-05-11T23:05:54 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with an abort.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```
$ cat main.py
#!/usr/bin/env python3
from datasets import load_dataset
dataset = load_dataset('HuggingFaceFW/fineweb', split='train', streaming=True)
print(next(iter(dataset)))
```
3. `chmod +x main.py`
```
$ ./main.py
README.md: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 43.1k/43.1k [00:00<00:00, 7.04MB/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:05<00:00, 4859.26it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:00<00:00, 54773.56it/s]
{'text': "How AP reported in all formats from tornado-stricken regionsMarch 8, 2012\nWhen the first serious bout of tornadoes of 2012 blew through middle America in the middle of the night, they touched down in places hours from any AP bureau. Our closest video journalist was Chicago-based Robert Ray, who dropped his plans to travel to Georgia for Super Tuesday, booked several flights to the cities closest to the strikes and headed for the airport. He’d decide once there which flight to take.\nHe never got on board a plane. Instead, he ended up driving toward Harrisburg, Ill., where initial reports suggested a town was destroyed. That decision turned out to be a lucky break for the AP. Twice.\nRay was among the first journalists to arrive and he confirmed those reports -- in all formats. He shot powerful video, put victims on the phone with AP Radio and played back sound to an editor who transcribed the interviews and put the material on text wires. He then walked around the devastation with the Central Regional Desk on the line, talking to victims with the phone held so close that editors could transcribe his interviews in real time.\nRay also made a dramatic image of a young girl who found a man’s prosthetic leg in the rubble, propped it up next to her destroyed home and spray-painted an impromptu sign: “Found leg. Seriously.”\nThe following day, he was back on the road and headed for Georgia and a Super Tuesday date with Newt Gingrich’s campaign. The drive would take him through a stretch of the South that forecasters expected would suffer another wave of tornadoes.\nTo prevent running into THAT storm, Ray used his iPhone to monitor Doppler radar, zooming in on extreme cells and using Google maps to direct himself to safe routes. 
And then the journalist took over again.\n“When weather like that occurs, a reporter must seize the opportunity to get the news out and allow people to see, hear and read the power of nature so that they can take proper shelter,” Ray says.\nSo Ray now started to use his phone to follow the storms. He attached a small GoPro camera to his steering wheel in case a tornado dropped down in front of the car somewhere, and took video of heavy rain and hail with his iPhone. Soon, he spotted a tornado and the chase was on. He followed an unmarked emergency vehicle to Cleveland, Tenn., where he was first on the scene of the storm's aftermath.\nAgain, the tornadoes had struck in locations that were hours from the nearest AP bureau. Damage and debris, as well as a wickedly violent storm that made travel dangerous, slowed our efforts to get to the news. That wasn’t a problem in Tennessee, where our customers were well served by an all-formats report that included this text story.\n“CLEVELAND, Tenn. (AP) _ Fierce wind, hail and rain lashed Tennessee for the second time in three days, and at least 15 people were hospitalized Friday in the Chattanooga area.”\nThe byline? Robert Ray.\nFor being adept with technology, chasing after news as it literally dropped from the sky and setting a standard for all-formats reporting that put the AP ahead on the most competitive news story of the day, Ray wins this week’s $300 Best of the States prize.\n© 2013 The Associated Press. All rights reserved. Terms and conditions apply. 
See AP.org for details.", 'id': '<urn:uuid:d66bc6fe-8477-4adf-b430-f6a558ccc8ff>', 'dump': 'CC-MAIN-2013-20', 'url': 'http://%20jwashington@ap.org/Content/Press-Release/2012/How-AP-reported-in-all-formats-from-tornado-stricken-regions', 'date': '2013-05-18T05:48:54Z', 'file_path': 's3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz', 'language': 'en', 'language_score': 0.9721424579620361, 'token_count': 717}
terminate called without an active exception
Aborted (core dumped)
```
### Expected behavior
I'm not a proficient Python user, so it might be my own error, but even in that case, the error message should be better.
### Environment info
`Successfully installed datasets-3.6.0 dill-0.3.8 hf-xet-1.1.0 huggingface-hub-0.31.1 multiprocess-0.70.16 requests-2.32.3 xxhash-3.5.0`
```
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS"
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7566/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7565/comments | https://api.github.com/repos/huggingface/datasets/issues/7565/events | https://github.com/huggingface/datasets/pull/7565 | 3,051,731,207 | PR_kwDODunzps6VkFBm | 7,565 | add check if repo exists for dataset uploading | {
"login": "Samoed",
"id": 36135455,
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samoed",
"html_url": "https://github.com/Samoed",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"repos_url": "https://api.github.com/users/Samoed/repos",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7565). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-09T10:27:00 | 2025-05-19T16:03:17 | null | NONE | null | null | null | Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error:
`Too many requests for https://huggingface.co/datasets/repo/create`.
It seems that this issue occurs because the upload tries to recreate the repository every time a split is pushed. To resolve this, I've added a check so that if the repository already exists, it won't attempt to recreate it. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7565/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7565",
"html_url": "https://github.com/huggingface/datasets/pull/7565",
"diff_url": "https://github.com/huggingface/datasets/pull/7565.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7565.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7564/comments | https://api.github.com/repos/huggingface/datasets/issues/7564/events | https://github.com/huggingface/datasets/pull/7564 | 3,049,275,226 | PR_kwDODunzps6VczLS | 7,564 | Implementation of iteration over values of a column in an IterableDataset object | {
"login": "TopCoder2K",
"id": 47208659,
"node_id": "MDQ6VXNlcjQ3MjA4NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TopCoder2K",
"html_url": "https://github.com/TopCoder2K",
"followers_url": "https://api.github.com/users/TopCoder2K/followers",
"following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}",
"gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions",
"organizations_url": "https://api.github.com/users/TopCoder2K/orgs",
"repos_url": "https://api.github.com/users/TopCoder2K/repos",
"events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}",
"received_events_url": "https://api.github.com/users/TopCoder2K/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"A couple of questions:\r\n1. I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the `en_dataset`\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are `en_dataset` and \"your [???]\" typos? If so, I can fix them in this PR.\r\n2. Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?",
"Great ! and chained indexing was easy indeed, thanks :)\r\n\r\nregarding your questions:\r\n\r\n> I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the en_dataset\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are en_dataset and \"your [???]\" typos? If so, I can fix them in this PR.\r\n\r\nOh good catch, both should be fixed indeed. Feel free to open a new PR for those docs fixes\r\n\r\n> Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?\r\n\r\nYep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for `Dataset`",
"@lhoestq, thank you for the answers!\r\n\r\n> Yep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for Dataset\r\n\r\n👍, I'll try to add something.\r\n\r\nBy the way, do you have any ideas about why the CI pipelines have failed? Essentially, I've already encountered these problems [here](https://github.com/huggingface/datasets/issues/7381#issuecomment-2863421974).\r\nI think `check_code_quality` has failed due to the usage of `pre-commit`. The problem seems to be the old version of the ruff hook. I've tried `v0.11.8` (the one that was installed with `pip install -e \".[quality]\"`) and `pre-commit` seems to work like `make style` now. However, I don't have any ideas about `pyav` since I don't know what it is...",
"I've updated /stream and /access, please check the style and clarity. By the way, I would like to add `IterableDataset.skip` near `IterableDataset.take` to mimic [slicing](https://huggingface.co/docs/datasets/access/#slicing). What do you think?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7564). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-08T14:59:22 | 2025-05-19T12:15:02 | 2025-05-19T12:15:02 | CONTRIBUTOR | null | null | null | Refers to [this issue](https://github.com/huggingface/datasets/issues/7381). | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7564/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7564",
"html_url": "https://github.com/huggingface/datasets/pull/7564",
"diff_url": "https://github.com/huggingface/datasets/pull/7564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7564.patch",
"merged_at": "2025-05-19T12:15:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7563/comments | https://api.github.com/repos/huggingface/datasets/issues/7563/events | https://github.com/huggingface/datasets/pull/7563 | 3,046,351,253 | PR_kwDODunzps6VS0QL | 7,563 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7563). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T15:18:29 | 2025-05-07T15:21:05 | 2025-05-07T15:18:36 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7563/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7563",
"html_url": "https://github.com/huggingface/datasets/pull/7563",
"diff_url": "https://github.com/huggingface/datasets/pull/7563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7563.patch",
"merged_at": "2025-05-07T15:18:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7562/comments | https://api.github.com/repos/huggingface/datasets/issues/7562/events | https://github.com/huggingface/datasets/pull/7562 | 3,046,339,430 | PR_kwDODunzps6VSxmx | 7,562 | release: 3.6.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7562). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T15:15:13 | 2025-05-07T15:17:46 | 2025-05-07T15:15:21 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7562/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7562/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7562",
"html_url": "https://github.com/huggingface/datasets/pull/7562",
"diff_url": "https://github.com/huggingface/datasets/pull/7562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7562.patch",
"merged_at": "2025-05-07T15:15:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7561/comments | https://api.github.com/repos/huggingface/datasets/issues/7561/events | https://github.com/huggingface/datasets/issues/7561 | 3,046,302,653 | I_kwDODunzps61kuO9 | 7,561 | NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet | {
"login": "cyanic-selkie",
"id": 32219669,
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyanic-selkie",
"html_url": "https://github.com/cyanic-selkie",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions",
"organizations_url": "https://api.github.com/users/cyanic-selkie/orgs",
"repos_url": "https://api.github.com/users/cyanic-selkie/repos",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyanic-selkie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-07T15:05:42 | 2025-05-07T15:05:42 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR.
### Steps to reproduce the bug
1. Create an `IterableDataset`.
2. Call `.repeat(None)` on it.
3. Wrap it in a PyTorch `DataLoader`.
4. Iterate over it.
### Expected behavior
This should work normally.
### Environment info
datasets: 3.5.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7561/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7560/comments | https://api.github.com/repos/huggingface/datasets/issues/7560/events | https://github.com/huggingface/datasets/pull/7560 | 3,046,265,500 | PR_kwDODunzps6VShIc | 7,560 | fix decoding tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7560). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T14:56:14 | 2025-05-07T14:59:02 | 2025-05-07T14:56:20 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7560/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7560",
"html_url": "https://github.com/huggingface/datasets/pull/7560",
"diff_url": "https://github.com/huggingface/datasets/pull/7560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7560.patch",
"merged_at": "2025-05-07T14:56:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7559/comments | https://api.github.com/repos/huggingface/datasets/issues/7559/events | https://github.com/huggingface/datasets/pull/7559 | 3,046,177,078 | PR_kwDODunzps6VSNiX | 7,559 | fix aiohttp import | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T14:31:32 | 2025-05-07T14:34:34 | 2025-05-07T14:31:38 | MEMBER | null | null | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7559/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7559",
"html_url": "https://github.com/huggingface/datasets/pull/7559",
"diff_url": "https://github.com/huggingface/datasets/pull/7559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7559.patch",
"merged_at": "2025-05-07T14:31:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7558/comments | https://api.github.com/repos/huggingface/datasets/issues/7558/events | https://github.com/huggingface/datasets/pull/7558 | 3,046,066,628 | PR_kwDODunzps6VR1gN | 7,558 | fix regression | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T13:56:03 | 2025-05-07T13:58:52 | 2025-05-07T13:56:18 | MEMBER | null | null | null | reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition)
Wanted to apply this change to the original PR, but GitHub didn't let me apply it directly - merging this one instead | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7558/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7558",
"html_url": "https://github.com/huggingface/datasets/pull/7558",
"diff_url": "https://github.com/huggingface/datasets/pull/7558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7558.patch",
"merged_at": "2025-05-07T13:56:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7557/comments | https://api.github.com/repos/huggingface/datasets/issues/7557/events | https://github.com/huggingface/datasets/pull/7557 | 3,045,962,076 | PR_kwDODunzps6VRenr | 7,557 | check for empty _formatting | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting and for the fix! I tried to reorganize the condition in your PR but didn't get the right permission, so I ended up merging https://github.com/huggingface/datasets/pull/7558 directly so I can make a release today - I hope you don't mind"
] | 2025-05-07T13:22:37 | 2025-05-07T13:57:12 | 2025-05-07T13:57:12 | CONTRIBUTOR | null | null | null | Fixes a regression from #7553 breaking shuffling of iterable datasets
<img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7557/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7557",
"html_url": "https://github.com/huggingface/datasets/pull/7557",
"diff_url": "https://github.com/huggingface/datasets/pull/7557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7557.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7556/comments | https://api.github.com/repos/huggingface/datasets/issues/7556/events | https://github.com/huggingface/datasets/pull/7556 | 3,043,615,210 | PR_kwDODunzps6VJlTR | 7,556 | Add `--merge-pull-request` option for `convert_to_parquet` | {
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This is ready for a review, happy to make any changes. The main question for maintainers is how this should interact with #7555. If my suggestion there is accepted, this PR can be kept as is. If not, more changes are required to merge all the PR parts."
] | 2025-05-06T18:05:05 | 2025-05-07T17:41:16 | null | NONE | null | null | null | Closes #7527
Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7556/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7556",
"html_url": "https://github.com/huggingface/datasets/pull/7556",
"diff_url": "https://github.com/huggingface/datasets/pull/7556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7556.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7554/comments | https://api.github.com/repos/huggingface/datasets/issues/7554/events | https://github.com/huggingface/datasets/issues/7554 | 3,043,089,844 | I_kwDODunzps61Yd20 | 7,554 | datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script) | {
"login": "sei-eschwartz",
"id": 50171988,
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sei-eschwartz",
"html_url": "https://github.com/sei-eschwartz",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```",
"Closing in favor of #6832 "
] | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this.
### Steps to reproduce the bug
See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing)
Or:
```python
from datasets import load_dataset
dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True)
```
### Expected behavior
I expected only the `test_synth` split to be downloaded and processed.
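For context, a toy model of why every split gets built for script-based datasets (hypothetical names, not the actual `datasets` internals): preparation materializes all splits returned by the loading script, and the `split=` argument only selects from the prepared cache afterwards.

```python
# Toy illustration (not the real datasets code): download_and_prepare-style
# preparation generates every split; split selection happens only afterwards.
def prepare_all(split_generators):
    cache = {}
    for name, generator in split_generators.items():
        cache[name] = list(generator())  # every split is generated
    return cache

def load_split(split_generators, split):
    cache = prepare_all(split_generators)  # all splits are processed first
    return cache[split]                    # selection happens after the fact

gens = {"train": lambda: range(3), "test_synth": lambda: range(2)}
print(load_split(gens, "test_synth"))  # [0, 1]
```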
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.12
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0 | {
"login": "sei-eschwartz",
"id": 50171988,
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sei-eschwartz",
"html_url": "https://github.com/sei-eschwartz",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7554/timeline | null | duplicate | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7553/comments | https://api.github.com/repos/huggingface/datasets/issues/7553/events | https://github.com/huggingface/datasets/pull/7553 | 3,042,953,907 | PR_kwDODunzps6VHUNW | 7,553 | Rebatch arrow iterables before formatted iterable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7553). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq Our CI found an issue with this changeset causing a regression with shuffling iterable datasets \r\n<img width=\"884\" alt=\"Screenshot 2025-05-07 at 9 16 52 AM\" src=\"https://github.com/user-attachments/assets/bf7d9c7e-cc14-47da-8da6-d1a345992d7c\" />\r\n"
] | 2025-05-06T13:59:58 | 2025-05-07T13:17:41 | 2025-05-06T14:03:42 | MEMBER | null | null | null | close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7553/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7553/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7553",
"html_url": "https://github.com/huggingface/datasets/pull/7553",
"diff_url": "https://github.com/huggingface/datasets/pull/7553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7553.patch",
"merged_at": "2025-05-06T14:03:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7552/comments | https://api.github.com/repos/huggingface/datasets/issues/7552/events | https://github.com/huggingface/datasets/pull/7552 | 3,040,258,084 | PR_kwDODunzps6U-BUv | 7,552 | Enable xet in push to hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-05T17:02:09 | 2025-05-06T12:42:51 | 2025-05-06T12:42:48 | MEMBER | null | null | null | follows https://github.com/huggingface/huggingface_hub/pull/3035
related to https://github.com/huggingface/datasets/issues/7526 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7552/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7552",
"html_url": "https://github.com/huggingface/datasets/pull/7552",
"diff_url": "https://github.com/huggingface/datasets/pull/7552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7552.patch",
"merged_at": "2025-05-06T12:42:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7551/comments | https://api.github.com/repos/huggingface/datasets/issues/7551/events | https://github.com/huggingface/datasets/issues/7551 | 3,038,114,928 | I_kwDODunzps61FfRw | 7,551 | Issue with offline mode and partial dataset cached | {
"login": "nrv",
"id": 353245,
"node_id": "MDQ6VXNlcjM1MzI0NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/353245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrv",
"html_url": "https://github.com/nrv",
"followers_url": "https://api.github.com/users/nrv/followers",
"following_url": "https://api.github.com/users/nrv/following{/other_user}",
"gists_url": "https://api.github.com/users/nrv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrv/subscriptions",
"organizations_url": "https://api.github.com/users/nrv/orgs",
"repos_url": "https://api.github.com/users/nrv/repos",
"events_url": "https://api.github.com/users/nrv/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8cdcc21c613'\n\nthen, on the second call, \n```\nconfig_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n```\nthus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"",
"Same behavior with version 3.5.1",
"Same issue when loading `google/IndicGenBench_flores_in` with `datasets==2.21.0` and `datasets==3.6.0`.",
"\n\n\n> It seems the problem comes from builder.py / create_config_id()\n> \n> On the first call, when the cache is empty we have\n> \n> ```\n> config_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n> ```\n> \n> leading to config_id beeing 'default-2935e8cdcc21c613'\n> \n> then, on the second call,\n> \n> ```\n> config_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n> ```\n> \n> thus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"\n\n\nI have identified that the issue indeed lies in the `data_files` within `config_kwargs`. \nThe format and prefix of `data_files` differ depending on whether `HF_HUB_OFFLINE` is set, leading to different final `config_id` values. \nWhen I use other datasets without passing the `data_files` parameter, this issue does not occur.\n\nA possible solution might be to standardize the formatting of `data_files` within the `create_config_id` function."
] | 2025-05-04T16:49:37 | 2025-05-13T03:18:43 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Hi,
an issue related to #4760 here: when loading a single file from a dataset, it cannot be accessed in offline mode afterwards
### Steps to reproduce the bug
```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"
import datasets
dataset_name = "uonlp/CulturaX"
data_files = "fr/fr_part_00038.parquet"
ds = datasets.load_dataset(dataset_name, split='train', data_files=data_files)
print(f"Dataset loaded : {ds}")
```
Once the file has been cached, I rerun with HF_HUB_OFFLINE activated and get this error:
```
ValueError: Couldn't find cache for uonlp/CulturaX for config 'default-1e725f978350254e'
Available configs in the cache: ['default-2935e8cdcc21c613']
```
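An illustrative model of the cache miss (a hypothetical simplification; the real id comes from `create_config_id` hashing the builder kwargs, not this function): the same file expressed as a resolved `hf://` URL online versus a relative path offline produces different hashes, hence different config ids.

```python
# Hypothetical simplification of the cache-id mismatch: the config id is a
# hash of the builder kwargs, so the resolved URL (online) and the relative
# path (offline) forms of the same data_files value yield different ids.
import hashlib
import json

def toy_config_id(config_kwargs):
    payload = json.dumps(config_kwargs, sort_keys=True).encode()
    return "default-" + hashlib.sha256(payload).hexdigest()[:16]

online = {"data_files": {"train": [
    "hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet"
]}}
offline = {"data_files": "fr/fr_part_00038.parquet"}
print(toy_config_id(online) == toy_config_id(offline))  # False: cache lookup misses
```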
### Expected behavior
Should be able to access the previously cached files
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.4.0-215-generic-x86_64-with-glibc2.31
- Python version: 3.12.0
- `huggingface_hub` version: 0.27.0
- PyArrow version: 19.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7551/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7550/comments | https://api.github.com/repos/huggingface/datasets/issues/7550/events | https://github.com/huggingface/datasets/pull/7550 | 3,037,017,367 | PR_kwDODunzps6UzksN | 7,550 | disable aiohttp depend for python 3.13t free-threading compat | {
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-03T00:28:18 | 2025-05-03T00:28:24 | 2025-05-03T00:28:24 | NONE | null | null | null | null | {
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7550/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7550",
"html_url": "https://github.com/huggingface/datasets/pull/7550",
"diff_url": "https://github.com/huggingface/datasets/pull/7550.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7550.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7549/comments | https://api.github.com/repos/huggingface/datasets/issues/7549/events | https://github.com/huggingface/datasets/issues/7549 | 3,036,272,015 | I_kwDODunzps60-dWP | 7,549 | TypeError: Couldn't cast array of type string to null on webdataset format dataset | {
"login": "narugo1992",
"id": 117186571,
"node_id": "U_kgDOBvwgCw",
"avatar_url": "https://avatars.githubusercontent.com/u/117186571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/narugo1992",
"html_url": "https://github.com/narugo1992",
"followers_url": "https://api.github.com/users/narugo1992/followers",
"following_url": "https://api.github.com/users/narugo1992/following{/other_user}",
"gists_url": "https://api.github.com/users/narugo1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/narugo1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/narugo1992/subscriptions",
"organizations_url": "https://api.github.com/users/narugo1992/orgs",
"repos_url": "https://api.github.com/users/narugo1992/repos",
"events_url": "https://api.github.com/users/narugo1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/narugo1992/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n \"_type\": \"Image\"\n },\n \"json\": {\n \"id\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"width\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"height\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"rating\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"general_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"character_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n }\n }\n },\n \"builder_name\": \"webdataset\",\n \"config_name\": \"default\",\n \"version\": {\n \"version_str\": \"1.0.0\",\n \"description\": null,\n \"major\": 1,\n \"minor\": 0,\n \"patch\": 0\n }\n }\n}\n\n```\n\nwill close this issue if no further issues found"
] | 2025-05-02T15:18:07 | 2025-05-02T15:37:05 | null | NONE | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
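For context, this cast failure typically means a column's type was inferred as `null` (e.g. the field was empty or missing in the samples used for type inference), so later string values cannot be cast to it, which is also why declaring explicit features fixes it. A toy sketch of that failure mode (hypothetical simplification, not the actual `datasets`/PyArrow internals):

```python
# Toy illustration (hypothetical, NOT the real datasets/pyarrow code): if a
# field is always None in the examples used for type inference, its column
# type becomes "null", and later shards with real strings cannot be cast,
# matching the "Couldn't cast array of type string to null" error.
def infer_type(values):
    non_null = [v for v in values if v is not None]
    return type(non_null[0]).__name__ if non_null else "null"

def cast_to(values, target_type):
    if target_type == "null" and any(v is not None for v in values):
        raise TypeError(f"Couldn't cast array of type str to {target_type}")
    return values

inferred = infer_type([None, None])  # first examples: field always missing
print(inferred)  # null
try:
    cast_to(["general", "sensitive"], inferred)
except TypeError as err:
    print(err)  # Couldn't cast array of type str to null
```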
got
```
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 255, in pyarrow.lib.array
File "pyarrow/array.pxi", line 117, in pyarrow.lib._handle_arrow_array_protocol
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 258, in __arrow_array__
out = cast_array_to_feature(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2006, in cast_array_to_feature
arrays = [
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2007, in <listcomp>
_c(array.field(name) if name in array_fields else null_array, subfeature)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2066, in cast_array_to_feature
casted_array_values = _c(array.values, feature.feature)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2103, in cast_array_to_feature
return array_cast(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1949, in array_cast
raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
TypeError: Couldn't cast array of type string to null
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/load.py", line 2084, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1487, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
I'm on `datasets==3.5.1`. What's wrong?
Its inner JSON structure is:
```yaml
features:
- name: "image"
dtype: "image"
- name: "json.id"
dtype: "string"
- name: "json.width"
dtype: "int32"
- name: "json.height"
dtype: "int32"
- name: "json.rating"
sequence:
dtype: "string"
- name: "json.general_tags"
sequence:
dtype: "string"
- name: "json.character_tags"
sequence:
dtype: "string"
```
I'm 100% sure all the JSON files satisfy the format above.
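Since the traceback points at a cast failure during generation, one quick way to double-check that claim is to validate each record against the declared schema before handing anything to `datasets`. This is an illustrative stdlib sketch; `check_record` is a hypothetical helper, not part of the library, and the expected types follow the YAML above.

```python
# Expected Python type per field, following the schema declared above
# (note the YAML declares json.id as a string).
EXPECTED = {
    "id": str,
    "width": int,
    "height": int,
    "rating": list,
    "general_tags": list,
    "character_tags": list,
}

def check_record(record):
    """Return a list of human-readable problems for one JSON record."""
    problems = []
    for key, expected_type in EXPECTED.items():
        if key not in record:
            problems.append(f"missing field {key!r}")
        elif not isinstance(record[key], expected_type):
            problems.append(
                f"{key!r} is {type(record[key]).__name__}, "
                f"expected {expected_type.__name__}"
            )
    return problems

good = {"id": "123", "width": 640, "height": 480,
        "rating": ["general"], "general_tags": [], "character_tags": []}
bad = {"id": 123, "width": 640, "height": 480,
       "rating": None, "general_tags": [], "character_tags": []}
print(check_record(good))  # []
print(check_record(bad))   # flags 'id' (int) and 'rating' (NoneType)
```

Running this over every record of every shard would point at the exact file that breaks the inferred schema.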
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
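One possible explanation for the `Couldn't cast array of type string to null` line (my reading of the trace, not a confirmed diagnosis): with no explicit schema, the builder infers types from the first examples, and a field that is null or absent in all of them gets typed `null`, so a later example carrying a real string fails the cast. A toy stdlib sketch of that pitfall, where `infer_type` is a hypothetical illustration rather than the actual datasets/Arrow logic:

```python
def infer_type(values):
    """Toy inference: the first non-null value decides the type."""
    for v in values:
        if v is not None:
            return type(v).__name__
    return "null"  # every value was missing, so the field is typed null

first_examples = [None, None]         # field absent/empty early on
later_examples = ["tag_a", "tag_b"]   # real strings arrive later

print(infer_type(first_examples))     # null
print(infer_type(later_examples))     # str
# Pinning the schema up front (e.g. via dataset_infos.json) avoids this.
```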
### Expected behavior
The dataset should load successfully, with the JSON format above and WebP images.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0