---
language:
- en
pretty_name: Wiki12k
size_categories:
- 10K<n<100K
---

I suggest using this dataset by cloning it with `git clone https://huggingface.co/datasets/mathieuschanz/Wiki12kTextSegmentation`, because I encountered problems when downloading it through the Hugging Face library. I also made slight adjustments to Koomri's loading scripts for my use cases; these may be published soon.

For the whole dataset, the Choi dataset, or the loading script, I refer you to my source: [Koomri-Text-Segmentation](https://github.com/koomri/text-segmentation)

Please take into account that I have no connection to Koomri. If you encounter problems with their script, you need to contact them.
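If you prefer not to use Koomri's loading scripts, a minimal sketch for reading one document after cloning is below. It assumes the files follow the Wiki-727K convention of separator lines such as `========,2,History.` between sections; the function name `read_segments` is my own, and you may need to adjust the pattern if Wiki12k deviates from that convention.

```python
import re
from pathlib import Path

# Assumed separator format from Koomri's Wiki-727K files, e.g.: ========,2,History.
SEGMENT_RE = re.compile(r"^========,\d+,.*?\.$")

def read_segments(path):
    """Split one document file into a list of section texts.

    Sections are delimited by separator lines matching SEGMENT_RE;
    the separator lines themselves are dropped.
    """
    segments, current = [], []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if SEGMENT_RE.match(line):
            # Close the previous section, if any, and start a new one.
            if current:
                segments.append("\n".join(current))
            current = []
        else:
            current.append(line)
    if current:
        segments.append("\n".join(current))
    return segments
```

This keeps each section as a single string, which is convenient as input to a sentence splitter or a segmentation model.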