---
language:
  - en
pretty_name: Wiki12k
size_categories:
  - 10K<n<100K
license: cc-by-sa-4.0
---

Wiki12k is a subset of 12,500 Wikipedia files extracted from the Wiki727k text segmentation dataset for my master's thesis. Distribution of Wikipedia content is governed by the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Text segmentation is the task of splitting a text into semantically coherent paragraphs. Each file contains a number of paragraphs, separated by lines of the form `========,<number>,<title>`; a rough parsing sketch follows below.

I suggest using this dataset by cloning it with `git clone https://huggingface.co/datasets/mathieuschanz/Wiki12kTextSegmentation`, because I encountered problems when downloading it with the Hugging Face library. I also made slight adjustments to Koomri's loading scripts for my use cases; these may be published soon.
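As a rough illustration, one file can be split into its paragraphs with a few lines of Python. This is only a minimal sketch based on the separator format described above, not part of the dataset or of Koomri's scripts: the directory name, the exact separator pattern, and the file layout inside the clone are assumptions.

```python
import re
from pathlib import Path

# Hypothetical path: the directory created by `git clone` of this dataset.
DATASET_DIR = Path("Wiki12kTextSegmentation")

# Assumed separator format, e.g. a line such as "========,3,Some Section Title."
SEPARATOR = re.compile(r"^========,(\d+),(.*)$")

def read_segments(path):
    """Split one Wiki12k file into a list of (title, text) segments."""
    segments = []
    title, lines = None, []
    for line in path.read_text(encoding="utf-8").splitlines():
        match = SEPARATOR.match(line)
        if match:
            # A new separator closes the previous segment, if any.
            if lines:
                segments.append((title, "\n".join(lines).strip()))
            title, lines = match.group(2), []
        else:
            lines.append(line)
    if lines:
        segments.append((title, "\n".join(lines).strip()))
    return segments

# Example: print the section titles of the first file found in the clone,
# skipping anything inside the .git directory.
first_file = next(
    p for p in sorted(DATASET_DIR.rglob("*"))
    if p.is_file() and ".git" not in p.parts
)
for title, text in read_segments(first_file):
    print(title, "-", len(text), "characters")
```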

For the whole dataset, the Choi dataset, or the loading script, I refer you to my source: Koomri-Text-Segmentation. Please note that I have no connection to Koomri; if you encounter problems with their script, please contact them directly.