Subset of 12,500 Wikipedia files extracted from the Wiki727k Text Segmentation dataset.

Distribution of Wikipedia Content is managed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License.

Text Segmentation is the task of splitting a text into semantically coherent paragraphs.

Each file contains a number of paragraphs, divided by separator lines of the form '========,\<number\>,\<title\>'.
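
For illustration, here is a minimal sketch of how one file could be split into segments. The separator format is the one given above; the function name and the handling of any text before the first separator are my own assumptions:

```python
import re
from pathlib import Path

# Separator lines look like "========,1,Introduction." (format described above).
SEPARATOR = re.compile(r"^========,(\d+),(.*)$")

def read_segments(path):
    """Split one dataset file into (title, text) pairs. Sketch only."""
    segments = []
    title, lines = None, []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        match = SEPARATOR.match(line)
        if match:
            # Close the previous segment before starting a new one.
            if lines:
                segments.append((title, "\n".join(lines).strip()))
            title, lines = match.group(2), []
        else:
            lines.append(line)
    if lines:
        segments.append((title, "\n".join(lines).strip()))
    return segments
```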

I suggest using this dataset by cloning it with `git clone https://huggingface.co/datasets/mathieuschanz/Wiki12kTextSegmentation`, because I encountered problems when downloading the dataset with the Hugging Face library.
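
After cloning, the files can be read straight from the local checkout. A short usage sketch, reusing `read_segments` from above and assuming the clone sits next to your script (the exact directory layout inside the repository is an assumption):

```python
from pathlib import Path

# Location of the local clone; adjust to wherever you ran `git clone`.
dataset_dir = Path("Wiki12kTextSegmentation")

# Walk the clone and parse every regular file with `read_segments` above,
# skipping repository metadata such as README.md and .git internals.
for file in sorted(dataset_dir.rglob("*")):
    if file.is_file() and ".git" not in file.parts and file.suffix != ".md":
        segments = read_segments(file)
        print(file.name, len(segments))
```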

I also made slight adjustments to Koomri's loading scripts for my use cases. These may be published here soon.

For the whole dataset, the Choi dataset, or the loading script, I refer you to my source: [Koomri-Text-Segmentation](https://github.com/koomri/text-segmentation)

Please take into account that I have no connection to Koomri. If you encounter problems with their script, you need to contact them.