---
task_categories:
- text-generation
language:
- en
pretty_name: Creative Commons Common Crawl
---

# Creative Commons Common Crawl

## Description
This dataset contains text from 52 Common Crawl snapshots, roughly half of the snapshots available to date, spanning all years of Common Crawl's operation up to 2024.
We found a high level of duplication across this collection, suggesting that including additional snapshots would yield only a modest increase in total tokens.
From these snapshots, we extract HTML content using [FastWarc](https://arxiv.org/abs/2112.03103).
Then, using a regular expression adapted from [the C4Corpus project](https://aclanthology.org/L16-1146/), we identify pages that declare a Creative Commons license.
To ensure license accuracy, we manually verified the top 1000 domains by content volume, retaining only the 537 domains with confirmed licenses where the Creative Commons designation applied to all of the text content rather than to embedded media or a subset of the text on the domain.
As an additional check, we performed a second round of annotations with the assistance of OpenAI's o3 model. Specifically, we instructed the model to examine each web domain and identify the ones that were openly licensed. We then had a second team manually annotate the cases where the model did not approve of a domain but the original human auditor did. This resulted in **todo** domains being removed.

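As a rough sketch of the extraction and license-detection steps above, the snippet below iterates over WARC response records with FastWarc and flags pages containing a Creative Commons marker. The file path, decoding, and regular expression are illustrative assumptions, not the pipeline's exact code (which lives in the common-pile repository).

```python
import re
from fastwarc.warc import ArchiveIterator, WarcRecordType

# Illustrative stand-in for the C4Corpus-derived expression: matches links such as
# creativecommons.org/licenses/by-sa/4.0/ or creativecommons.org/publicdomain/zero/1.0/
CC_MARKER = re.compile(
    rb"creativecommons\.org/(?:licenses|publicdomain)/[a-z\-]+/\d\.\d",
    re.IGNORECASE,
)

def cc_candidates(warc_path: str):
    """Yield (url, html_bytes) for response records that contain a CC license marker."""
    with open(warc_path, "rb") as stream:  # hypothetical local WARC file
        for record in ArchiveIterator(stream, record_types=WarcRecordType.response):
            body = record.reader.read()
            if CC_MARKER.search(body):
                yield record.headers.get("WARC-Target-URI"), body
```
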
We extract the main content of these documents and remove boilerplate using [Resiliparse](https://github.com/chatnoir-eu/chatnoir-resiliparse).
We perform URL-level exact deduplication and use Bloom filters to remove near-duplicates with 80% n-gram overlap.

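The content-extraction and near-duplicate steps can be sketched as follows. The plain `set` stands in for a Bloom filter, and the 5-gram size is an assumption; the authoritative implementation is in the common-pile repository.

```python
from resiliparse.extract.html2text import extract_plain_text

def extract_main_text(html: str) -> str:
    # main_content=True drops navigation, sidebars, and other boilerplate.
    return extract_plain_text(html, main_content=True)

_seen_ngrams = set()  # stand-in for a Bloom filter over previously seen n-grams

def is_near_duplicate(text: str, n: int = 5, threshold: float = 0.8) -> bool:
    """Flag a document whose word n-grams mostly appeared in earlier documents."""
    words = text.split()
    grams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    if not grams:
        return False
    overlap = sum(g in _seen_ngrams for g in grams) / len(grams)
    _seen_ngrams.update(grams)
    return overlap >= threshold
```
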
We also employ rule-based filters matching [Dolma](https://arxiv.org/abs/2402.00159);
namely, we use [C4-derived heuristics](https://arxiv.org/abs/1910.10683) to filter pages containing JavaScript, Lorem Ipsum, and curly braces ({}).
We also apply all [Gopher rules](https://arxiv.org/abs/2112.11446) to remove low-quality pages.

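An illustrative subset of these rule-based filters is shown below; the rule selection and thresholds here follow the cited papers only approximately and should be treated as assumptions rather than the exact configuration used.

```python
def passes_c4_heuristics(text: str) -> bool:
    # C4-derived checks: drop pages with code or placeholder-text indicators.
    lowered = text.lower()
    return not any(marker in lowered for marker in ("javascript", "lorem ipsum", "{"))

def passes_gopher_subset(text: str) -> bool:
    # A few of the Gopher quality rules (the full pipeline applies all of them).
    words = text.split()
    if not 50 <= len(words) <= 100_000:        # document length bounds
        return False
    mean_word_length = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_word_length <= 10:        # mean word length bounds
        return False
    alpha_fraction = sum(any(c.isalpha() for c in w) for w in words) / len(words)
    return alpha_fraction >= 0.8               # most words contain at least one letter
```
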
Per-document license information is available in the `license` entry of the `metadata` field of each example.
Code for collecting, processing, and preparing this dataset is available in the [common-pile GitHub repo](https://github.com/r-three/common-pile).

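To peek at the per-document license metadata without downloading the full dataset, something like the following should work with the `datasets` library. The repository id below is a placeholder (use this dataset's id as shown on its Hugging Face page), and the `metadata` field may be stored either as a dict or as a JSON string.

```python
import json
from datasets import load_dataset

# Placeholder repo id; substitute the actual id of this dataset.
ds = load_dataset("common-pile/cccc_filtered", split="train", streaming=True)

for example in ds.take(5):
    metadata = example["metadata"]
    if isinstance(metadata, str):  # handle metadata stored as a JSON string
        metadata = json.loads(metadata)
    print(metadata.get("license"))
```
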
## Dataset Statistics

| Documents | UTF-8 GB | Tokens | Times seen during training |
|-----------|----------|--------|----------------------------|
| 6,852,137 | 58.1     | 19.4B  | 1x                         |

## License Issues
While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to assign an incorrect license to some documents (for further discussion of this limitation, please see [our paper](TODO link)). If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.
This dataset has been updated to remove instances of incorrect licensing.
If you require the exact version that Comma v0.1 was trained on for non-commercial research purposes, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.

## Other Versions
This is the "filtered" version of Creative Commons Common Crawl. If you are looking for the raw version, you can find it [here](https://huggingface.co/datasets/common-pile/cccc).

## Citation
If you use this dataset, please cite:
```bibtex
@article{kandpal2025common,
  title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
  author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben Allal and Elie Bakouch and John David Pressman and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R. Bartoldson and Bhavya Kailkhura and Tyler Murray},
  journal={arXiv preprint},
  year={2025}
}
```