# Comma v0.1 dataset
This repository contains the dataset used to train Comma v0.1-1T and Comma v0.1-2T. It is a slightly modified and consolidated version of the Common Pile v0.1 "filtered" data. If you are looking for the raw Common Pile v0.1 data, please see this collection. You can learn more about Common Pile in our paper.
## Mixing rates and token counts
The Comma v0.1 models were trained in two stages: a "main" stage and a "cooldown" stage. During each stage, we heuristically set mixing rates to up- or down-weight different sources. The two tables below give the per-source token count, repeat rate, and effective token count (after up/down-weighting) for the main and cooldown stages of the Comma v0.1-1T training run. For the Comma v0.1-2T training run, all sources are repeated twice as many times in both stages. Token counts were computed with the Comma v0.1 tokenizer; a different tokenizer may yield significantly different counts.
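
As a concrete illustration (a minimal sketch, not code from this repository), each "effective tokens" entry in the tables below is simply the source's token count multiplied by its repeat rate:

```python
# Minimal sketch: effective tokens = tokens * repeats.
# Values are taken from the main-stage table below; because the table's
# token counts are themselves rounded, recomputed products can differ
# from the table in the last digit.
main_stage = {
    # source: (tokens in billions, repeat rate)
    "arxiv_abstracts": (0.57, 6),
    "biodiversity_heritage_library": (9.8, 0.25),
    "caselaw_access_project": (19.7, 1),
}

for source, (tokens_b, repeats) in main_stage.items():
    print(f"{source}: {tokens_b} B x {repeats} = {tokens_b * repeats:.2f} B effective")
```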
### Main stage

Source | Tokens (B) | Repeats | Effective tokens (B) |
---|---|---|---|
arxiv_abstracts | 0.57 | 6 | 3.4 |
arxiv_papers | 6.0 | 6 | 35.8 |
biodiversity_heritage_library | 9.8 | 0.25 | 2.5 |
caselaw_access_project | 19.7 | 1 | 19.7 |
cccc | 15.2 | 6 | 91.4 |
data_provenance_initiative | 0.92 | 6 | 5.5 |
doab | 3.0 | 6 | 18.2 |
foodista | 0.025 | 6 | 0.15 |
github_archive | 11.0 | 6 | 66.1 |
library_of_congress | 9.5 | 0.25 | 2.4 |
libretexts | 0.093 | 6 | 0.56 |
news | 0.064 | 6 | 0.38 |
oercommons | 0.012 | 6 | 0.07 |
peS2o | 43.3 | 6 | 260.0 |
pre_1929_books | 12.4 | 1 | 12.4 |
pressbooks | 0.14 | 6 | 0.86 |
project_gutenberg | 5.7 | 1 | 5.7 |
public_domain_review | 0.0017 | 6 | 0.010 |
pubmed | 36.6 | 1 | 36.6 |
python_enhancement_proposals | 0.0027 | 6 | 0.016 |
regulations | 1.4 | 6 | 8.2 |
stackexchange | 23.9 | 6 | 143.2 |
stackv2_edu | 67.8 | 2 | 135.5 |
stackv2_html | 1.2 | 2 | 2.5 |
ubuntu_irc | 1.9 | 6 | 11.1 |
uk_hansard | 2.3 | 6 | 14.0 |
usgpo | 8.8 | 0.25 | 2.2 |
uspto | 157.4 | 0.25 | 39.4 |
wikimedia | 15.8 | 6 | 94.7 |
wikiteam | 4.3 | 4 | 17.2 |
youtube | 4.7 | 1 | 4.7 |
Total | 463.6 | | 1034.4 |
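
If you want to reproduce this weighting when assembling your own mixture, one approach (a sketch, under the assumption that per-source sampling probabilities are proportional to effective token counts) is to normalize the effective counts:

```python
# Sketch: derive per-source sampling probabilities by normalizing effective
# token counts. Shown here for a few main-stage rows; extend with the full
# table above to reproduce the complete mixture.
effective_b = {
    "peS2o": 260.0,
    "stackexchange": 143.2,
    "stackv2_edu": 135.5,
    "wikimedia": 94.7,
    "cccc": 91.4,
}

total = sum(effective_b.values())
for source, eff in sorted(effective_b.items(), key=lambda kv: -kv[1]):
    print(f"{source}: {eff / total:.1%} of this (partial) mixture")
```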
### Cooldown stage

Source | Tokens (B) | Repeats | Effective tokens (B) |
---|---|---|---|
arxiv_papers | 6.0 | 0.5 | 3.0 |
cccc | 15.2 | 0.3 | 4.6 |
data_provenance_initiative | 0.92 | 2 | 1.8 |
doab | 3.0 | 2 | 6.1 |
foodista | 0.025 | 2 | 0.05 |
libretexts | 0.093 | 2 | 0.19 |
news | 0.064 | 2 | 0.13 |
oercommons | 0.012 | 2 | 0.02 |
peS2o | 43.3 | 0.1 | 4.3 |
pressbooks | 0.14 | 2 | 0.29 |
public_domain_review | 0.0017 | 2 | 0.003 |
python_enhancement_proposals | 0.0027 | 2 | 0.005 |
stackexchange | 23.9 | 0.25 | 6.0 |
stackv2_edu | 67.8 | 0.1 | 6.8 |
wikimedia | 15.8 | 0.4 | 6.3 |
Total | 176.2 | | 39.5 |
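
For completeness, here is a hypothetical way to stream a few examples with the Hugging Face `datasets` library. The repository ID below is an assumption for illustration; substitute the actual ID of this dataset repository.

```python
# Hypothetical usage: stream a few examples without downloading everything.
# The repository ID is an assumption; replace it with this repository's
# actual identifier.
from itertools import islice

from datasets import load_dataset

ds = load_dataset(
    "common-pile/comma_v0.1_training_dataset",  # assumed repo ID
    split="train",
    streaming=True,
)
for example in islice(ds, 3):
    print(example)
```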