Commit 9194531
Parent(s): 0833afa
pico-decoder-small-1 trained to 125k steps
This view is limited to 50 files because it contains too many changes.
- README.md +56 -0
- config.json +22 -0
- eval_results/step_0.json +1 -0
- eval_results/step_1000.json +1 -0
- eval_results/step_10000.json +1 -0
- eval_results/step_100000.json +1 -0
- eval_results/step_101000.json +1 -0
- eval_results/step_102000.json +1 -0
- eval_results/step_103000.json +1 -0
- eval_results/step_104000.json +1 -0
- eval_results/step_105000.json +1 -0
- eval_results/step_106000.json +1 -0
- eval_results/step_107000.json +1 -0
- eval_results/step_108000.json +1 -0
- eval_results/step_109000.json +1 -0
- eval_results/step_11000.json +1 -0
- eval_results/step_110000.json +1 -0
- eval_results/step_111000.json +1 -0
- eval_results/step_112000.json +1 -0
- eval_results/step_113000.json +1 -0
- eval_results/step_114000.json +1 -0
- eval_results/step_115000.json +1 -0
- eval_results/step_116000.json +1 -0
- eval_results/step_117000.json +1 -0
- eval_results/step_118000.json +1 -0
- eval_results/step_119000.json +1 -0
- eval_results/step_12000.json +1 -0
- eval_results/step_120000.json +1 -0
- eval_results/step_121000.json +1 -0
- eval_results/step_122000.json +1 -0
- eval_results/step_123000.json +1 -0
- eval_results/step_124000.json +1 -0
- eval_results/step_125000.json +1 -0
- eval_results/step_13000.json +1 -0
- eval_results/step_14000.json +1 -0
- eval_results/step_15000.json +1 -0
- eval_results/step_16000.json +1 -0
- eval_results/step_17000.json +1 -0
- eval_results/step_18000.json +1 -0
- eval_results/step_19000.json +1 -0
- eval_results/step_2000.json +1 -0
- eval_results/step_20000.json +1 -0
- eval_results/step_21000.json +1 -0
- eval_results/step_22000.json +1 -0
- eval_results/step_23000.json +1 -0
- eval_results/step_24000.json +1 -0
- eval_results/step_25000.json +1 -0
- eval_results/step_26000.json +1 -0
- eval_results/step_27000.json +1 -0
- eval_results/step_28000.json +1 -0
README.md
ADDED
@@ -0,0 +1,56 @@
+---
+datasets:
+- pico-lm/pretokenized-dolma
+language:
+- en
+license: apache-2.0
+metrics:
+- pico-lm/perplexity
+pipeline_tag: text-generation
+---
+
+# Pico Decoder Small
+
+**pico-decoder-small** is a 65M parameter model in the `pico-decoder` suite — a lightweight, LLaMA-style decoder-only transformer trained from scratch using [`pico-train`](https://github.com/pico-lm/pico-train). It is designed for transparent and reproducible research into the learning dynamics of language models, and is fully compatible with the [`pico-analyze`](https://github.com/pico-lm/pico-analyze) toolkit for detailed interpretability analysis.
+
+> NOTE: The `pico-decoder-small-1` branch contains the full commit history for the training run.
+
+## 🔧 Model Details
+
+| Field                 | Value                                  |
+|-----------------------|----------------------------------------|
+| **Architecture**      | Decoder-only transformer (LLaMA-style) |
+| **Parameters**        | 65M                                    |
+| **Layers**            | 12                                     |
+| **Hidden Size**       | 384                                    |
+| **Feed Forward Size** | 1536                                   |
+| **Attention Heads**   | 12                                     |
+| **Key/Value Heads**   | 4                                      |
+
+## 📚 Training
+
+- **Dataset**: [`pretokenized-dolma`](https://huggingface.co/datasets/pico-lm/pretokenized-dolma), English-only
+- **Training steps**: 200,000
+- **Batch size**: 1024
+- **Sequence length**: 2048
+- **Optimizer**: AdamW
+- **Learning rate schedule**: Linear decay with warmup
+- **Compute**: 16 A100-SXM4-80GB GPUs
+
+## 📈 Evaluation and Analysis
+
+This model supports fine-grained analysis using [`pico-analyze`](https://github.com/pico-lm/pico-analyze), which enables researchers to understand how learning unfolds over training, even at modest scales.
+
+We also evaluate the model's perplexity on the [`pico-paloma-tinsy`](https://huggingface.co/datasets/pico-lm/pretokenized-paloma-tinsy) dataset.
+
+## 📄 Citation
+
+If you use `pico-decoder-small` or any other `pico-decoder` model in your research, please cite:
+
+```bibtex
+@software{pico2025,
+  author = {Diehl Martinez, Richard},
+  title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
+  year = {2025},
+  url = {https://github.com/pico-lm}
+}
+```
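For reference, the model card above pairs with the `config.json` below: loading goes through the standard `transformers` remote-code path, because `auto_map` routes to custom classes. A minimal sketch, assuming the hub path `pico-lm/pico-decoder-small`, a bundled tokenizer, and that the custom class supports the standard generation API (none of which this commit states directly):

```python
# Minimal loading sketch. The repo id below is an assumption based on the
# model card, not something stated in this commit; adjust as needed.
# trust_remote_code=True is required because config.json maps loading to
# custom classes ("pico_decoder.PicoDecoderHF") via "auto_map".
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pico-lm/pico-decoder-small"  # assumed repository id
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    revision="pico-decoder-small-1",  # branch with the full training history
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Language models learn", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```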
config.json
ADDED
@@ -0,0 +1,22 @@
+{
+  "activation_hidden_dim": 1536,
+  "architectures": [
+    "PicoDecoderHF"
+  ],
+  "attention_n_heads": 12,
+  "attention_n_kv_heads": 4,
+  "auto_map": {
+    "AutoConfig": "pico_decoder.PicoDecoderHFConfig",
+    "AutoModelForCausalLM": "pico_decoder.PicoDecoderHF"
+  },
+  "batch_size": 1024,
+  "d_model": 384,
+  "max_seq_len": 2048,
+  "model_type": "pico_decoder",
+  "n_layers": 12,
+  "norm_eps": 1e-06,
+  "position_emb_theta": 10000.0,
+  "torch_dtype": "float32",
+  "transformers_version": "4.48.3",
+  "vocab_size": 50304
+}
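As a sanity check, the 65M figure in the model card can be roughly reproduced from this config. The sketch below assumes a LLaMA-style block (grouped-query attention plus a three-matrix SwiGLU MLP) and untied input/output embeddings; both are assumptions about `pico_decoder` internals rather than facts stated in the file:

```python
# Back-of-the-envelope parameter count from config.json. Assumes a
# LLaMA-style block (GQA attention + three-matrix SwiGLU MLP) and an
# untied LM head; these are assumptions, not facts from the config.
import json

with open("config.json") as f:
    cfg = json.load(f)

d      = cfg["d_model"]                # 384
ff     = cfg["activation_hidden_dim"]  # 1536
layers = cfg["n_layers"]               # 12
vocab  = cfg["vocab_size"]             # 50304
kv_dim = d // cfg["attention_n_heads"] * cfg["attention_n_kv_heads"]  # 128

attn  = 2 * d * d + 2 * d * kv_dim     # Q and O are d x d; K and V are d x kv_dim
mlp   = 3 * d * ff                     # SwiGLU: gate, up, and down projections
embed = 2 * vocab * d                  # input embedding plus untied LM head

total = layers * (attn + mlp) + embed
print(f"~{total / 1e6:.1f}M parameters")  # ~64.6M, consistent with the 65M card figure
```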
eval_results/step_0.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 59235.520549324916}

eval_results/step_1000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 792.1833685393118}

eval_results/step_10000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 65.94262431332459}

eval_results/step_100000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 43.36404886320493}

eval_results/step_101000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 43.263255441479565}

eval_results/step_102000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 43.253649176537785}

eval_results/step_103000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 43.22560267041369}

eval_results/step_104000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 43.13925088066673}

eval_results/step_105000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 43.181178607674845}

eval_results/step_106000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 43.06913597359475}

eval_results/step_107000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.975087318437026}

eval_results/step_108000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.94199747208519}

eval_results/step_109000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.868824278186835}

eval_results/step_11000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 64.34633452908918}

eval_results/step_110000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.827151819471695}

eval_results/step_111000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.727992517939846}

eval_results/step_112000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.74776250020137}

eval_results/step_113000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.69900836387993}

eval_results/step_114000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.633967242423665}

eval_results/step_115000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.61576641868631}

eval_results/step_116000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.569933022854634}

eval_results/step_117000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.53510425497846}

eval_results/step_118000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.54298578654432}

eval_results/step_119000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.397330363120766}

eval_results/step_12000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 62.59192442304166}

eval_results/step_120000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.37647307832897}

eval_results/step_121000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.39845054972047}

eval_results/step_122000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.372684175710646}

eval_results/step_123000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.29145676591255}

eval_results/step_124000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.25910740986934}

eval_results/step_125000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 42.243373122198655}

eval_results/step_13000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 61.176598006341514}

eval_results/step_14000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 60.127189135634524}

eval_results/step_15000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 59.05735223143773}

eval_results/step_16000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 58.34294593874171}

eval_results/step_17000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 57.50681012357984}

eval_results/step_18000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 56.75063220922955}

eval_results/step_19000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 56.22409683063055}

eval_results/step_2000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 430.54463134327}

eval_results/step_20000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 55.70834681912997}

eval_results/step_21000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 55.09185880999947}

eval_results/step_22000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 54.68077409824012}

eval_results/step_23000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 54.099016925931394}

eval_results/step_24000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 53.69616283332014}

eval_results/step_25000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 53.325224398403634}

eval_results/step_26000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 52.88207766685752}

eval_results/step_27000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 52.41606502532959}

eval_results/step_28000.json
ADDED
@@ -0,0 +1 @@
+{"paloma": 52.10107550388429}
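Each `eval_results/step_*.json` file holds a single Paloma perplexity reading, so the trajectory of the run (from roughly 59,000 at step 0 down to about 42.2 at step 125,000) can be recovered with a short script. A minimal sketch, assuming a local checkout that contains the `eval_results/` directory:

```python
# Collect the per-step Paloma perplexities and print them in step order.
import glob
import json
import re

points = []
for path in glob.glob("eval_results/step_*.json"):
    step = int(re.search(r"step_(\d+)", path).group(1))  # step number from filename
    with open(path) as f:
        points.append((step, json.load(f)["paloma"]))

for step, ppl in sorted(points):
    print(f"step {step:>6}: paloma perplexity {ppl:10.2f}")
```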