rdiehlmartinez committed
Commit dc1a407 · 1 Parent(s): b8437b6

pico-decoder-large-1 trained to 125k steps

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +55 -0
  2. config.json +22 -0
  3. eval_results/step_0.json +1 -0
  4. eval_results/step_1000.json +1 -0
  5. eval_results/step_10000.json +1 -0
  6. eval_results/step_100000.json +1 -0
  7. eval_results/step_101000.json +1 -0
  8. eval_results/step_102000.json +1 -0
  9. eval_results/step_103000.json +1 -0
  10. eval_results/step_104000.json +1 -0
  11. eval_results/step_105000.json +1 -0
  12. eval_results/step_106000.json +1 -0
  13. eval_results/step_107000.json +1 -0
  14. eval_results/step_108000.json +1 -0
  15. eval_results/step_109000.json +1 -0
  16. eval_results/step_11000.json +1 -0
  17. eval_results/step_110000.json +1 -0
  18. eval_results/step_111000.json +1 -0
  19. eval_results/step_112000.json +1 -0
  20. eval_results/step_113000.json +1 -0
  21. eval_results/step_114000.json +1 -0
  22. eval_results/step_115000.json +1 -0
  23. eval_results/step_116000.json +1 -0
  24. eval_results/step_117000.json +1 -0
  25. eval_results/step_118000.json +1 -0
  26. eval_results/step_119000.json +1 -0
  27. eval_results/step_12000.json +1 -0
  28. eval_results/step_120000.json +1 -0
  29. eval_results/step_121000.json +1 -0
  30. eval_results/step_122000.json +1 -0
  31. eval_results/step_123000.json +1 -0
  32. eval_results/step_124000.json +1 -0
  33. eval_results/step_125000.json +1 -0
  34. eval_results/step_13000.json +1 -0
  35. eval_results/step_14000.json +1 -0
  36. eval_results/step_15000.json +1 -0
  37. eval_results/step_16000.json +1 -0
  38. eval_results/step_17000.json +1 -0
  39. eval_results/step_18000.json +1 -0
  40. eval_results/step_19000.json +1 -0
  41. eval_results/step_2000.json +1 -0
  42. eval_results/step_20000.json +1 -0
  43. eval_results/step_21000.json +1 -0
  44. eval_results/step_22000.json +1 -0
  45. eval_results/step_23000.json +1 -0
  46. eval_results/step_24000.json +1 -0
  47. eval_results/step_25000.json +1 -0
  48. eval_results/step_26000.json +1 -0
  49. eval_results/step_27000.json +1 -0
  50. eval_results/step_28000.json +1 -0
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ datasets:
+ - pico-lm/pretokenized-dolma
+ language:
+ - en
+ license: apache-2.0
+ metrics:
+ - pico-lm/perplexity
+ pipeline_tag: text-generation
+ ---
+
+ # Pico Decoder Large
+
+ **pico-decoder-large** is the largest model (570M parameters) in the current `pico-decoder` suite. It is a full-scale research model designed for in-depth interpretability studies of transformer learning. Trained with [`pico-train`](https://github.com/pico-lm) and fully compatible with [`pico-analyze`](https://github.com/pico-lm), it offers rich checkpointing and analytical insight into large-scale LM behavior.
+
+ > NOTE: The `pico-decoder-large-1` branch contains the full commit history for the training run.
+
+ ## 🔧 Model Details
+
+ | Field                 | Value                                  |
+ |-----------------------|----------------------------------------|
+ | **Architecture**      | Decoder-only transformer (LLaMA-style) |
+ | **Parameters**        | 570M                                   |
+ | **Layers**            | 12                                     |
+ | **Hidden Size**       | 1536                                   |
+ | **Feed Forward Size** | 6144                                   |
+ | **Attention Heads**   | 12                                     |
+ | **Key/Value Heads**   | 4                                      |
+
+ ## 📚 Training
+
+ - **Dataset**: [`pretokenized-dolma`](https://github.com/pico-lm)
+ - **Training steps**: 200,000
+ - **Batch size**: 1024
+ - **Sequence length**: 2048
+ - **Optimizer**: AdamW
+ - **Learning rate schedule**: Linear decay with warmup
+ - **Compute**: 16 A100-SXM4-80GB GPUs
+
+ ## 📈 Evaluation and Analysis
+
+ This model supports fine-grained analysis using [pico-analyze](https://github.com/pico-lm), which lets researchers track how learning unfolds over the course of training, even at very small scales.
+
+ We also evaluate the model's perplexity on the [`pretokenized-paloma-tinsy`](https://huggingface.co/datasets/pico-lm/pretokenized-paloma-tinsy) dataset.
+
+ ## 📄 Citation
+
+ ```bibtex
+ @software{pico2025,
+     author = {Diehl Martinez, Richard},
+     title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
+     year = {2025},
+     url = {https://github.com/pico-lm}
+ }
+ ```
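Because the `config.json` below routes the architecture to a custom `pico_decoder` module via `auto_map`, loading this checkpoint through `transformers` requires `trust_remote_code=True`. A minimal loading sketch; the repository id and branch name are assumptions inferred from this commit, so adjust them to the actual Hub repo:

```python
# Minimal loading sketch. The repo id and revision are assumptions
# inferred from this commit's branch name; adjust to the actual repo.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "pico-lm/pico-decoder-large",      # assumed repository id
    revision="pico-decoder-large-1",   # branch with the full training history
    trust_remote_code=True,            # needed for the custom auto_map in config.json
)
model.eval()
```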
config.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "activation_hidden_dim": 6144,
+   "architectures": [
+     "PicoDecoderHF"
+   ],
+   "attention_n_heads": 12,
+   "attention_n_kv_heads": 4,
+   "auto_map": {
+     "AutoConfig": "pico_decoder.PicoDecoderHFConfig",
+     "AutoModelForCausalLM": "pico_decoder.PicoDecoderHF"
+   },
+   "batch_size": 1024,
+   "d_model": 1536,
+   "max_seq_len": 2048,
+   "model_type": "pico_decoder",
+   "n_layers": 12,
+   "norm_eps": 1e-06,
+   "position_emb_theta": 10000.0,
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "vocab_size": 50304
+ }
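For reference, a sketch of how a perplexity number like those in the `eval_results/` files below could be computed. It assumes only that the model follows the standard `transformers` causal-LM interface (a `.logits` field on the output); the actual `pico-train` evaluation loop may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def paloma_perplexity(model, input_ids: torch.Tensor) -> float:
    """input_ids: (batch, seq_len) pretokenized ids, e.g. a batch drawn from
    pretokenized-paloma-tinsy with seq_len <= max_seq_len (2048)."""
    logits = model(input_ids=input_ids).logits
    # Next-token prediction: the logits at position t score token t+1.
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    shift_labels = input_ids[:, 1:].reshape(-1)
    loss = F.cross_entropy(shift_logits, shift_labels)
    # Perplexity is the exponential of the mean cross-entropy.
    return torch.exp(loss).item()
```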
eval_results/step_0.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 60407.55679170296}

eval_results/step_1000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 325.7754169842923}

eval_results/step_10000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 39.63131210396929}

eval_results/step_100000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.71192149534458}

eval_results/step_101000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.722617558984393}

eval_results/step_102000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.647181750673035}

eval_results/step_103000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.617256277363474}

eval_results/step_104000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.58370889288208}

eval_results/step_105000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.574076971609003}

eval_results/step_106000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.514877815196737}

eval_results/step_107000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.512063555019658}

eval_results/step_108000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.481759576348892}

eval_results/step_109000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.42309071344781}

eval_results/step_11000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 38.16542526977818}

eval_results/step_110000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.396521913133018}

eval_results/step_111000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.37710198400743}

eval_results/step_112000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.309339042906146}

eval_results/step_113000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.345229115552602}

eval_results/step_114000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.29986716762237}

eval_results/step_115000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.27356985670349}

eval_results/step_116000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.228843119917016}

eval_results/step_117000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.17763711808035}

eval_results/step_118000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.129703529786564}

eval_results/step_119000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.10688131553371}

eval_results/step_12000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 37.00609017996838}

eval_results/step_120000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.095857482255543}

eval_results/step_121000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.07633466438134}

eval_results/step_122000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 23.05234782687463}

eval_results/step_123000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 22.985029866720325}

eval_results/step_124000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 22.9857818419925}

eval_results/step_125000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 22.96052921383223}

eval_results/step_13000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 36.12258010915763}

eval_results/step_14000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 35.30092575084873}

eval_results/step_15000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 34.9831400311367}

eval_results/step_16000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 33.944966741887534}

eval_results/step_17000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 33.525249324692254}

eval_results/step_18000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 33.06738855614479}

eval_results/step_19000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 32.68154775822204}

eval_results/step_2000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 893.744832137902}

eval_results/step_20000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 32.365150609082875}

eval_results/step_21000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 31.893736724238778}

eval_results/step_22000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 31.542816529955182}

eval_results/step_23000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 31.126356617498896}

eval_results/step_24000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 30.837729862010438}

eval_results/step_25000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 30.55359834328761}

eval_results/step_26000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 30.223841104607132}

eval_results/step_27000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 30.00440268965133}

eval_results/step_28000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 29.728886223836227}