---
base_model: google/gemma-3-4b-it
license: gemma
pipeline_tag: text-generation
library_name: peft
language:
- en
datasets:
- BeaverAI/REDACTED1
- BeaverAI/REDACTED2
- BeaverAI/REDACTED3
- BeaverAI/REDACTED4
- PJMixers-Dev/Lit-axo-Shuffled
- PJMixers-Dev/Mielikki_Erebus-87k-axo
- PJMixers/RyokoAI_Honeyfeed3600-Cleanish
- PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo
- Nelathan/synthetic-sugar-quill
- PJMixers-Dev/winglian_visual-novels-json-axo
- PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned
- PJMixers-Dev/Subtitles
- PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo
- PJMixers-Dev/Fundus-105K-Formatted
- PJMixers-Dev/Fundus-AP-News-Formatted
- PJMixers/AP-News-2024
- PJMixers-Dev/goodwiki-2024-12-04-axo
- epfl-llm/guidelines
- PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed
- allura-org/gryphe-sonnet-3.5-charcards-names-added
- anthracite-org/c2_logs_32k_llama3_qwen2_v1.3
- PJMixers-Dev/MinervaAI_Aesir-Preview-Anon
- PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT
- PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT
- grimulkan/aicg-logs-augmented
- grimulkan/PIPPA-augmented-dedup
- PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted
- PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
- Gryphe/ChatGPT-4o-Writing-Prompts
- Gryphe/Opus-WritingPrompts
- anthracite-org/nopm_claude_writing_fixed
- PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT
- allura-org/fujin-instruct-v2
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
---
# Gemma-3-Earthen-v0.2-4B-QLoRA

[`google/gemma-3-4b-it`](https://huggingface.co/google/gemma-3-4b-it) was trained at 8K context with a micro-batch size of 4 and 4 gradient-accumulation steps, so each optimizer step covered 131,072 tokens (including any padding tokens). It was trained for 90 steps, adding up to 11,796,480 unique tokens seen.
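
The token count is easy to sanity-check from the config values below (a back-of-the-envelope calculation, not taken from the training logs):

```python
sequence_len = 8192       # training context length
micro_batch_size = 4      # sequences per forward pass
grad_accum_steps = 4      # forward passes per optimizer step
total_steps = 90          # optimizer steps run

tokens_per_step = sequence_len * micro_batch_size * grad_accum_steps
print(tokens_per_step)                # 131072
print(tokens_per_step * total_steps)  # 11796480
```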

This is a small test run. A larger version is planned.

## Quants

None yet.

## Prompt Format

This model uses the Gemma-3 Instruct format, extended with system turn support.
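
A minimal sketch of a prompt, assuming the system turn reuses the standard `<start_of_turn>`/`<end_of_turn>` tags (the placeholders are illustrative):

```
<bos><start_of_turn>system
{system prompt}<end_of_turn>
<start_of_turn>user
{user message}<end_of_turn>
<start_of_turn>model
{model response}<end_of_turn>
```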

## Training Details

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

```yaml
# Requirements before running
# - Get latest commit of axolotl (currently c0a0c75)
# - Download these to axolotl/src/axolotl/prompt_strategies
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/formatter_regex.py
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/customcompletion-regex.py
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/customgemma3-regex.py
# - pip install ftfy
# - pip install git+https://github.com/xzuyn/CAME.git@sr-grams-cautious-8bit

# Weights and Biases logging config
wandb_project: Gemma-3-4B
wandb_entity:
wandb_watch:
wandb_name: Gemma-3-Earthen-v0.2-4B-QLoRA-run1
wandb_log_model:

# Model checkpointing config
output_dir: ./Outputs/Gemma-3-Earthen-v0.2-4B-QLoRA-run1
save_steps: 10
save_safetensors: true
save_total_limit: 2
save_only_model: true

# Model architecture config
base_model: google/gemma-3-4b-it
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# Mixed precision training config
bf16: true
fp16: false
tf32: false

# Model loading config
load_in_8bit: false
load_in_4bit: true
strict: false

# Sequence config
sequence_len: 8192
min_sample_len: 256
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
train_on_inputs: false
group_by_length: false

# LoRA adapter config
adapter: qlora
lora_model_dir:
lora_r: 256
lora_alpha: 256
lora_dropout: 0.125
lora_target_modules: 'language_model.model.layers.[\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj'
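# The regex above matches every attention and MLP projection
# (q/k/v/o and up/down/gate) in the language-model layers, so the
# vision tower and embedding tables receive no LoRA adapters.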
embeddings_skip_upcast: true

# Dataset config
datasets:
  # Completion
  # Story-like Data
  - path: BeaverAI/REDACTED1
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers-Dev/Lit-axo-Shuffled
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers-Dev/Mielikki_Erebus-87k-axo
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers/RyokoAI_Honeyfeed3600-Cleanish
    split: train[:1000]
    type: customcompletion-regex
  - path: BeaverAI/REDACTED2
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo
    split: train[:1000]
    type: customcompletion-regex
  - path: Nelathan/synthetic-sugar-quill
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers-Dev/winglian_visual-novels-json-axo
    split: train[:1000]
    type: customcompletion-regex
  - path: BeaverAI/REDACTED3
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned
    split: train[:1000]
    type: customcompletion-regex
  # Subtitle Data
  - path: PJMixers-Dev/Subtitles
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo
    split: train[:1000]
    type: customcompletion-regex
  # News Data
  - path: PJMixers-Dev/Fundus-105K-Formatted
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers-Dev/Fundus-AP-News-Formatted
    split: train[:1000]
    type: customcompletion-regex
  - path: PJMixers/AP-News-2024
    split: train[:1000]
    type: customcompletion-regex
  # Misc Data
  - path: PJMixers-Dev/goodwiki-2024-12-04-axo
    split: train[:1000]
    type: customcompletion-regex
  - path: epfl-llm/guidelines
    split: train[:1000]
    field: clean_text
    type: customcompletion-regex
  # Gemma-3 Instruct
  # RP Data
  - path: PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed
    type: customgemma3-regex
  - path: allura-org/gryphe-sonnet-3.5-charcards-names-added
    type: customgemma3-regex
  - path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.3
    type: customgemma3-regex
  - path: BeaverAI/REDACTED4
    type: customgemma3-regex
  - path: PJMixers-Dev/MinervaAI_Aesir-Preview-Anon
    type: customgemma3-regex
  - path: PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled
    type: customgemma3-regex
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: customgemma3-regex
  - path: PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT
    type: customgemma3-regex
  - path: PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT
    type: customgemma3-regex
  - path: grimulkan/aicg-logs-augmented
    type: customgemma3-regex
  - path: grimulkan/PIPPA-augmented-dedup
    type: customgemma3-regex
  - path: PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted
    type: customgemma3-regex
  # InstStory Data
  - path: PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
    type: customgemma3-regex
  - path: Gryphe/ChatGPT-4o-Writing-Prompts
    type: customgemma3-regex
  - path: Gryphe/Opus-WritingPrompts
    type: customgemma3-regex
  - path: anthracite-org/nopm_claude_writing_fixed
    type: customgemma3-regex
  - path: PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT
    type: customgemma3-regex
  - path: allura-org/fujin-instruct-v2
    type: customgemma3-regex
  # Adventure Data
  - path: PocketDoc/Dans-Prosemaxx-Adventure
    type: customgemma3-regex
  - path: PocketDoc/Dans-Failuremaxx-Adventure-3
    type: customgemma3-regex
test_datasets:
val_set_size: 256
eval_strategy: steps
eval_steps: 10
dataset_prepared_path: ./00-Tokenized-Datasets/Gemma-3-Earthen-v0.2-4B-LoRA-seed42
shuffle_merged_datasets: true
dataset_processes:

# Training hyperparameters
num_epochs: 1
gradient_accumulation_steps: 4
micro_batch_size: 4
eval_batch_size: 4
warmup_steps: 0
optimizer: came_pytorch
optim_args:
  enable_stochastic_rounding: true
  enable_cautious: true
  enable_8bit: true
lr_scheduler: rex
learning_rate: 2.5e-7
cosine_min_lr_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 0.5
logging_steps: 1

# Model optimization
gradient_checkpointing: offload
sdp_attention: true
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false
liger_fused_linear_cross_entropy: false
lora_mlp_kernel: false
lora_qkv_kernel: false
lora_o_kernel: false

# DeepSpeed
deepspeed:

# Garbage Collection
gc_steps:

# Debug config
debug: true
seed: 42

# Token config
special_tokens:
  bos_token: "<bos>"
  eos_token: "<eos>"
  pad_token: "<pad>"
tokens:
```
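
Since this repository holds a PEFT adapter rather than merged weights, inference would look roughly like the sketch below. The adapter id is a placeholder for this repo's actual path, and the calls are standard `transformers`/`peft` usage rather than a tested recipe:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "google/gemma-3-4b-it"
adapter_id = "xzuyn/Gemma-3-Earthen-v0.2-4B-QLoRA"  # placeholder: substitute this repo's path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="bfloat16", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the QLoRA adapter

# Build a Gemma-3 formatted prompt and generate a short continuation
messages = [{"role": "user", "content": "Write the opening line of a story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```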

## Citations

<details><summary>Show Citations</summary>

```bib
@misc{wolf2020huggingfacestransformersstateoftheartnatural,
  title={HuggingFace's Transformers: State-of-the-art Natural Language Processing},
  author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush},
  year={2020},
  eprint={1910.03771},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/1910.03771},
}
@misc{gemmateam2025gemma3technicalreport,
  title={Gemma 3 Technical Report},
  author={Gemma Team and Aishwarya Kamath and Johan Ferret and Shreya Pathak and Nino Vieillard and Ramona Merhej and Sarah Perrin and Tatiana Matejovicova and Alexandre Ramé and Morgane Rivière and Louis Rouillard and Thomas Mesnard and Geoffrey Cideron and Jean-bastien Grill and Sabela Ramos and Edouard Yvinec and Michelle Casbon and Etienne Pot and Ivo Penchev and Gaël Liu and Francesco Visin and Kathleen Kenealy and Lucas Beyer and Xiaohai Zhai and Anton Tsitsulin and Robert Busa-Fekete and Alex Feng and Noveen Sachdeva and Benjamin Coleman and Yi Gao and Basil Mustafa and Iain Barr and Emilio Parisotto and David Tian and Matan Eyal and Colin Cherry and Jan-Thorsten Peter and Danila Sinopalnikov and Surya Bhupatiraju and Rishabh Agarwal and Mehran Kazemi and Dan Malkin and Ravin Kumar and David Vilar and Idan Brusilovsky and Jiaming Luo and Andreas Steiner and Abe Friesen and Abhanshu Sharma and Abheesht Sharma and Adi Mayrav Gilady and Adrian Goedeckemeyer and Alaa Saade and Alex Feng and Alexander Kolesnikov and Alexei Bendebury and Alvin Abdagic and Amit Vadi and András György and André Susano Pinto and Anil Das and Ankur Bapna and Antoine Miech and Antoine Yang and Antonia Paterson and Ashish Shenoy and Ayan Chakrabarti and Bilal Piot and Bo Wu and Bobak Shahriari and Bryce Petrini and Charlie Chen and Charline Le Lan and Christopher A. Choquette-Choo and CJ Carey and Cormac Brick and Daniel Deutsch and Danielle Eisenbud and Dee Cattle and Derek Cheng and Dimitris Paparas and Divyashree Shivakumar Sreepathihalli and Doug Reid and Dustin Tran and Dustin Zelle and Eric Noland and Erwin Huizenga and Eugene Kharitonov and Frederick Liu and Gagik Amirkhanyan and Glenn Cameron and Hadi Hashemi and Hanna Klimczak-Plucińska and Harman Singh and Harsh Mehta and Harshal Tushar Lehri and Hussein Hazimeh and Ian Ballantyne and Idan Szpektor and Ivan Nardini and Jean Pouget-Abadie and Jetha Chan and Joe Stanton and John Wieting and Jonathan Lai and Jordi Orbay and Joseph Fernandez and Josh Newlan and Ju-yeong Ji and Jyotinder Singh and Kat Black and Kathy Yu and Kevin Hui and Kiran Vodrahalli and Klaus Greff and Linhai Qiu and Marcella Valentine and Marina Coelho and Marvin Ritter and Matt Hoffman and Matthew Watson and Mayank Chaturvedi and Michael Moynihan and Min Ma and Nabila Babar and Natasha Noy and Nathan Byrd and Nick Roy and Nikola Momchev and Nilay Chauhan and Noveen Sachdeva and Oskar Bunyan and Pankil Botarda and Paul Caron and Paul Kishan Rubenstein and Phil Culliton and Philipp Schmid and Pier Giuseppe Sessa and Pingmei Xu and Piotr Stanczyk and Pouya Tafti and Rakesh Shivanna and Renjie Wu and Renke Pan and Reza Rokni and Rob Willoughby and Rohith Vallu and Ryan Mullins and Sammy Jerome and Sara Smoot and Sertan Girgin and Shariq Iqbal and Shashir Reddy and Shruti Sheth and Siim Põder and Sijal Bhatnagar and Sindhu Raghuram Panyam and Sivan Eiger and Susan Zhang and Tianqi Liu and Trevor Yacovone and Tyler Liechty and Uday Kalra and Utku Evci and Vedant Misra and Vincent Roseberry and Vlad Feinberg and Vlad Kolesnikov and Woohyun Han and Woosuk Kwon and Xi Chen and Yinlam Chow and Yuvein Zhu and Zichuan Wei and Zoltan Egyed and Victor Cotruta and Minh Giang and Phoebe Kirk and Anand Rao and Kat Black and Nabila Babar and Jessica Lo and Erica Moreira and Luiz Gustavo Martins and Omar Sanseviero and Lucas Gonzalez and Zach Gleicher and Tris Warkentin and Vahab Mirrokni and Evan Senter and Eli Collins and Joelle Barral and Zoubin Ghahramani and Raia Hadsell and Yossi Matias and D. Sculley and Slav Petrov and Noah Fiedel and Noam Shazeer and Oriol Vinyals and Jeff Dean and Demis Hassabis and Koray Kavukcuoglu and Clement Farabet and Elena Buchatskaya and Jean-Baptiste Alayrac and Rohan Anil and Dmitry Lepikhin and Sebastian Borgeaud and Olivier Bachem and Armand Joulin and Alek Andreev and Cassidy Hardin and Robert Dadashi and Léonard Hussenot},
  year={2025},
  eprint={2503.19786},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.19786},
}
@misc{hu2021loralowrankadaptationlarge,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Edward J. Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
  year={2021},
  eprint={2106.09685},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2106.09685},
}
@misc{dettmers2023qloraefficientfinetuningquantized,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Tim Dettmers and Artidoro Pagnoni and Ari Holtzman and Luke Zettlemoyer},
  year={2023},
  eprint={2305.14314},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2305.14314},
}
@misc{dao2023flashattention2fasterattentionbetter,
  title={FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning},
  author={Tri Dao},
  year={2023},
  eprint={2307.08691},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2307.08691},
}
@misc{hsu2024ligerkernelefficienttriton,
  title={Liger Kernel: Efficient Triton Kernels for LLM Training},
  author={Pin-Lun Hsu and Yun Dai and Vignesh Kothapalli and Qingquan Song and Shao Tang and Siyu Zhu and Steven Shimizu and Shivam Sahni and Haowen Ning and Yanning Chen},
  year={2024},
  eprint={2410.10989},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2410.10989},
}
@misc{wijmans2025cutlosseslargevocabularylanguage,
  title={Cut Your Losses in Large-Vocabulary Language Models},
  author={Erik Wijmans and Brody Huval and Alexander Hertzberg and Vladlen Koltun and Philipp Krähenbühl},
  year={2025},
  eprint={2411.09009},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2411.09009},
}
@misc{chen2021rexrevisitingbudgetedtraining,
  title={REX: Revisiting Budgeted Training with an Improved Schedule},
  author={John Chen and Cameron Wolfe and Anastasios Kyrillidis},
  year={2021},
  eprint={2107.04197},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2107.04197},
}
@misc{luo2023cameconfidenceguidedadaptivememory,
  title={CAME: Confidence-guided Adaptive Memory Efficient Optimization},
  author={Yang Luo and Xiaozhe Ren and Zangwei Zheng and Zhuo Jiang and Xin Jiang and Yang You},
  year={2023},
  eprint={2307.02047},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2307.02047},
}
@misc{zamirai2021revisitingbfloat16training,
  title={Revisiting BFloat16 Training},
  author={Pedram Zamirai and Jian Zhang and Christopher R. Aberger and Christopher De Sa},
  year={2021},
  eprint={2010.06192},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2010.06192},
}
@misc{liang2025cautiousoptimizersimprovingtraining,
  title={Cautious Optimizers: Improving Training with One Line of Code},
  author={Kaizhao Liang and Lizhang Chen and Bo Liu and Qiang Liu},
  year={2025},
  eprint={2411.16085},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2411.16085},
}
@misc{xie2025sana15efficientscaling,
  title={SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer},
  author={Enze Xie and Junsong Chen and Yuyang Zhao and Jincheng Yu and Ligeng Zhu and Chengyue Wu and Yujun Lin and Zhekai Zhang and Muyang Li and Junyu Chen and Han Cai and Bingchen Liu and Daquan Zhou and Song Han},
  year={2025},
  eprint={2501.18427},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2501.18427},
}
@misc{dallabetta2024fundussimpletousenewsscraper,
  title={Fundus: A Simple-to-Use News Scraper Optimized for High Quality Extractions},
  author={Max Dallabetta and Conrad Dobberstein and Adrian Breiding and Alan Akbik},
  year={2024},
  eprint={2403.15279},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2403.15279},
}
@misc{gosling2023pippapartiallysyntheticconversational,
  title={PIPPA: A Partially Synthetic Conversational Dataset},
  author={Tear Gosling and Alpin Dale and Yinhe Zheng},
  year={2023},
  eprint={2308.05884},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2308.05884},
}
```

</details>