See axolotl config
axolotl version: 0.10.0.dev0
base_model: Heralax/datagen-pretrain-v1-7b-mistralv0.2
tokenizer_type: AutoTokenizer
model_type: AutoModelForCausalLM
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: 29_mil_asstr.jsonl
    ds_type: json
    type: completion
  - path: 40mil_gutenberg.jsonl
    type: completion
  - path: hle-1_formatted_2mil.jsonl
    type: completion
  - path: 11_mil_fineweb.jsonl
    type: completion
  - path: multiturn_segments_shard_01.json
    type: input_output
  - path: multiturn_segments_shard_02.json
    type: input_output
  - path: singleturn_segments_shard_01.json
    type: input_output
  - path: singleturn_segments_shard_02.json
    type: input_output
  - path: openhermes2_5_shard_01.json
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - human
      assistant:
        - gpt
      system:
        - system
  - path: openhermes2_5_shard_02.json
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - human
      assistant:
        - gpt
      system:
        - system
  - path: openthoughts-1.parquet
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - user
      assistant:
        - assistant
      system:
        - system
  - path: openthoughts-2.parquet
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - user
      assistant:
        - assistant
      system:
        - system
  - path: qwq_10million.jsonl
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - human
      assistant:
        - gpt
      system:
        - system
  - path: bluemoon-6mil.json
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - human
      assistant:
        - gpt
      system:
        - system
dataset_prepared_path: last_run_prepared
output_dir: ./datagen-pretrain-v1-7b-mistralv0.2
seed: 11037
hub_model_id: datagen-sft-1
hub_strategy: every_save
sequence_len: 20000
sample_packing: true
pad_to_sequence_len: false
shuffle_merged_datasets: true
wandb_project: datagen-pretrain-v1-7b-mistralv0.2
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 50
micro_batch_size: 3
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: constant
learning_rate: 0.000020
weight_decay: 0
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention: false # faster
flash_attention: true # slower than xformers
chat_template: chatml
# warmup_ratio: 0.5
# warmup_steps: 0
auto_resume_from_checkpoints: false
warmup_ratio: 0.1
evals_per_epoch: 1
eval_batch_size: 4
val_set_size: 0.01
save_steps: 1000
eval_sample_packing: false
save_total_limit: 2 # NOTE: you can afford many more saves with this config, since these checkpoints don't store optimizer states the way normal ones do, I think.
debug:
special_tokens:
pad_token: "<unk>"
use_liger_kernel: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
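
For reference, here is a minimal sketch (not part of the original card) of what a single record in each of the three dataset formats referenced above could look like. The field names follow the mappings in the config (a plain text field for completion data, segments with label/text pairs for input_output data, and ShareGPT-style conversations with from/value keys for chat_template data); the contents themselves are invented placeholders.

```python
import json

# Illustrative records only: the text is made up, but the field names mirror the
# dataset mappings declared in the config above.

# "completion" datasets (e.g. 29_mil_asstr.jsonl): raw text for continued pretraining.
completion_record = {"text": "Raw document text that the model trains on end to end."}

# "input_output" datasets (the *_segments_* shards): axolotl's template-free format,
# where each segment is either masked out (label: false) or trained on (label: true).
input_output_record = {
    "segments": [
        {"label": False, "text": "<|im_start|>user\nPlaceholder prompt text.<|im_end|>\n"},
        {"label": True, "text": "<|im_start|>assistant\nPlaceholder response text.<|im_end|>\n"},
    ]
}

# "chat_template" datasets: ShareGPT-style conversations keyed by from/value. This example
# uses the human/gpt role names from the OpenHermes-style shards (the OpenThoughts shards
# use user/assistant instead, as mapped in the config).
chat_template_record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "human", "value": "Placeholder user turn."},
        {"from": "gpt", "value": "Placeholder assistant turn."},
    ]
}

for record in (completion_record, input_output_record, chat_template_record):
    print(json.dumps(record))  # one JSON object per line, as in a .jsonl shard
```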
datagen-sft-1
This model is a fine-tuned version of Heralax/datagen-pretrain-v1-7b-mistralv0.2 on the following datasets: 29_mil_asstr.jsonl, 40mil_gutenberg.jsonl, hle-1_formatted_2mil.jsonl, 11_mil_fineweb.jsonl, multiturn_segments_shard_01.json, multiturn_segments_shard_02.json, singleturn_segments_shard_01.json, singleturn_segments_shard_02.json, openhermes2_5_shard_01.json, openhermes2_5_shard_02.json, openthoughts-1.parquet, openthoughts-2.parquet, qwq_10million.jsonl, and bluemoon-6mil.json. It achieves the following results on the evaluation set:
- Loss: 0.6304
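
Since the fine-tuning data was rendered with the ChatML template (chat_template: chatml in the config above), the most direct way to try the checkpoint is through the tokenizer's chat template. A rough usage sketch, assuming the published repo id Heralax/Augmentoolkit-DataSpecialist-v0.1 and that the saved tokenizer carries the ChatML template used during training:

```python
# Rough inference sketch (not from the original card); assumes the tokenizer ships
# the ChatML chat template that was applied during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Heralax/Augmentoolkit-DataSpecialist-v0.1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a data-generation assistant."},
    {"role": "user", "content": "Write one question-and-answer pair about photosynthesis."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```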
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 4
- seed: 11037
- gradient_accumulation_steps: 50
- total_train_batch_size: 150
- optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 111
- num_epochs: 2.0
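
The derived values above follow from the config: the total train batch size is micro_batch_size × gradient_accumulation_steps × number of GPUs (which appears to have been 1 here, since 3 × 50 already gives the reported 150), and the warmup length is roughly warmup_ratio × total optimizer steps (0.1 × 1116 ≈ 111). A small sketch of that arithmetic:

```python
micro_batch_size = 3
gradient_accumulation_steps = 50
num_gpus = 1  # assumption: a single-GPU run, since 3 * 50 already equals the reported 150

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(total_train_batch_size)  # 150, matching the reported total_train_batch_size

total_optimizer_steps = 1116  # final step in the training-results table below
warmup_ratio = 0.1
print(warmup_ratio * total_optimizer_steps)  # 111.6, i.e. roughly the 111 reported warmup steps
```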
Training results
| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 1.4533        | 0.0018 | 1    | 2.4612          |
| 0.5531        | 0.9999 | 558  | 0.6706          |
| 0.5148        | 1.9981 | 1116 | 0.6304          |
Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
Model tree for Heralax/Augmentoolkit-DataSpecialist-v0.1
Base model: Heralax/datagen-pretrain-v1-7b-mistralv0.2