Training hyperparameters (a LLaMA-Factory DPO/LoRA configuration; `pref_beta` is the DPO β, and the effective batch size is 2 × 8 = 16 per device):

```yaml
lora_rank: 32
pref_beta: 0.1
cutoff_len: 2048
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 5.0e-6
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
```
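In LLaMA-Factory, hyperparameters like these live in a YAML file passed to `llamafactory-cli train <config>.yaml`; the full file also names the base model, dataset, and `stage: dpo`, which this excerpt omits. To show what `pref_beta` controls, here is a minimal sketch of the sigmoid DPO loss, with β scaling the policy/reference log-ratios. This is an illustration of the objective, not LLaMA-Factory's internal code:

```python
# Illustrative sigmoid DPO loss; not LLaMA-Factory's implementation.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over per-sequence log-probs (summed over response tokens)."""
    # Implicit rewards: beta-scaled log-ratios of policy vs. frozen reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # -log sigmoid of the reward margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

β = 0.1 is the default from the DPO paper; larger values penalize divergence from the reference model more strongly.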

Model size: 473M parameters (BF16, Safetensors).
This model is not currently deployed by any inference provider.
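Since there is no hosted endpoint, here is a minimal local-inference sketch with transformers. The model ID and BF16 dtype come from this card; the prompt, generation settings, and the assumption that the tokenizer ships a chat template are illustrative:

```python
# Minimal local-inference sketch; prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lblaoke/qwama-0.5b-skywork-pref-dpo-llama-factory-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Assumes the tokenizer provides a chat template.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```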
