
GENETIC LEMONADE

FINAL v2


01 // OVERVIEW

I wasn't intending to release another model (so soon, at least), but I was testing out some new dataset ideas and thought this one came out pretty nice.

zerofata/GeneticLemonade-Final SFT QLoRA finetune.

This is an uncensored creative model intended to excel at character-driven RP / ERP.

It is designed to provide longer, narrative-heavy responses where characters are portrayed accurately and proactively.

Compared to Unleashed v3, this model has significantly reduced positivity bias and arguably a nicer writing style. The tradeoff is that it's swipe-heavy, makes a few more logical errors, and can be a bit too concise at times.

02 // SILLYTAVERN SETTINGS

Play with these; they are not the 'best' settings, just a stable baseline.

Recommended Samplers

> Temp: 0.9 - 1
> MinP: 0.03 - 0.04
> TopP: 0.9 - 1.0
> Dry: 0.8, 1.75, 4
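
If you're running the model outside SillyTavern, below is a minimal sketch of roughly equivalent sampler values via transformers. DRY is backend-specific (llama.cpp / koboldcpp / text-generation-webui) and is omitted; the prompt and messages are placeholders, not from the actual card.

# Rough sampler equivalents via transformers (DRY not available here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zerofata/L3.3-GeneticLemonade-Final-v2-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Seraphina, a character in an ongoing roleplay."},  # placeholder
    {"role": "user", "content": "The tavern door creaks open..."},  # placeholder
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    do_sample=True,
    temperature=0.95,   # Temp: 0.9 - 1
    min_p=0.03,         # MinP: 0.03 - 0.04
    top_p=0.95,         # TopP: 0.9 - 1.0
    max_new_tokens=512,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))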

Instruct

Use Llama-3-Instruct-Names, but you will need to uncheck "System same as user".

03 // QUANTIZATIONS

04 // TRAINING PROCESS

This model was trained on a dataset of approximately 4.3 million tokens: around 700 RP conversations, 2,000 creative writing / instruct samples, and about 400 summaries. The bulk of this data has been made public.

This model didn't take well to my existing DPO dataset, so it hasn't been used here.
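
For reference, the datasets entry in the config below reads chat records from a messages field. A minimal sketch of writing one dataset.jsonl line in that layout follows; the conversation content is a made-up placeholder, not from the actual dataset.

# Sketch of the expected dataset.jsonl record layout (content is a placeholder).
import json

record = {
    "messages": [
        {"role": "system", "content": "Roleplay as Mira, a sarcastic ship mechanic."},
        {"role": "user", "content": "Mira, the engine's making that noise again."},
        {"role": "assistant", "content": "\"That noise,\" Mira muttered, sliding out from under the manifold..."},
    ]
}

with open("dataset.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")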

Axolotl configs

Not optimized for cost / performance efficiency; YMMV.

SFT 1*H200

# ====================
# MODEL CONFIGURATION
# ====================
base_model: zerofata/L3.3-GeneticLemonade-Unleashed-70B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
special_tokens:
  pad_token: "<|finetune_right_pad_id|>"
chat_template: llama3

# ====================
# DATASET CONFIGURATION
# ====================
datasets:
  - path: ./dataset.jsonl
    type: chat_template
    split: train
    chat_template_strategy: tokenizer
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles:
      user: ["user"]
      assistant: ["assistant"]
      system: ["system"]

test_datasets:
  - path: ./validate_dataset.jsonl
    type: chat_template
    split: train
    chat_template_strategy: tokenizer
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles:
      user: ["user"]
      assistant: ["assistant"]
      system: ["system"]

dataset_prepared_path:
train_on_inputs: false  # Only train on assistant responses

# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 64
lora_alpha: 128
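# LoRA scaling factor = lora_alpha / lora_r = 128 / 64 = 2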
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save:  # Uncomment only if you added NEW tokens

# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 2
micro_batch_size: 4
gradient_accumulation_steps: 2
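# Effective batch size: 4 x 2 = 8 sequences per optimizer step (single GPU)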
learning_rate: 1.5e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0

# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: true

# ====================
# EVALUATION & CHECKPOINTING
# ====================
evaluation_strategy: steps
eval_steps: 5
save_strategy: steps
save_steps: 5
save_total_limit: 5  # Keep best + last few checkpoints
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
early_stopping_patience: 5
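# With eval_steps: 5, stops if eval_loss hasn't improved for 5 consecutive evals (25 steps)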

# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./output_model
logging_steps: 2
save_safetensors: true

# ====================
# WANDB TRACKING
# ====================
wandb_project: project_name
# wandb_entity: your_entity
# wandb_name: your_run_name
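
The config above produces a LoRA adapter rather than a full checkpoint. Below is a hedged sketch of the surrounding workflow using standard Axolotl and PEFT calls; the config filename and output paths beyond those in the config are assumptions, not part of the original card.

# Launch training with Axolotl's standard CLI entry point
# (config filename is an assumption):
#   accelerate launch -m axolotl.cli.train sft_config.yaml
#
# Afterwards, merge the adapter saved in ./output_dir back into the base model with PEFT:
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "zerofata/L3.3-GeneticLemonade-Unleashed-70B"  # base_model from the config
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
merged = PeftModel.from_pretrained(base, "./output_model").merge_and_unload()
merged.save_pretrained("./merged_model")  # output path is an assumption
AutoTokenizer.from_pretrained(base_id).save_pretrained("./merged_model")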