# Model Card for spankevich/llm-course-hw3-lora
This LoRA adapter was used to fine-tune OuteAI/Lite-Oute-1-300M-Instruct for tweet tone classification. The base model achieved an F1-score of 0.08, while the fine-tuned version reached an F1-score of 0.53 after less than 8 minutes of fine-tuning on a single A100.
## Parameters
LoRA was used with r=8 and alpha=16, applied to the "k_proj" and "v_proj" attention projections.
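A minimal sketch of the corresponding PEFT configuration is shown below; only the rank, alpha, and target modules are stated above, so the base-model loading code and the task type are assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model (assumed to be used as a causal LM)
base_model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")

# LoRA configuration matching the parameters above
lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    target_modules=["k_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",                # assumption: causal-LM task type
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```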
## Training parameters
```python
BATCH_SIZE = 128
LEARNING_RATE = 1e-3
NUM_EPOCHS = 1
```
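These hyperparameters could be mapped onto a Hugging Face Trainer setup roughly as follows; this is a hypothetical sketch, and the output directory name is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical training arguments mirroring the hyperparameters above
training_args = TrainingArguments(
    output_dir="lite-oute-lora-tweet-tone",  # assumed output directory
    per_device_train_batch_size=128,         # BATCH_SIZE
    learning_rate=1e-3,                      # LEARNING_RATE
    num_train_epochs=1,                      # NUM_EPOCHS
)
```

Together with the PEFT-wrapped model and a tokenized tweet dataset, these arguments would reproduce the batch size, learning rate, and epoch count listed above.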
## Metrics
The F1-score is 0.53 on the test set.
## Examples
Tweet: Sorry bout the stream last night I crashed out but will be on tonight for sure. Then back to Minecraft in pc tomorrow night.
Label: neutral. Prediction: neutral.

Tweet: Chase Headley's RBI double in the 8th inning off David Price snapped a Yankees streak of 33 consecutive scoreless innings against Blue Jays
Label: neutral. Prediction: neutral.

Tweet: @user Alciato: Bee will invest 150 million in January, another 200 in the Summer and plans to bring Messi by 2017"
Label: positive. Prediction: neutral.
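A hypothetical inference sketch for producing predictions like those above, assuming the tokenizer provides a chat template; the prompt wording and generation settings are assumptions.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")
model = PeftModel.from_pretrained(base, "spankevich/llm-course-hw3-lora")

tweet = "Chase Headley's RBI double in the 8th inning off David Price snapped a Yankees streak of 33 consecutive scoreless innings against Blue Jays"
messages = [{"role": "user", "content": f"Classify the tone of this tweet as positive, neutral, or negative: {tweet}"}]

# Build the prompt with the model's chat template and generate a short answer
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```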