Metrics
| PPL | arc_easy | arc_challenge | piqa | winogrande | hellaswag | mmlu | QA Avg |
|---|---|---|---|---|---|---|---|
| 7.87 | 67.09 ± 0.96 | 33.02 ± 1.37 | 74.05 ± 1.02 | 61.64 ± 1.37 | 48.79 ± 0.50 | - | 56.92 |
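The QA columns look like accuracies (in %) with standard errors in the style of EleutherAI's lm-evaluation-harness. The sketch below is one way such numbers might be reproduced; the harness version, few-shot setting, batch size, and evaluation dtype are assumptions, not documented settings.

```python
# Sketch: evaluating the checkpoint on the QA tasks with lm-evaluation-harness
# (lm_eval >= 0.4 assumed). Few-shot count, batch size, and dtype are guesses;
# the card does not state the exact evaluation configuration.
import lm_eval
from lm_eval.models.huggingface import HFLM

lm = HFLM(
    pretrained="BrownianNotion/Llama-2-7b-hf_2bit_int",
    dtype="float16",  # assumption: half-precision evaluation
    batch_size=8,     # assumption
)

results = lm_eval.simple_evaluate(
    model=lm,
    tasks=["arc_easy", "arc_challenge", "piqa", "winogrande", "hellaswag"],
    num_fewshot=0,    # assumption: zero-shot
)

# Print the per-task metric dictionaries (accuracy and standard error).
for task, metrics in results["results"].items():
    print(task, metrics)
```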
The training method is based on the BitDistiller paper.
- License: MIT
- Finetuned from: TinyLlama/TinyLlama_v1.1
Model tree for BrownianNotion/Llama-2-7b-hf_2bit_int
- Base model: meta-llama/Llama-2-7b-hf
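A minimal loading sketch, assuming the repository stores a standard transformers-compatible checkpoint. If the 2-bit weights are packed in a custom integer format, a dedicated unpacking/dequantization step from the BitDistiller codebase would be needed instead; the prompt and generation settings below are only illustrative.

```python
# Minimal sketch: load the checkpoint with transformers and generate a few tokens.
# Assumption: the repo contains a standard Hugging Face causal LM; if the 2-bit
# weights use a custom packed format, this will not work as-is.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "BrownianNotion/Llama-2-7b-hf_2bit_int"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # assumption: half precision
    device_map="auto",
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```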