The following `llama-quantize` invocation produces a mixed-precision GGUF: the model is quantized to Q8_0 overall, while the MoE expert FFN tensors (`ffn_down_exps`, `ffn_gate_exps`, `ffn_up_exps`) are overridden to Q4_0:

llama-quantize --tensor-type ffn_down_exps=q4_0 --tensor-type ffn_gate_exps=q4_0 --tensor-type ffn_up_exps=q4_0 Llama-4-Scout-17B-16E-Instruct.gguf Llama-4-Scout-17B-16E-Instruct-GGUF-Q8_0-EXP-Q4_0.gguf q8_0
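Because the expert FFN tensors hold the bulk of a 16-expert MoE's parameters, dropping just those to Q4_0 recovers most of the size savings of a full Q4_0 quant while keeping the rest of the model at Q8_0. A minimal sketch of the size arithmetic, using llama.cpp's block layouts (Q4_0 packs 32 weights into 18 bytes, ~4.5 bits/weight; Q8_0 packs 32 weights into 34 bytes, ~8.5 bits/weight); the 85% expert-parameter fraction below is an illustrative assumption, not a measured figure:

```python
# Rough file-size estimate for a mixed-precision GGUF quantization.
# Block sizes from llama.cpp: Q4_0 = 18 bytes / 32 weights, Q8_0 = 34 bytes / 32 weights.
Q4_0_BPW = 18 * 8 / 32   # ~4.5 bits per weight
Q8_0_BPW = 34 * 8 / 32   # ~8.5 bits per weight

def estimate_gib(total_params: float, expert_fraction: float) -> float:
    """Estimate file size in GiB when the expert FFN tensors use Q4_0
    and all remaining tensors use Q8_0 (ignores metadata overhead)."""
    bits = total_params * (expert_fraction * Q4_0_BPW
                           + (1 - expert_fraction) * Q8_0_BPW)
    return bits / 8 / 2**30

# Hypothetical split: assume ~85% of the 108B params sit in expert FFN tensors.
print(f"~{estimate_gib(108e9, 0.85):.1f} GiB")
```

Under these assumptions the mixed quant lands far closer to a pure Q4_0 file than to a pure Q8_0 one, which is the point of targeting only the `*_exps` tensors.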
Format: GGUF
Model size: 108B params
Architecture: llama4