Outlook

We have quantised the model to 2-bit so that it can run inference on low-end GPU cards at scale. This was achieved with the llama.cpp library.
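As a minimal sketch of loading the resulting GGUF with the llama-cpp-python bindings: the glob filename below is an assumption, so check the repo's file listing for the exact quantised file name.

```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Download and load the 2-bit GGUF directly from the Hub.
# The filename glob is assumed; verify it against the repo's files.
llm = Llama.from_pretrained(
    repo_id="sleeping-ai/Intellect-2-TQ2-0",
    filename="*TQ2_0.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows
    n_ctx=4096,       # context window; adjust to your hardware
)

out = llm("Explain 2-bit quantisation in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```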

Format: GGUF
Model size: 32.8B params
Architecture: qwen2
Quantization: 2-bit (TQ2_0)
