Clarification needed: credit and profile placement for my merged GGUF q4_k_m quantized huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated model on Ollama (#6, opened 16 days ago by jordivcb)
Can't be run (#5, opened 5 months ago by qe2)
Quantized Version (#4, opened 5 months ago by bobwu)
Can we have Qwen2.5-vl-72b-abliterated? (#2, opened 5 months ago by chibop)
Can you provide a GGUF model usable with Ollama locally? (#1, opened 6 months ago by ryg81)