running model in ollama is not supported.
#15 opened about 1 month ago by humble92
llama_cpp_python: gguf_init_from_file_impl: failed to read tensor info
#14 opened about 1 month ago by miscw

Kind of a strange response: GGGGGGGGGGGG...
#13 opened about 1 month ago by avicohen
Run with 400MB
#12 opened about 1 month ago by Dinuraj
How to easily run on Windows OS?
#11 opened about 1 month ago by lbarasc
Chat template issue
#8 opened about 2 months ago by tdh111
TQ1 quant version
#7 opened about 2 months ago by TobDeBer
Does not work in LM Studio
#6 opened about 2 months ago by mailxp
Chinese Ha is not supported
#3 opened about 2 months ago by digmouse100
gguf not llama.cpp compatible yet
#2 opened about 2 months ago by lefromage
Update README.md
#1 opened about 2 months ago by bullerwins
