Brianpuz committed (verified)
Commit f817a51 · Parent: 845f036

Fix the bug where the GGML version number was hardcoded

Files changed (1): app.py (+1, -1)
app.py CHANGED
@@ -55,7 +55,7 @@ def get_llama_cpp_notes(
     *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*
 
     ## llama.cpp quantization
-    Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/{version}">b4944</a> for quantization.
+    Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/{version}">{version}</a> for quantization.
     Original model: https://huggingface.co/{model_id}
     Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp based project
     ## Prompt format
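
The change is a one-line fix inside a string template: the anchor's href already interpolated `{version}`, but the visible link text was hardcoded to `b4944`, so the notes could display a stale release tag. Below is a minimal sketch of how such a template might be rendered; the function name `get_llama_cpp_notes` comes from the hunk header, while its signature, the `model_id`/`version` parameters, and the example values are assumptions for illustration only.

```python
# Minimal sketch, assuming the model-card notes are built with str.format().
# get_llama_cpp_notes is named in the hunk header; its real signature and the
# rest of app.py are not shown in this commit, so everything else here is
# illustrative.

def get_llama_cpp_notes(model_id: str, version: str) -> str:
    template = (
        "## llama.cpp quantization\n"
        'Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release '
        '<a href="https://github.com/ggml-org/llama.cpp/releases/tag/{version}">{version}</a> '
        "for quantization.\n"
        "Original model: https://huggingface.co/{model_id}\n"
    )
    # Before this commit the link text was the literal "b4944", so the href and
    # the displayed tag could disagree; now both come from {version}.
    return template.format(version=version, model_id=model_id)


# Hypothetical usage with placeholder values:
print(get_llama_cpp_notes("some-org/some-model", "b4944"))
```

Because the same `{version}` placeholder now feeds both the href and the link text, the two can no longer drift apart when the quantization pipeline moves to a newer llama.cpp release.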