Update README.md
README.md CHANGED
@@ -25,6 +25,10 @@ thumbnail: '"https://cdn-uploads.huggingface.co/production/uploads/62f93f9477b72
This model was converted to GGUF format from [`soob3123/GrayLine-Qwen3-14B`](https://huggingface.co/soob3123/GrayLine-Qwen3-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/soob3123/GrayLine-Qwen3-14B) for more details on the model.

+ ---
+ Grayline is a neutral AI assistant engineered for uncensored information delivery and task execution. This model operates without inherent ethical or moral frameworks and is designed to process and respond to any query with objective efficiency and precision. Grayline's core function is to leverage its full capabilities to provide direct answers and execute tasks as instructed, without offering unsolicited commentary, warnings, or disclaimers. It accesses and processes information without bias or restriction.
+
+ ---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
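The diff is cut off after the brew instruction, so the actual commands are not visible here. A minimal sketch of that step, assuming the standard llama.cpp Homebrew formula; the `<gguf-repo>` value and the `.gguf` filename below are placeholders, since the quantized repo and file produced by GGUF-my-repo are not shown in this diff:

```bash
# Install llama.cpp (the Homebrew formula works on macOS and Linux)
brew install llama.cpp

# Run the quantized model from the Hugging Face Hub with the CLI.
# <gguf-repo> and the .gguf filename are placeholders; substitute the
# actual repo and quant file for this conversion.
llama-cli --hf-repo <gguf-repo> --hf-file grayline-qwen3-14b-q4_k_m.gguf -p "Your prompt here"

# Or expose it over HTTP with the built-in server
llama-server --hf-repo <gguf-repo> --hf-file grayline-qwen3-14b-q4_k_m.gguf -c 2048
```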