## License and Usage
This repository contains quantized variants of the Gemma language model developed by Google.
* **Model source:** [Google / Gemma](https://ai.google.dev/gemma/terms)
* **Quantized by:** c516a
### Terms of Use
These quantized models are:
* Provided under the same terms as the original Google Gemma models.
* Intended only for **non-commercial use**, **research**, and **experimentation**.
* Redistributed without modification to the underlying model weights, except for **format (GGUF)** and **quantization level**.
By using this repository or its contents, you agree to:
* Comply with the [Gemma License Terms](https://ai.google.dev/gemma/terms),
* Not use the model or its derivatives for any **commercial purposes** without a separate license from Google,
* Acknowledge Google as the original model creator.
> **Disclaimer:** This repository is not affiliated with Google.
---
## Model Downloads
All quantized model files are hosted externally for convenience.
You can download them from:
**[https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it](https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it)**

or clone the repository directly:

```
git clone https://modelbakery.nincs.net/c516a/quantized-codegemma-7b-it.git
```
### File list
Each `.gguf` file has a companion `.txt` file that contains its direct download URL.
Example:
* `codegemma-7b-it.Q4_K_M.gguf` (binary file)
* `codegemma-7b-it.Q4_K_M.gguf.txt` (text file) contains:
```
Download: https://modelbakery.nincs.net/c516a/projects/quantized-codegemma-7b-it/codegemma-7b-it.Q4_K_M.gguf
```
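The companion `.txt` files make scripted downloads straightforward. Below is a minimal sketch of that idea; the function names and the layout (a `.gguf.txt` sitting next to where the `.gguf` should go) are assumptions for illustration, not part of this repository.

```python
"""Sketch: fetch a .gguf model via the URL stored in its companion .txt file.

Assumes each companion file contains a line like
"Download: https://.../model.gguf". Names here are illustrative.
"""
from pathlib import Path
from urllib.request import urlretrieve


def read_download_url(txt_path: Path) -> str:
    """Return the first https:// URL found in the companion .txt file."""
    text = txt_path.read_text(encoding="utf-8")
    for token in text.split():
        if token.startswith("https://"):
            return token
    raise ValueError(f"no download URL found in {txt_path}")


def fetch_model(txt_path: Path) -> Path:
    """Download the .gguf next to its .txt (strip the trailing .txt suffix)."""
    url = read_download_url(txt_path)
    dest = txt_path.with_suffix("")  # foo.Q4_K_M.gguf.txt -> foo.Q4_K_M.gguf
    urlretrieve(url, dest)  # blocking download; fine for a one-off script
    return dest
```

Pointing `fetch_model()` at a downloaded `codegemma-7b-it.Q4_K_M.gguf.txt` would retrieve the matching `.gguf` into the same directory.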
---
## Notes
These models were quantized locally using `llama.cpp` and tested on an RTX 3050 / Ryzen 9 5950X / 64 GB RAM setup.
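For reference, the quantization step can be reproduced with llama.cpp's `llama-quantize` tool (older builds name the binary `quantize`). The sketch below only builds and runs the command line; the file paths are illustrative, and a float16 GGUF export of the model is assumed to exist already.

```python
"""Sketch: invoke llama.cpp's llama-quantize to produce a Q4_K_M file.

Assumes `llama-quantize` is on PATH and a float16 GGUF source exists.
Paths and names are illustrative.
"""
import subprocess


def quantize_cmd(src: str, dst: str, qtype: str = "Q4_K_M") -> list[str]:
    """Build the llama-quantize command line for the given quant type."""
    return ["llama-quantize", src, dst, qtype]


def run_quantize(src: str, dst: str, qtype: str = "Q4_K_M") -> None:
    """Run the quantization, raising if llama-quantize exits non-zero."""
    subprocess.run(quantize_cmd(src, dst, qtype), check=True)
```

For example, `run_quantize("codegemma-7b-it.f16.gguf", "codegemma-7b-it.Q4_K_M.gguf")` would produce the Q4_K_M variant listed above.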
If you find them useful, feel free to star the project or fork it to share improvements!