βš–οΈ License and Usage

This repository contains quantized GGUF variants of CodeGemma 7B IT (codegemma-7b-it), a code-focused language model developed by Google.

Terms of Use

These quantized models are:

  • Provided under the same terms as the original Google Gemma models.
  • Intended only for non-commercial use, research, and experimentation.
  • Redistributed without modification to the underlying model weights, except for format (GGUF) and quantization level.

By using this repository or its contents, you agree to:

  • Comply with the Gemma License Terms,
  • Not use the model or its derivatives for any commercial purposes without a separate license from Google,
  • Acknowledge Google as the original model creator.

πŸ“’ Disclaimer: This repository is not affiliated with Google.


πŸ“¦ Model Downloads

All quantized model files are hosted externally for convenience. You can download them from:

πŸ‘‰ https://modelbakery.nincs.net/c516a

πŸ‘‰ git clone https://modelbakery.nincs.net/c516a/quantized-codegemma-7b-it.git

File list

Each .gguf file has a companion .txt file that contains its direct download URL.

Example:

  • codegemma-7b-it.Q4_K_M.gguf (binary file)
  • codegemma-7b-it.Q4_K_M.gguf.txt β†’ contains:
Download: https://modelbakery.nincs.net/c516a/projects/quantized-codegemma-7b-it/codegemma-7b-it.Q4_K_M.gguf
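A minimal sketch for fetching one file via its .txt pointer (filename taken from the example above; the curl line is left commented out because the binaries are large, and the exact flags may need adjusting for your environment):

```shell
# Read the companion .txt file and pull out the download URL.
# Pick the quantization variant you actually want.
TXT="codegemma-7b-it.Q4_K_M.gguf.txt"
URL="$(grep -oE 'https://[^[:space:]]+' "$TXT")"
echo "Downloading: $URL"
# curl -L -O "$URL"   # uncomment to start the actual download
```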

πŸ“˜ Notes

These models were quantized locally using llama.cpp and tested on an RTX 3050 / 5950X / 64 GB RAM setup.
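Because GGUF files begin with the 4-byte magic "GGUF", a quick check can catch a truncated download before you try to load it. A sketch (the filename matches the example above; the llama-cli path is an assumption about where your llama.cpp build lives):

```shell
# Integrity check: every valid GGUF file starts with the magic bytes "GGUF".
MODEL="codegemma-7b-it.Q4_K_M.gguf"
if [ "$(head -c 4 "$MODEL" 2>/dev/null)" = "GGUF" ]; then
    echo "GGUF magic found, file looks intact"
else
    echo "GGUF magic missing, download may be truncated" >&2
fi
# Once verified, run it with llama.cpp, for example:
# ./llama-cli -m "$MODEL" -p "Write a function that reverses a string."
```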

If you find them useful, feel free to star the project or fork it to share improvements!

πŸ“₯ Model Files

Model weights are not stored directly in this repository due to size constraints.

Instead, each .txt file in the models/ folder contains a direct download link to the corresponding .gguf model file hosted at:

➑️ https://modelbakery.nincs.net/c516a/projects/quantized-codegemma-7b-it



Available quantization levels: 2-bit, 3-bit, 4-bit, 6-bit, and 8-bit.
