c516a committed on
Commit fe39026 · 1 Parent(s): 4bf7a9c

Add download instructions for GGUF files hosted externally
README.md CHANGED
@@ -1,3 +1,80 @@
  ---
  license: gemma
  ---
+ ---
+ license: gemma
+ tags:
+ - gguf
+ - text-generation
+ - gemma
+ - quantized
+ model_type: llama
+ quantized_by: c516a
+ base_model: google/codegemma-7b-it
+ ---
+
+ ## ⚖️ License and Usage
+
+ This repository contains quantized variants of the Gemma language model developed by Google.
+
+ * 🧠 **Model source:** [Google / Gemma](https://ai.google.dev/gemma/terms)
+ * 🪄 **Quantized by:** c516a
+
+ ### Terms of Use
+
+ These quantized models are:
+
+ * Provided under the same terms as the original Google Gemma models.
+ * Intended only for **non-commercial use**, **research**, and **experimentation**.
+ * Redistributed without modification to the underlying model weights, apart from the **file format (GGUF)** and **quantization level**.
+
+ By using this repository or its contents, you agree to:
+
+ * Comply with the [Gemma License Terms](https://ai.google.dev/gemma/terms),
+ * Not use the model or its derivatives for any **commercial purpose** without a separate license from Google,
+ * Acknowledge Google as the original model creator.
+
+ > 📢 **Disclaimer:** This repository is not affiliated with Google.
+
+ ---
+
+ ## 📦 Model Downloads
+
+ All quantized model files are hosted externally.
+ You can download them from:
+
+ 👉 **[https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it](https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it)**
+
+ 👉 `git clone https://modelbakery.nincs.net/c516a/quantized-codegemma-7b-it.git`
+
+ ### File list
+
+ Each `.gguf` file has a corresponding `.txt` file that contains the same download URL, for clarity.
+
+ Example:
+
+ * `codegemma-7b-it.Q4_K_M.gguf` (binary file)
+ * `codegemma-7b-it.Q4_K_M.gguf.txt` → contains:
+
+ ```
+ Download: https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it/codegemma-7b-it.Q4_K_M.gguf
+ ```
+
+ ---
+
+ ## 📘 Notes
+
+ These models were quantized locally with `llama.cpp` and tested on a machine with an RTX 3050 GPU, a Ryzen 9 5950X CPU, and 64 GB of RAM.
+
+ If you find them useful, feel free to star the project or fork it to share improvements!
+
+ ## 📥 Model Files
+
+ Model weights are not stored directly in this repository due to size constraints.
+
+ Instead, each `.txt` file in the `models/` folder contains a direct download link to the corresponding `.gguf` model file hosted at:
+
+ ➡️ https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it
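The download-and-run steps described in the README can be sketched as a small shell script. This is a sketch under assumptions: the base URL and file name are the ones listed in the example above, and `wget` plus a locally built llama.cpp `llama-cli` binary are assumed to be available on `PATH`.

```shell
#!/bin/sh
# Base URL and file name as listed in the README's download example.
BASE_URL="https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it"
FILE="codegemma-7b-it.Q4_K_M.gguf"

# Each .gguf has a companion .txt holding the same URL; reconstruct it locally.
URL="$BASE_URL/$FILE"
echo "Download: $URL" > "$FILE.txt"

# Fetch the model; -c resumes a partially completed transfer.
wget -c "$URL"

# Run it with llama.cpp (assumes llama-cli from a local llama.cpp build).
llama-cli -m "$FILE" -p "Write a hello world in Python." -n 128
```

The same URL pattern applies to every quantization level in the file list; only `FILE` changes.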
models/codegemma-7b-it.Q2_K.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q3_K_M.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q3_K_S.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q4_0.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q4_K.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q4_K_M.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q5_K.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q6_K.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.Q8_0.gguf ADDED
@@ -0,0 +1,54 @@
models/codegemma-7b-it.gguf ADDED
@@ -0,0 +1,54 @@