---
license: other
license_name: llama4
license_link: LICENSE
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
pipeline_tag: image-text-to-text
tags:
- gguf-connector
---

## llama-4-scout-17b-16e-instruct-gguf
- base model from meta-llama: [Llama-4-Scout-17B-16E-Instruct](https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct)
- tested on [gguf-connector](https://pypi.org/project/gguf-connector) with a nightly [llama-cpp-python](https://github.com/calcuis/llama-cpp-python/releases) build
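
A minimal setup sketch: the connector installs from PyPI, and the nightly llama-cpp-python build is distributed as wheels on the releases page linked above (the wheel filename varies by platform, so it is not spelled out here):

```bash
# install the connector from PyPI
pip install gguf-connector
# then install a nightly llama-cpp-python wheel downloaded
# from the releases page linked above, e.g.:
# pip install ./llama_cpp_python-<version>-<platform>.whl
```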

## example workflow (run it locally)
- download every part of the chosen quantization; for example, q2_k ships in four shards:
- `llama-4-scout-17b-16e-it-q2_k-00001-of-00004.gguf`
- `llama-4-scout-17b-16e-it-q2_k-00002-of-00004.gguf`
- `llama-4-scout-17b-16e-it-q2_k-00003-of-00004.gguf`
- `llama-4-scout-17b-16e-it-q2_k-00004-of-00004.gguf`
- put them all into an empty folder, then run the merge command: `ggc m2`
- the merged gguf is around 36.8GB for q2_k (a one-time setup step)
- run the connector command: `ggc gpp` (or `ggc cpp` for the ui)
- select the merged gguf, then start prompting to interact with llama4 (the whole sequence is scripted below)
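
A sketch of the whole sequence as a shell session. The repo id below is an assumption (substitute whichever repository actually hosts the shards), and `ggc m2` / `ggc gpp` are interactive, so they prompt for file selection:

```bash
# assumes the huggingface-cli downloader is available
pip install -U "huggingface_hub[cli]"

# work in an empty folder so the merge picks up only these shards
mkdir llama4-q2k && cd llama4-q2k

# fetch the four q2_k parts (repo id is hypothetical -- replace with the real one)
for i in 00001 00002 00003 00004; do
  huggingface-cli download calcuis/llama-4-scout-17b-16e-instruct-gguf \
    "llama-4-scout-17b-16e-it-q2_k-${i}-of-00004.gguf" --local-dir .
done

ggc m2    # merge the shards into one ~36.8GB gguf (one-time step)
ggc gpp   # launch the connector; select the merged gguf, then prompt away
```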

## for models larger than 50GB in total
- no merge needed (the parts are already linked); just run `ggc gpp` (or `ggc cpp` for the ui)
- select the first part of the model (i.e., 00001-of-xxxxx)
- start prompting to interact with llama4 (see the sketch below)
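
The no-merge path, as a short sketch; the shard name shown is illustrative only:

```bash
# parts totalling more than 50GB ship pre-linked, so the loader
# follows 00002-of-xxxxx onward automatically from the first shard
ggc gpp   # or: ggc cpp for the ui
# when prompted, select the first part, e.g.
#   llama-4-scout-17b-16e-it-<quant>-00001-of-xxxxx.gguf  (illustrative name)
```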