apepkuss79 committed
Commit c05bd7b · verified · parent: cda052c

Update README.md

Files changed (1): README.md (+97 −97)
README.md CHANGED
@@ -1,98 +1,98 @@
- ---
- license: llama3
- model_name: Llama-3-Taiwan-70B-Instruct
- base_model: yentinglin/Llama-3-Taiwan-70B-Instruct
- inference: false
- pipeline_tag: text-generation
- quantized_by: Second State Inc.
- library_name: transformers
- language:
- - zh
- - en
- tags:
- - zhtw
- ---
-
- <!-- header start -->
- <!-- 200823 -->
- <div style="width: auto; margin-left: auto; margin-right: auto">
- <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
- </div>
- <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
- <!-- header end -->
-
- # Llama-3-Taiwan-70B-Instruct-GGUF
-
- ## Original Model
-
- [meta-llama/Llama-3-Taiwan-70B-Instruct](https://huggingface.co/meta-llama/Llama-3-Taiwan-70B-Instruct)
-
- ## Run with LlamaEdge
-
- - LlamaEdge version: [v0.14.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.1) and above
-
- - Prompt template
-
-   - Prompt type: `llama-3-chat`
-
-   - Prompt string
-
-     ```text
-     <|begin_of_text|><|start_header_id|>system<|end_header_id|>
-
-     {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
-
-     {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-
-     {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
-
-     {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-     ```
-
- - Context size: `8192`
-
- - Run as LlamaEdge service
-
-   ```bash
-   wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf \
-     llama-api-server.wasm \
-     --prompt-template llama-3-chat \
-     --ctx-size 8192 \
-     --model-name Llama-3-70b
-   ```
-
- - Run as LlamaEdge command app
-
-   ```bash
-   wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf \
-     llama-chat.wasm \
-     --prompt-template llama-3-chat \
-     --ctx-size 8192
-   ```
-
- ## Quantized GGUF Models
-
- | Name | Quant method | Bits | Size | Use case |
- | ---- | ---- | ---- | ---- | ----- |
- | [Llama-3-Taiwan-70B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q2_K.gguf) | Q2_K | 2 | 26.4 GB | smallest, significant quality loss - not recommended for most purposes |
- | [Llama-3-Taiwan-70B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 37.1 GB | small, substantial quality loss |
- | [Llama-3-Taiwan-70B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 34.3 GB | very small, high quality loss |
- | [Llama-3-Taiwan-70B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 30.9 GB | very small, high quality loss |
- | [Llama-3-Taiwan-70B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 40.0 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [Llama-3-Taiwan-70B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 42.5 GB | medium, balanced quality - recommended |
- | [Llama-3-Taiwan-70B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 40.3 GB | small, greater quality loss |
- | [Llama-3-Taiwan-70B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 48.7 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 49.9 GB | large, very low quality loss - recommended |
- | [Llama-3-Taiwan-70B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 48.7 GB | large, low quality loss - recommended |
- | [Llama-3-Taiwan-70B-Instruct-Q6_K-00001-of-00002.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q6_K-00001-of-00002.gguf) | Q6_K | 6 | 29.8 GB | very large, extremely low quality loss |
- | [Llama-3-Taiwan-70B-Instruct-Q6_K-00002-of-00002.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q6_K-00002-of-00002.gguf) | Q6_K | 6 | 28.0 GB | very large, extremely low quality loss |
- | [Llama-3-Taiwan-70B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 29.8 GB | very large, extremely low quality loss - not recommended |
- | [Llama-3-Taiwan-70B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 29.8 GB | very large, extremely low quality loss - not recommended |
- | [Llama-3-Taiwan-70B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 15.4 GB | very large, extremely low quality loss - not recommended |
- | [Llama-3-Taiwan-70B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 30.0 GB | |
- | [Llama-3-Taiwan-70B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 29.6 GB | |
- | [Llama-3-Taiwan-70B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 29.9 GB | |
- | [Llama-3-Taiwan-70B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 29.6 GB | |
- | [Llama-3-Taiwan-70B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 22.2 GB | |
-
  *Quantized with llama.cpp b3613.*
 
+ ---
+ license: llama3
+ model_name: Llama-3-Taiwan-70B-Instruct
+ base_model: yentinglin/Llama-3-Taiwan-70B-Instruct
+ inference: false
+ pipeline_tag: text-generation
+ quantized_by: Second State Inc.
+ library_name: transformers
+ language:
+ - zh
+ - en
+ tags:
+ - zhtw
+ ---
+
+ <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
+ <!-- header end -->
+
+ # Llama-3-Taiwan-70B-Instruct-GGUF
+
+ ## Original Model
+
+ [haqishen/Llama-3-Taiwan-70B-Instruct](https://huggingface.co/haqishen/Llama-3-Taiwan-70B-Instruct)
+
+ ## Run with LlamaEdge
+
+ - LlamaEdge version: [v0.14.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.1) and above
+
+ - Prompt template
+
+   - Prompt type: `llama-3-chat`
+
+   - Prompt string
+
+     ```text
+     <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+     {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+     {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+
+     {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+     {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+     ```
+
+ - Context size: `8192`
+
+ - Run as LlamaEdge service
+
+   ```bash
+   wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf \
+     llama-api-server.wasm \
+     --prompt-template llama-3-chat \
+     --ctx-size 8192 \
+     --model-name Llama-3-70b
+   ```
+
+ - Run as LlamaEdge command app
+
+   ```bash
+   wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf \
+     llama-chat.wasm \
+     --prompt-template llama-3-chat \
+     --ctx-size 8192
+   ```
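
The two commands above assume `Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf` is already in the working directory. One possible way to fetch it, assuming the Hugging Face CLI (`pip install -U "huggingface_hub[cli]"`) is installed:

```bash
# Download one quantized file from this repo into the current directory.
huggingface-cli download second-state/Llama-3-Taiwan-70B-Instruct-GGUF \
  Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf \
  --local-dir .
```

Once `llama-api-server.wasm` is running, it exposes an OpenAI-compatible HTTP API, listening on port `8080` by default. A minimal sketch of a chat request follows; the model name matches the `--model-name Llama-3-70b` flag above, and the message contents are placeholders:

```bash
# Send a chat request to the OpenAI-compatible endpoint.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Llama-3-70b",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "請用繁體中文介紹台灣"}
    ]
  }'
```

Note that `--prompt-template llama-3-chat` makes LlamaEdge render the prompt string shown earlier automatically, so hand-assembly is only useful for debugging. For illustration, a single-turn prompt built with `printf` (the system and user strings are placeholders):

```bash
# Illustrative only: LlamaEdge normally builds this prompt itself.
printf '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n%s<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n%s<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' \
  "You are a helpful assistant." "台灣最高的山是哪一座?"
```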
+
+ ## Quantized GGUF Models
+
+ | Name | Quant method | Bits | Size | Use case |
+ | ---- | ---- | ---- | ---- | ----- |
+ | [Llama-3-Taiwan-70B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q2_K.gguf) | Q2_K | 2 | 26.4 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Llama-3-Taiwan-70B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 37.1 GB | small, substantial quality loss |
+ | [Llama-3-Taiwan-70B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 34.3 GB | very small, high quality loss |
+ | [Llama-3-Taiwan-70B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 30.9 GB | very small, high quality loss |
+ | [Llama-3-Taiwan-70B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 40.0 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Llama-3-Taiwan-70B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 42.5 GB | medium, balanced quality - recommended |
+ | [Llama-3-Taiwan-70B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 40.3 GB | small, greater quality loss |
+ | [Llama-3-Taiwan-70B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 48.7 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 49.9 GB | large, very low quality loss - recommended |
+ | [Llama-3-Taiwan-70B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 48.7 GB | large, low quality loss - recommended |
+ | [Llama-3-Taiwan-70B-Instruct-Q6_K-00001-of-00002.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q6_K-00001-of-00002.gguf) | Q6_K | 6 | 29.8 GB | very large, extremely low quality loss |
+ | [Llama-3-Taiwan-70B-Instruct-Q6_K-00002-of-00002.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q6_K-00002-of-00002.gguf) | Q6_K | 6 | 28.0 GB | very large, extremely low quality loss |
+ | [Llama-3-Taiwan-70B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 29.8 GB | very large, extremely low quality loss - not recommended |
+ | [Llama-3-Taiwan-70B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 29.8 GB | very large, extremely low quality loss - not recommended |
+ | [Llama-3-Taiwan-70B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 15.4 GB | very large, extremely low quality loss - not recommended |
+ | [Llama-3-Taiwan-70B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 30.0 GB | |
+ | [Llama-3-Taiwan-70B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 29.6 GB | |
+ | [Llama-3-Taiwan-70B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 29.9 GB | |
+ | [Llama-3-Taiwan-70B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 29.6 GB | |
+ | [Llama-3-Taiwan-70B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Llama-3-Taiwan-70B-Instruct-GGUF/blob/main/Llama-3-Taiwan-70B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 22.2 GB | |
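
The Q6_K, Q8_0, and f16 quantizations are split into numbered parts, and every part must sit in the same directory. Assuming the parts follow llama.cpp's `gguf-split` shard layout, pointing the runtime at the first shard should pull in the remaining shards automatically, e.g.:

```bash
# Fetch all Q6_K shards, then load the model via the first shard.
huggingface-cli download second-state/Llama-3-Taiwan-70B-Instruct-GGUF \
  --include 'Llama-3-Taiwan-70B-Instruct-Q6_K-*.gguf' \
  --local-dir .
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-Taiwan-70B-Instruct-Q6_K-00001-of-00002.gguf \
  llama-chat.wasm \
  --prompt-template llama-3-chat \
  --ctx-size 8192
```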
+
  *Quantized with llama.cpp b3613.*