PeterSchneider committed
Commit aef40b4 · verified · 1 Parent(s): 7bbf9a0

Update README.md

Files changed (1): README.md (+6 -6)
README.md CHANGED
@@ -14,22 +14,23 @@ CodeLlama-13B-QML is a large language model customized by the Qt Company for Fil
 This model reaches a score of 79% on the QML100 Fill-In-The-Middle code completion benchmark for Qt6-compliant code. In comparison, GPT-4o scored 62%, Claude 3.5 Sonnet scored 68%, and the base CodeLlama-13B-hf 66%. This model was fine-tuned on raw data from over 4,000 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-13B-QML is not optimised for the creation of Qt5-compliant, C++, or Python code.
 
 ## Terms of use:
-By accessing this model, you are agreeing to the Llama 2 terms and conditions of the [license](https://github.com/meta-llama/llama/blob/main/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and [Meta’s privacy policy](https://www.facebook.com/privacy/policy/). By using this model, you are furthermore agreeing to [Qt AI Model terms & conditions](https://www.qt.io/terms-conditions).
+By accessing this model, you are agreeing to the Llama 2 terms and conditions of the [license](https://github.com/meta-llama/llama/blob/main/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and [Meta’s privacy policy](https://www.facebook.com/privacy/policy/). By using this model, you are furthermore agreeing to the [Qt AI Model terms & conditions](https://www.qt.io/terms-conditions).
 
 ## Usage:
 Large language models, including CodeLlama-13B-QML, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building AI systems.
 
 ## How to run CodeLlama 13B-QML in ollama:
+
+Note: These instructions are written for ollama version 0.5.7.
+
 #### 1. Install ollama
 https://ollama.com/download
 
-These instructions are written for ollama version 0.5.7.
-
-#### 2. Download model repository
+#### 2. Download the model repository
 
 #### 3. Open the terminal and go to the repository
 
-#### 4. Build model in ollama
+#### 4. Build the model in ollama
 ```
 ollama create <your-model-name> -f Modelfile
 e.g. ollama create customcodellama13bqml -f Modelfile
@@ -67,6 +68,5 @@ If there is no suffix, please use:
 ```
 
 
-
 ## Model Version:
 v1.0
 
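The steps touched by this commit (download the model repository, go to it in the terminal, build the model in ollama) correspond to a short shell sequence. A minimal sketch, assuming the repository is cloned from Hugging Face with Git LFS and using the example model name from the README; the repository URL below is an assumption for illustration and is not taken from this commit:

```
# Hypothetical repository URL -- substitute the actual CodeLlama-13B-QML repo location.
git lfs install
git clone https://huggingface.co/QtGroup/CodeLlama-13B-QML
cd CodeLlama-13B-QML

# Step 4: build the model in ollama from the Modelfile shipped with the repository.
ollama create customcodellama13bqml -f Modelfile
```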
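After the build step, the model can be exercised with standard ollama commands. This is an illustrative sketch only, not part of the commit: it assumes the example model name `customcodellama13bqml` and a local ollama server on the default port 11434; the fill-in-the-middle prompt template referenced by the second hunk ("If there is no suffix, please use:") is defined in the README itself and is not reproduced here.

```
# Confirm the model created above is registered.
ollama list

# Start an interactive session with the newly built model.
ollama run customcodellama13bqml

# Or call the local ollama HTTP API directly (placeholder prompt; use the
# fill-in-the-middle template from the README for real completions).
curl http://localhost:11434/api/generate -d '{
  "model": "customcodellama13bqml",
  "prompt": "import QtQuick\n\nRectangle {",
  "stream": false
}'
```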