Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
## Description:
CodeLlama-13B-QML is a large language model customized by the Qt Company for Fill-In-The-Middle code completion tasks in the QML programming language, especially for Qt Quick Controls compliant with Qt 6 releases. The CodeLlama-13B-QML model is designed for companies and individuals that want to self-host their LLM for HMI (Human Machine Interface) software development instead of relying on third-party hosted LLMs.
-This model reaches a score of 79% on the QML100 Fill-In-the-Middle code completion benchmark for Qt 6-compliant code. In comparison,
+This model reaches a score of 79% on the QML100 Fill-In-the-Middle code completion benchmark for Qt 6-compliant code. In comparison, CodeLlama-7B-QML (a fine-tuned model from Qt) scored 79%, Claude 3.7 Sonnet scored 76%, Claude 3.5 Sonnet scored 68%, the base CodeLlama-13B scored 66%, GPT-4o scored 62%, and the base CodeLlama-7B scored 61%. The model was fine-tuned on raw data from over 4,000 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-13B-QML is not optimized for generating Qt 5-compliant, C++, or Python code.
## Terms of use:
By accessing this model, you are agreeing to the Llama 2 terms and conditions of the [license](https://github.com/meta-llama/llama/blob/main/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and [Meta’s privacy policy](https://www.facebook.com/privacy/policy/). By using this model, you are furthermore agreeing to the [Qt AI Model terms & conditions](https://www.qt.io/terms-conditions/ai-services/model-use).
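
As a rough illustration of the Fill-In-The-Middle completion described above, the sketch below shows how a self-hosted copy of the model might be queried with Hugging Face transformers. It assumes the standard CodeLlama infilling prompt format (`<PRE> {prefix} <SUF>{suffix} <MID>`) and a hypothetical repository id; consult the full model card for the exact prompt layout this fine-tune expects.

```python
# Minimal sketch, not the official usage example: assumes the standard CodeLlama
# infilling prompt format (<PRE> prefix <SUF>suffix <MID>) and a hypothetical
# repository id; the 13B weights need a GPU with substantial memory (or quantization).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QtGroup/CodeLlama-13B-QML"  # assumed repo id, adjust to the actual one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# QML code before and after the cursor; the model fills in the middle.
prefix = (
    "import QtQuick\n"
    "import QtQuick.Controls\n\n"
    "ApplicationWindow {\n"
    "    visible: true\n"
    "    "
)
suffix = "\n}\n"

prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Keep only the newly generated middle section (drop the echoed prompt tokens).
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```

If the tokenizer for this fine-tune exposes the `<FILL_ME>` convenience token used by the upstream CodeLlama tokenizers, passing a single string containing `<FILL_ME>` is an alternative to building the infilling prompt by hand.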