---
license: llama2
base_model:
- meta-llama/CodeLlama-7b-hf
base_model_relation: adapter
tags:
- QML
- Code-Completion
---
# Model Overview

## Description:
CodeLlama-7B-QML is a large language model customized by the Qt Company for Fill-In-The-Middle code completion tasks in the QML programming language, especially for Qt Quick Controls compliant with Qt 6 releases. The CodeLlama-7B-QML model is designed for software developers who want to run their code completion LLM locally on their computer.

This model reaches a score of 79% on the QML100 Fill-In-the-Middle code completion benchmark for Qt 6-compliant code. In comparison, other models scored:
- CodeLlama-13B-QML: 79%
- Claude 3.7 Sonnet: 76%
- Claude 3.5 Sonnet: 68%
- CodeLlama 13B: 66%
- GPT-4o: 62%
- CodeLlama 7B: 61%

This model was fine-tuned on raw data from over 5,000 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-7B-QML is not optimised for the creation of Qt 5-compliant, C++, or Python code.

## Terms of use:
By accessing this model, you are agreeing to the Llama 2 terms and conditions of the [license](https://github.com/meta-llama/llama/blob/main/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and [Meta’s privacy policy](https://www.facebook.com/privacy/policy/). By using this model, you are furthermore agreeing to the [Qt AI Model terms & conditions](https://www.qt.io/terms-conditions/ai-services/model-use). 

## Usage:

CodeLlama-7B-QML requires significant computing resources to achieve inference (response) times suitable for automatic code completion. It should therefore be run with a GPU accelerator.

Large Language Models, including CodeLlama-7B-QML, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building AI systems.
 
## How to run CodeLlama-7B-QML:

1. Download and install Ollama from Ollama's web page (if you are not using it yet):
```
https://ollama.com/download
```
2. Run the model with the following command in Ollama's CLI:
```
ollama run theqtcompany/codellama-7b-qml
```

Now you can set up and use CodeLlama-7B-QML as the LLM for code completion in the Qt AI Assistant. If you want to test the model directly, you can send curl requests to the local Ollama server from your shell as shown below.

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "theqtcompany/codellama-7b-qml",
  "prompt": "<SUF>\n    title: qsTr(\"Hello World\")\n}<PRE>import QtQuick\n\nWindow {\n    width: 640\n    height: 480\n    visible: true\n<MID>",
  "stream": false,
  "options": {
    "temperature": 0.2,
    "top_p": 0.9,
    "num_predict": 500,
    "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "<EOT>", "\\end", "<MID>", "</MID>", "##"]
  }
}'
```

In general, the prompt format for CodeLlama-7B-QML is:
```
"<SUF>{suffix}<PRE>{prefix}<MID>"
```

If there is no suffix, please use:
```
"<PRE>{prefix}<MID>"
```


## Model Version:
v1.0

## Attribution:
CodeLlama-7B is a model of the Llama 2 family. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.