Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ By accessing this model, you are agreeing to the Llama 2 terms and conditions of
 
 ## Usage:
 
-CodeLlama-7B-QML requires significant computing resources to perform with inference (response) times suitable for automatic code completion. Therefore, it should be used with a GPU accelerator
+CodeLlama-7B-QML requires significant computing resources to perform with inference (response) times suitable for automatic code completion. Therefore, it should be used with a GPU accelerator.
 
 Large Language Models, including CodeLlama-7B-QML, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building AI systems.
 
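Since the changed Usage paragraph only states that the model needs a GPU accelerator, here is a minimal sketch of what loading it on a GPU could look like with the Hugging Face transformers library. The repo id "QtGroup/CodeLlama-7B-QML", the plain-prefix prompt, and the generation settings are assumptions for illustration, not taken from this README.

```python
# Minimal sketch, assuming the model is published on Hugging Face under the
# repo id "QtGroup/CodeLlama-7B-QML" (hypothetical here) and that the
# transformers, torch, and accelerate packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QtGroup/CodeLlama-7B-QML"  # assumed repo id; check the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit the 7B weights on one GPU
    device_map="auto",          # place the weights on the available GPU(s)
)

# Plain prefix prompt; the model card may define a dedicated fill-in-the-middle
# format for code completion, which should be preferred when available.
prompt = "import QtQuick\n\nRectangle {\n    width: 200\n    "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running in float16 with device_map="auto" is one common way to keep a 7B model within a single consumer GPU's memory; quantized loading or a dedicated inference server would be alternatives if completion latency matters, as the paragraph above suggests.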