---
language:
- en
license: mit
tags:
- pretrained
- security
- redteam
- blueteam
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# TylerG01/Indigo-v0.1
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
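## Usage
A minimal inference sketch with the Hugging Face `transformers` library is shown below. It assumes `torch`, `transformers`, and `accelerate` are installed; the prompt and generation settings are illustrative, with `temperature=0.7` taken from the front matter above.

```python
# Minimal inference sketch; generation parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TylerG01/Indigo-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a 7B model on a single GPU
    device_map="auto",           # requires the accelerate package
)

prompt = "Explain the difference between red team and blue team operations."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,  # temperature only takes effect with sampling enabled
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```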
## Project Goals
This is the v0.1 (alpha) release of the Indigo LLM project, which used LoRA fine-tuning to train Mistral 7B on more than 400 books, pamphlets,
training documents, code snippets, and other openly sourced cybersecurity works from the surface web. This version used 16 LoRA layers
and reached a validation loss of 1.601 after the fourth training epoch. However, my goal for the LoRA version of this model is a validation loss
below 1.51 after some modification to the dataset and training approach.
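For illustration only, the sketch below shows what a comparable LoRA setup on the Mistral 7B base looks like with the PEFT library. The rank, alpha, dropout, and target modules here are assumptions, since this card only states that 16 LoRA layers were used; they are not the project's actual training configuration.

```python
# Hypothetical LoRA fine-tuning setup with the PEFT library.
# Rank, alpha, dropout, and target modules are assumptions for illustration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
lora_config = LoraConfig(
    r=16,              # assumed LoRA rank
    lora_alpha=32,     # assumed scaling factor
    lora_dropout=0.05, # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of the weights
```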
For more information on this project, check out the [blog post](https://t2-security.com/indigo-llm-503cd6e22fe4).