Cleanup readme

Also added a link for the ExLlamaV2 quants, if you don't mind.

README.md CHANGED
```diff
@@ -1,6 +1,7 @@
 ---
 license: apache-2.0
 base_model: mistralai/Mistral-7B-v0.1
+datasets: NeuralNovel/Neural-DPO
 tags:
 - generated_from_trainer
 model-index:
@@ -87,21 +88,21 @@ special_tokens:
 
 </details><br>
 
-
+Creator: <a href="https://huggingface.co/NovoCode">NovoCode</a>
 
-
+Community Organization: <a href="https://huggingface.co/ConvexAI">ConvexAI</a>
 
-
+Discord: <a href="https://discord.gg/rJXGjmxqzS">Join us on Discord</a>
 
-
+## Model description
 
-
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [Neural-DPO](https://huggingface.co/datasets/NeuralNovel/Neural-DPO) dataset.
 
-
+This model should excel at question answering across a wide range of domains, such as literature, scientific research, and theoretical inquiry.
 
-##
+## ExLlamaV2 Quants
 
-
+ExLlamaV2 quants are available from [bartowski here](https://huggingface.co/bartowski/Mistral-NeuralDPO-v0.5-exl2).
 
 ## Training procedure
 
@@ -119,10 +120,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 10
 - training_steps: 1602
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.38.0.dev0
```
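For readers landing on the card, a minimal sketch of loading the fine-tuned model with `transformers`; the repo id `NovoCode/Mistral-NeuralDPO-v0.5` is an assumption inferred from the exl2 quant repo name, so adjust it to the actual repo.

```python
# Minimal sketch: load the fine-tuned model and ask it a question.
# The repo id below is inferred from the exl2 quant repo name and may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NovoCode/Mistral-NeuralDPO-v0.5"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # requires accelerate; places weights on available devices
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

prompt = "What narrative techniques distinguish magical realism from fantasy?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```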
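And a hedged sketch for fetching one of the ExLlamaV2 quants: bartowski's exl2 repos typically keep each bitrate on its own branch, so the branch name `6_5` below is an assumption; check the repo's branch list for the sizes actually published.

```python
# Minimal sketch: download a single exl2 quant branch with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="bartowski/Mistral-NeuralDPO-v0.5-exl2",
    revision="6_5",  # assumed branch name for the 6.5 bpw quant
    local_dir="Mistral-NeuralDPO-v0.5-exl2",
)
print(f"Quant downloaded to {local_dir}")
```

The downloaded folder can then be pointed at by any ExLlamaV2-capable loader, such as text-generation-webui's exllamav2 backend.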