Update README.md
README.md (CHANGED):

@@ -14,7 +14,7 @@ This model has half size in comparison to the Mixtral 8x7b Instruct. And it basi
 
 Used models (all lasered using laserRMT, except for the base model):
 
-#
+# Laserxtral - 4x7b
 
 This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
 * [cognitivecomputations/dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo)
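Since the README describes a mergekit MoE merge, a minimal loading sketch may be useful. The repo id `cognitivecomputations/laserxtral` below is an assumption for illustration only; the diff does not state the final Hugging Face model id. Everything else is the standard transformers API.

```python
# Minimal sketch: loading a merged MoE checkpoint like the one this README describes.
# The model id is hypothetical; substitute the actual Hugging Face repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/laserxtral"  # assumed id, not stated in the diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 4x7b MoE weights are large; half precision helps
    device_map="auto",
)

prompt = "Explain what a Mixture of Experts model is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```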