---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for neoncortex/mini-mistral-openhermes-2.5-chatml-test
A tiny Mistral model trained as an experiment on teknium/OpenHermes-2.5.
## Model Details
A 63M-parameter auto-regressive language model using the Mistral architecture as a base.
- Multi-query Attention instead of Grouped-query Attention (a rough config sketch follows this list).
- Sliding window is disabled.
- Modified ChatML instead of the Mistral chat template - TL;DR: I used '<|im_start|>human' instead of '<|im_start|>user'.
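For the curious, here's a minimal sketch of what a config along those lines could look like. The real dimensions aren't documented in this card, so the numbers below are assumptions chosen to land near 63M parameters; only the multi-query attention (`num_key_value_heads=1`) and the disabled sliding window come from the list above.
```python
from transformers import MistralConfig, MistralForCausalLM

# Illustrative config only - the actual hidden size, depth, etc. are not
# documented here. These values happen to land near 63M parameters.
config = MistralConfig(
    vocab_size=32000,
    hidden_size=512,
    intermediate_size=2048,
    num_hidden_layers=8,
    num_attention_heads=8,
    num_key_value_heads=1,  # multi-query attention instead of grouped-query
    sliding_window=None,    # sliding-window attention disabled
)

model = MistralForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```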
### Model Description
Just doing it to see what happens.
It takes about 40 to 45 hours to train on two Nvidia RTX 3060 12GB cards.
It uses ChatML for the chat template, but I fucked up the template in the dataset,
using '<|im_start|>human' instead of '<|im_start|>user'. ¯\_(ツ)_/¯
So, here are the bits:
```
{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
{%- if message['role'] == 'system' -%}
{%- set ns.found = true -%}
{%- endif -%}
{%- endfor -%}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '<|im_start|>system\n' + message['content'].rstrip() + '<|im_end|>\n' -}}
{%- else -%}
{%- if message['role'] == 'human' -%}
{{-'<|im_start|>human\n' + message['content'].rstrip() + '<|im_end|>\n'-}}
{%- else -%}
{{-'<|im_start|>assistant\n' + message['content'] + '<|im_end|>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|im_start|>assistant\n'-}}
{%- endif -%}
```
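With `add_generation_prompt=True`, that template renders a short conversation like this (note the `human` role where stock ChatML would say `user`):
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>human
Hello!<|im_end|>
<|im_start|>assistant
```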
- **Developed by:** gronkomatic
- **Funded by:** gronkomatic
- **Shared by:** gronkomatic
- **Model type:** Mistral
- **Language(s) (NLP):** English, maybe others I dunno
- **License:** Apache 2.0 (per the metadata above), IDGAF
### Model Sources
Exclusively available right here on HuggingFace!
- **Repository:** https://huggingface.co/neoncortex/mini-mistral-openhermes-2.5-chatml-test
- **Paper:** LoL
- **Demo:** Just download it in Oobabooga and use the modified ChatML template above. Maybe I'll throw together a Space or something.
## Uses
If you wanna have a laugh at how bad it is then go ahead, but I wouldn't expect much from it.
### Out-of-Scope Use
This model probably won't work well for pretty much anything.
## How to Get Started with the Model
Use the code below to get started with the model.
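Here's a minimal sketch, assuming the repo ships the modified ChatML template above in its `tokenizer_config.json`; note the `human` role in the messages:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neoncortex/mini-mistral-openhermes-2.5-chatml-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Modified ChatML: the user turn is tagged 'human', not 'user'.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "human", "content": "Hello! What are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```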
## Training Details
### Training Data
The model was trained on [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5). See Preprocessing below for how it was formatted.
### Training Procedure
#### Preprocessing
I took the OpenHermes 2.5 dataset and formatted it with ChatML.
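Roughly like the sketch below, assuming the dataset's ShareGPT-style `conversations` field with `from`/`value` keys; the `human` tag is kept as-is, which is where the modified template comes from. This is illustrative, not the exact script:
```python
from datasets import load_dataset

# Map ShareGPT-style speaker tags to the (modified) ChatML roles.
ROLE_MAP = {"system": "system", "human": "human", "gpt": "assistant"}

def to_chatml(example):
    text = ""
    for turn in example["conversations"]:
        role = ROLE_MAP.get(turn["from"], turn["from"])
        text += f"<|im_start|>{role}\n{turn['value'].rstrip()}<|im_end|>\n"
    return {"text": text}

dataset = load_dataset("teknium/OpenHermes-2.5", split="train")
dataset = dataset.map(to_chatml, remove_columns=dataset.column_names)
```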
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision
#### Speeds, Sizes, Times
- epochs: 9
- steps: 140,976
- per-device batch size: 6
- throughput: ~1.04 it/s
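For reference, here's a hedged sketch of how those numbers might map onto `transformers` `TrainingArguments` (the run itself used trl per the Software section below; the output dir and logging/save cadence are placeholders, and anything not set here, like the learning rate or scheduler, isn't documented in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mini-mistral-openhermes-2.5-chatml-test",  # placeholder
    num_train_epochs=9,
    per_device_train_batch_size=6,  # x2 GPUs -> effective batch of 12
    bf16=True,                      # bf16 mixed precision
    logging_steps=100,              # placeholder
    save_steps=5000,                # placeholder
)
```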
## Evaluation
I tried to run evals but the eval suite just laughed at me.
## Model Examination
Don't be rude.
## Environmental Impact
- **Hardware Type:** I already told you. Try and keep up.
- **Hours used:** ~45 hours x 2 GPUs, I guess.
- **Cloud Provider:** gronkomatic
- **Compute Region:** myob
- **Carbon Emitted:** Yes, definitely
### Compute Infrastructure
I trained it on my PC with no side panel on it because I like to watch the GPUs do their work.
#### Hardware
2 x Nvidia RTX 3060 12GB
#### Software
The wonderful free stuff at [Hugging Face](https://huggingface.co): transformers, datasets, trl.
## Model Card Authors
gronkomatic, unless you're offended by something, in which case it was hacked by hackers.
## Model Card Contact
If you want to send me insults, come find me on Reddit I guess: u/gronkomatic.