Update README.md
README.md
CHANGED
@@ -19,7 +19,7 @@ language:
 
 Use this text2text model to find out what LLM instructions might be able to generate an arbitrary piece of code!
 
-This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the `pszemraj/fleece2instructions-codealpaca dataset.
+This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the `pszemraj/fleece2instructions-codealpaca` dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.9222
 - Rouge1: 62.0692
@@ -37,7 +37,7 @@ Intended use: Research on domain adaptation and/or other improvements to LLMs by
 
 ## Training and evaluation data
 
-
+Refer to the linked dataset card for `pszemraj/fleece2instructions-codealpaca` or the [original dataset](https://github.com/sahil280114/codealpaca) repo.
 
 ## Training procedure
 
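For quick reference, a minimal usage sketch of the model described in this diff, via the `transformers` text2text pipeline. The repo id below is a placeholder and the input snippet is an invented example; the commit itself does not state where the fine-tuned checkpoint is published.

```python
# Minimal sketch, assuming the fine-tuned BART checkpoint is published on the Hub.
# "<this-model-repo-id>" is a placeholder -- the commit does not name the repo id.
from transformers import pipeline

generator = pipeline("text2text-generation", model="<this-model-repo-id>")

# Ask the model what instruction might have produced this piece of code.
code = "def add(a, b):\n    return a + b"
out = generator(code, max_length=96, num_beams=4)
print(out[0]["generated_text"])  # a candidate instruction for the snippet
```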