Update README.md
This model is a fine-tuned Llama3 model, trained on the training set of PromptEvals.
Model Card:

Model Details

- Person or organization developing model: Meta; fine-tuned by the [authors](https://openreview.net/forum?id=uUW8jYai6K)
- Model date: base model released on April 18, 2024; fine-tuned in July 2024
- Model version: 3.1
- Model type: decoder-only Transformer
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: 8 billion parameters, fine-tuned by us using [Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
- Paper or other resource for more information: [Llama 3](https://arxiv.org/abs/2407.21783), [PromptEvals](https://openreview.net/forum?id=uUW8jYai6K)
- Citation details:

  ```bibtex
  @inproceedings{
  anonymous2024promptevals,
  title={{PROMPTEVALS}: A Dataset of Assertions and Guardrails for Custom Production Large Language Model Pipelines},
  author={Anonymous},
  booktitle={Submitted to ACL Rolling Review - August 2024},
  year={2024},
  url={https://openreview.net/forum?id=uUW8jYai6K},
  note={under review}
  }
  ```

- License: Meta Llama 3 Community License
- Where to send questions or comments about the model: https://openreview.net/forum?id=uUW8jYai6K

Intended Use. Use cases that were envisioned during development. (Primary intended uses, Primary intended users, Out-of-scope use cases)

Intended to be used by developers to generate high-quality assertion criteria for LLM outputs, or to benchmark the ability of LLMs to generate these assertion criteria.

Factors. Factors could include demographic or phenotypic groups, environmental conditions, technical attributes, or others listed in Section 4.3.
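As a sketch of the intended use above, the model could be loaded with the standard `transformers` API and prompted with an LLM pipeline prompt to elicit assertion criteria. Note this is an illustrative sketch only: the repo id below is a placeholder (the card does not state one), and the request format is an assumption, not necessarily the format used during fine-tuning.

```python
# Illustrative sketch of the intended use: asking the fine-tuned model to
# propose assertion criteria for an LLM pipeline prompt.
# MODEL_ID is a PLACEHOLDER, not the real repo id; the prompt wording is
# an assumption, not the training format.
MODEL_ID = "your-org/promptevals-llama3.1-8b"  # placeholder


def build_request(pipeline_prompt: str) -> str:
    """Format a request asking for assertion criteria (assumed format)."""
    return (
        "Below is a prompt for an LLM pipeline. List assertion criteria "
        "that the pipeline's outputs should satisfy.\n\n"
        f"Prompt:\n{pipeline_prompt}\n\nAssertion criteria:"
    )


def generate_criteria(pipeline_prompt: str, max_new_tokens: int = 256) -> str:
    """Generate assertion criteria with the fine-tuned model."""
    # Imported lazily so build_request() works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_request(pipeline_prompt), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

The generated text would then be parsed into individual criteria, e.g. one per line, depending on how the model formats its output.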