reyavir committed on
Commit bd3d248 · verified
1 Parent(s): abbf120

Update README.md

Files changed (1)
  1. README.md +15 -4
README.md CHANGED
@@ -5,15 +5,26 @@ This model is a fine-tuned Llama3 model, trained on the training set of PromptEv
 
 Model Card:
 Model Details
-– Person or organization developing model: Meta, and fine-tuned by [Redacted for submission]
+– Person or organization developing model: Meta, and fine-tuned by the [authors](https://openreview.net/forum?id=uUW8jYai6K)
 – Model date: Base model was released in April 18 2024, and fine-tuned in July 2024
 – Model version: 3.1
 – Model type: decoder-only Transformer
 – Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: 8 billion parameters, fine-tuned by us using Axolotl (https://github.com/axolotl-ai-cloud/axolotl)
-– Paper or other resource for more information: https://arxiv.org/abs/2310.06825
-– Citation details: Redacted for submission
+– Paper or other resource for more information: [Llama 3](https://arxiv.org/abs/2407.21783), [PromptEvals](https://openreview.net/forum?id=uUW8jYai6K)
+– Citation details:
+```bibtex
+@inproceedings{
+anonymous2024promptevals,
+title={{PROMPTEVALS}: A Dataset of Assertions and Guardrails for Custom Production Large Language Model Pipelines},
+author={Anonymous},
+booktitle={Submitted to ACL Rolling Review - August 2024},
+year={2024},
+url={https://openreview.net/forum?id=uUW8jYai6K},
+note={under review}
+}
+```
 – License: Meta Llama 3 Community License
-– Where to send questions or comments about the model: [Redacted for submission]
+– Where to send questions or comments about the model: https://openreview.net/forum?id=uUW8jYai6K
 Intended Use. Use cases that were envisioned during development. (Primary intended uses, Primary intended users, Out-of-scope use cases)
 Intended to be used by developers to generate high quality assertion criteria for LLM outputs, or to benchmark the ability of LLMs in generating these assertion criteria.
 Factors. Factors could include demographic or phenotypic groups, environmental conditions, technical attributes, or others listed in Section 4.3.
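
The model card states the 8B model was fine-tuned with Axolotl. For context, Axolotl fine-tuning runs are driven by a YAML config; the sketch below is a minimal illustration of that config style, not the authors' actual settings — the dataset path, hyperparameters, and output directory are all hypothetical.

```yaml
# Illustrative Axolotl config sketch (hypothetical values throughout;
# the actual training configuration is not published in this card)
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
  - path: data/promptevals_train.jsonl   # hypothetical local copy of the training split
    type: alpaca
sequence_len: 4096
adapter: lora          # LoRA adapter fine-tuning; full fine-tuning is also possible
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 2e-5
output_dir: ./outputs/promptevals-llama3.1-8b
```

A config like this would typically be launched with `accelerate launch -m axolotl.cli.train config.yml`.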