kevin009 committed
Commit 622b387 · verified · 1 Parent(s): ae59e8b

Update README.md

Files changed (1): README.md (+28 −8)
README.md CHANGED
@@ -1,22 +1,42 @@
 ---
-base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
 language:
 - en
 license: apache-2.0
 tags:
 - text-generation-inference
 - transformers
-- unsloth
 - mistral
 - trl
 ---
 
-# Uploaded model
-
-- **Developed by:** kevin009
-- **License:** apache-2.0
-- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
-
-This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
-
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+# Model Card: Minimalist Assistant
+
+## Model Details
+- **Architecture**: 32k-token context window, 32 layers
+- **Quantization**: 4-bit
+- **Base Model**: Mistral Instruct
+- **Tokenizer**: Custom (based on Mistral Instruct)
+
+## Intended Use
+- Editor assistant for revision and paraphrasing tasks
+
+## Training Data
+- **Initial Training**: 14,000 conversations in a minimalist style to ensure concise output
+- **Further Training**: 8,000 revision conversations to enhance rewriting and paraphrasing capabilities
+
+## Performance and Limitations
+- **Strengths**:
+  - Optimized for generating concise content
+  - Specialized in rewriting and paraphrasing tasks
+- **Limitations**:
+  - May produce shorter outputs than standard models
+  - Potential biases from the training data should be considered
+
+## Ethical Considerations
+- Designed for daily use; potential biases from the training data should be considered
+- Users should be aware of the model's focus on brevity and rewriting
+
+## Additional Information
+- Fine-tuned to address limitations in writing tasks observed in other models
+- Personalized for everyday use cases
+- Motivated by the need for a model better suited to writing tasks, an area where existing models were found lacking