pearsonkyle committed
Commit 54fd5d5 · 1 Parent(s): 9f9e2c1

Update README.md

Files changed (1)
  1. README.md +37 -12
README.md CHANGED
@@ -10,24 +10,53 @@ model-index:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

- # ArtPrompter
+ # [ArtPrompter](https://pearsonkyle.github.io/Art-Prompter/)

- This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
+ A [gpt2](https://huggingface.co/gpt2)-powered predictive keyboard for writing descriptive text prompts for AI image generators (e.g. MidJourney, Stable Diffusion, ArtBot). The model was trained on a database of over 268K MidJourney images corresponding to 113K unique prompts.

- ## Model description
-
- More information needed
+ ```python
+ from transformers import pipeline
+
+ ai = pipeline('text-generation', model='pearsonkyle/ArtPrompter', tokenizer='gpt2')
+
+ texts = ai('The', max_length=30, num_return_sequences=5)
+
+ for i in range(5):
+     print(texts[i]['generated_text'] + '\n')
+ ```
+
+ [![Art Prompter](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1HQOtD2LENTeXEaxHUfIhDKUaPIGd6oTR?usp=sharing)

## Intended uses & limitations

- More information needed
+ Build prompts and generate images on Discord!
+
+ [![](https://cincydiscord.com/wp-content/uploads/2019/02/CINCYDISCORDJOIN.png)](https://discord.gg/3S8Taqa2Xy)
+
+ [![](https://pearsonkyle.github.io/Art-Prompter/images/discord_bot.png)](https://discord.gg/3S8Taqa2Xy)

- ## Training and evaluation data
-
- More information needed
+ ## Examples
+
+ All of the text prompts below were generated with our language model:
+
+ - *The entire universe is a simulation,a confessional with a smiling guy fawkes mask, symmetrical, inviting,hyper realistic*
+ - *a pug disguised as a teacher. Setting is a class room*
+ - *I wish I had an angel For one moment of love I wish I had your angel Your Virgin Mary undone Im in love with my desire Burning angelwings to dust*
+ - *The heart of a galaxy, surrounded by stars, magnetic fields, big bang, cinestill 800T,black background, hyper detail, 8k, black*

## Training procedure

+ ~1 hour of fine-tuning on an RTX 2080 with 113K unique prompts
+
### Training hyperparameters

The following hyperparameters were used during training:

@@ -39,12 +68,8 @@ The following hyperparameters were used during training:
- lr_scheduler_type: linear
- num_epochs: 50

- ### Training results
-
-
-
### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1
- - Tokenizers 0.13.2
+ - Tokenizers 0.13.2
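
The diff above only records two hyperparameters (a linear scheduler and 50 epochs) plus the framework versions. Below is a minimal sketch of how a comparable GPT-2 fine-tune could be wired up with the Transformers `Trainer`; the dataset stand-in, batch size, and learning rate are illustrative assumptions, not values taken from this commit.

```python
# A sketch of a comparable GPT-2 fine-tune, not the exact script behind this commit.
# Only lr_scheduler_type=linear and num_epochs=50 come from the card; the dataset,
# batch size, and learning rate below are placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in for the 113K-prompt corpus; replace with the real prompt dataset.
prompts = Dataset.from_dict({"text": [
    "a pug disguised as a teacher. Setting is a class room",
    "The heart of a galaxy, surrounded by stars, hyper detail, 8k",
]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_dataset = prompts.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="artprompter-finetune",
    num_train_epochs=50,            # from the model card
    lr_scheduler_type="linear",     # from the model card
    per_device_train_batch_size=8,  # placeholder
    learning_rate=5e-5,             # placeholder (Trainer default)
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```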
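
The card positions ArtPrompter as a prompt generator for image models such as Stable Diffusion. As an illustration of that downstream step (not part of this repository), the sketch below feeds one generated prompt to the `diffusers` Stable Diffusion pipeline; the checkpoint name and the GPU usage are assumptions.

```python
# Hypothetical downstream use: render one ArtPrompter prompt with Stable Diffusion.
# The diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint are
# assumptions for illustration; neither is part of this repository.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

prompter = pipeline('text-generation', model='pearsonkyle/ArtPrompter', tokenizer='gpt2')
prompt = prompter('The heart of a galaxy', max_length=40, num_return_sequences=1)[0]['generated_text']

sd = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16
).to('cuda')
image = sd(prompt).images[0]  # run text-to-image on the generated prompt
image.save('artprompter_sample.png')
```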