nroggendorff committed
Commit 06f748d · verified · 1 Parent(s): 75d420a

Update README.md

Files changed (1):
  1. README.md +43 -1
README.md CHANGED
@@ -7,4 +7,46 @@ sdk: static
 pinned: false
 ---
-Edit this `README.md` markdown file to author your organization card.

## Usage

You can load models using the Hugging Face Transformers library:

```python
from transformers import pipeline

# Build a text-generation pipeline for the model
pipe = pipeline("text-generation", model="nroggendorff/mayo")

question = "What color is the sky?"
conv = [{"role": "user", "content": question}]

# Chat-style input: the pipeline returns the full conversation,
# and the last message holds the model's reply
response = pipe(conv, max_new_tokens=32)[0]['generated_text'][-1]['content']
print(response)
```
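
The pipeline forwards generation keyword arguments to `model.generate`, so decoding can be tuned in the same call. A minimal sketch continuing the snippet above; the sampling values are illustrative, not settings tuned for this model:

```python
# Sample instead of greedy decoding (illustrative values, not tuned)
response = pipe(
    conv,
    max_new_tokens=32,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)[0]['generated_text'][-1]['content']
print(response)
```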

To load the model with 4-bit quantization via the `bitsandbytes` integration:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with double quantization; compute in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

model_id = "nroggendorff/mayo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

question = "What color is the sky?"

# Render the chat template to a string (tokenize=False) and append
# the assistant turn so the model knows to respond
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=32)

generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(generated_text)
```
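
For interactive use, tokens can be printed as they are generated with transformers' `TextStreamer`; a minimal sketch continuing the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated,
# skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=32)
```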