joaogante HF Staff committed
Commit 0dfb499 · verified · 1 Parent(s): 6e4d005

Update README.md

Files changed (1): README.md (+17 −2)
README.md CHANGED
@@ -5,9 +5,9 @@ tags: [custom_generate]


  ## Description
- Test repo to experiment with calling `generate` from the hub. It is a simplified implementation of greedy decoding.
+ Example repository used to document `generate` from the hub.

- ⚠️ this recipe has an impossible requirement and is meant to crash. If you try to run it, you should see something like
+ ⚠️ this custom generation method has an impossible requirement and is meant to crash. If you try to run it, you should see something like
  ```
  ValueError: Missing requirements for joaogante/test_generate_from_hub_bad_requirements:
  foo (installed: None)
@@ -27,3 +27,18 @@ Most models. More specifically, any `transformer` LLM/VLM trained for causal lan
  ## Output Type changes
  (none)

+ ## Example usage
+
+ ```py
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # `generate` with `custom_generate` -> `generate` uses custom code
+ # note: calling the custom method prints "✨ using a custom generation method ✨"
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
+ model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")
+
+ inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)
+ gen_out = model.generate(**inputs, custom_generate="transformers-community/custom_generate_bad_requirements", trust_remote_code=True)
+ print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
+ ```
+
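For context: under the `transformers` custom-generate convention, a method repository ships a `custom_generate/` folder whose optional `requirements.txt` is checked before the custom code runs; presumably this repository lists an uninstallable package `foo` there, which is what produces the error quoted above. (The repository id in the quoted error, `joaogante/test_generate_from_hub_bad_requirements`, appears to be this repo's earlier location.) Below is a minimal sketch of how the failure surfaces at runtime, reusing the model and repository ids from the README's own example and assuming a `transformers` release that supports `custom_generate`:

```py
# Sketch only: triggers the advertised "Missing requirements" crash on purpose.
# Model and repository ids are taken from the README's example above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")
inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)

try:
    model.generate(
        **inputs,
        custom_generate="transformers-community/custom_generate_bad_requirements",
        trust_remote_code=True,
    )
except ValueError as err:
    # Expected: "Missing requirements for ...: foo (installed: None)"
    print(err)
```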