---
library_name: transformers
tags: [custom_generate]
---
## Description
Example repository used to document custom `generate` methods loaded from the Hub.

⚠️ This custom generation method has impossible requirements and is meant to crash. If you try to run it, you should see an error like the following:
```
ValueError: Missing requirements for `transformers-community/custom_generate_bad_requirements`:
foo (installed: None)
bar==0.0.0 (installed: None)
torch>=99.0 (installed: 2.6.0+cu126)
```
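The error comes from the dependency check that `generate` performs before loading a custom generation method from the Hub. Judging from the message above, this repository's `custom_generate/requirements.txt` pins dependencies that cannot be installed, presumably along these lines:
```
foo
bar==0.0.0
torch>=99.0
```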
## Base model
`Qwen/Qwen2.5-0.5B-Instruct`
## Model compatibility
Most models. More specifically, any `transformers` LLM/VLM trained for causal language modeling.
## Additional Arguments
`left_padding` (`int`, *optional*): number of padding tokens to prepend to the provided input.
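For illustration, a custom argument like `left_padding` is passed straight through `model.generate` as an extra keyword. The sketch below is hypothetical: with this repository the requirements check raises before the argument is ever used.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")
inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)

# Hypothetical: `left_padding=2` would prepend two padding tokens to the input,
# but this repository raises on its requirements before generation starts
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/custom_generate_bad_requirements",
    trust_remote_code=True,
    left_padding=2,
)
```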
## Output Type changes
(none)
## Example usage
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")
inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)
# This call is expected to raise a `ValueError` about missing requirements
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/custom_generate_bad_requirements",
    trust_remote_code=True,
)
```
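Since the failure is the point of this repository, a minimal sketch (continuing from the snippet above) is to catch the `ValueError` and inspect its message, e.g. in a test or demo script:
```py
# Catch the expected failure instead of crashing
try:
    model.generate(
        **inputs,
        custom_generate="transformers-community/custom_generate_bad_requirements",
        trust_remote_code=True,
    )
except ValueError as exc:
    print(exc)  # lists each missing requirement and the installed version
```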