Update README.md
README.md CHANGED

@@ -3,13 +3,11 @@ library_name: transformers
 tags: [custom_generate]
 ---

-## Base model:
-`Qwen/Qwen2.5-0.5B-Instruct`

 ## Description
 Test repo to experiment with calling `generate` from the hub. It is a simplified implementation of greedy decoding.

-⚠️ this recipe has an impossible requirement and is meant to crash. If you try to run it, you should see
+⚠️ this recipe has an impossible requirement and is meant to crash. If you try to run it, you should see something like
 ```
 ValueError: Missing requirements for joaogante/test_generate_from_hub_bad_requirements:
 foo (installed: None)
@@ -17,6 +15,12 @@ bar==0.0.0 (installed: None)
 torch>=99.0 (installed: 2.6.0+cu126)
 ```

+## Base model
+`Qwen/Qwen2.5-0.5B-Instruct`
+
+## Model compatibility
+Most models. More specifically, any `transformer` LLM/VLM trained for causal language modeling.
+
 ## Additional Arguments
 `left_padding` (`int`, *optional*): number of padding tokens to add before the provided input
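For context, below is a minimal sketch of how a Hub `custom_generate` recipe like this one is typically invoked; the `custom_generate` and `trust_remote_code` arguments follow the custom generate usage described in the `transformers` docs, and the exact call is an illustration rather than part of this commit. With this repo, loading the recipe should fail with the `ValueError` shown above before any decoding runs.

```python
# Illustrative sketch, not part of this commit: calling a custom_generate
# recipe hosted on the Hub. This repo's requirements are intentionally
# unsatisfiable, so the call is expected to raise the ValueError shown in
# the README instead of running the greedy decoding loop.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

inputs = tokenizer("Hello, world", return_tensors="pt")
out = model.generate(
    **inputs,
    custom_generate="joaogante/test_generate_from_hub_bad_requirements",
    trust_remote_code=True,
    left_padding=2,  # recipe-specific argument documented in the README
)
```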