---
viewer: false
tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
---
# Dataset Classification with vLLM
GPU-accelerated text classification for Hugging Face datasets using vLLM. Guided decoding constrains generation to your label set, so every output is guaranteed to be a valid label.
## Quick Start
```bash
# Classify IMDB reviews
uv run classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative" \
--output-dataset user/imdb-classified
```
That's it! No installation, no setup - just `uv run`.
## Requirements
- **GPU Required**: This script uses vLLM for efficient inference
- Python 3.10+
- UV (handles all dependencies automatically; see the sketch below)
- vLLM >= 0.6.6 (for guided decoding support)
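UV can resolve everything on `uv run` because the script declares its dependencies inline via PEP 723 script metadata. A sketch of what that header plausibly looks like — the vLLM pin matches the requirement above, but the other entries are assumptions, not the script's literal header:
```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",     # GuidedDecodingParams requires >= 0.6.6
#     "datasets",        # load from / push to the Hub (assumed)
#     "huggingface-hub", # authentication and uploads (assumed)
# ]
# ///
```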
## Features
- **Guaranteed valid outputs** using vLLM's guided decoding with outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration** - read and write datasets seamlessly
## Usage
### Basic Classification
```bash
uv run classify-dataset.py \
--input-dataset <dataset-id> \
--column <text-column> \
--labels <comma-separated-labels> \
--output-dataset <output-id>
```
### Arguments
**Required:**
- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset
**Optional:**
- `--model`: Model to use (default: `HuggingFaceTB/SmolLM3-3B`)
- `--prompt-style`: Choose from `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit samples for testing
- `--temperature`: Generation temperature (default: 0.1)
- `--guided-backend`: Backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or use `HF_TOKEN` env var)
### Prompt Styles
- **simple**: Direct classification prompt
- **detailed**: Emphasizes exact category matching
- **reasoning**: Includes brief analysis before classification
All styles benefit from structured output guarantees - the model can only output valid labels!
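The script's literal prompt text isn't reproduced here, so the templates below are illustrative sketches of how the three styles typically differ, not the script's actual strings:
```python
# Illustrative templates only -- the script's actual wording may differ.
PROMPT_TEMPLATES = {
    # simple: direct classification prompt
    "simple": "Classify this text into one of: {labels}\n\nText: {text}\n\nLabel:",
    # detailed: emphasizes exact category matching
    "detailed": (
        "Choose exactly one category from this list, spelled exactly as shown: "
        "{labels}\n\nText: {text}\n\nCategory:"
    ),
    # reasoning: brief analysis before the final label
    "reasoning": (
        "Briefly analyze the text, then state the best-matching category "
        "from: {labels}\n\nText: {text}\n\nAnalysis and final category:"
    ),
}
```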
## Examples
### Sentiment Analysis
```bash
uv run classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative" \
--output-dataset user/imdb-sentiment
```
### Support Ticket Classification
```bash
uv run classify-dataset.py \
--input-dataset user/support-tickets \
--column content \
--labels "bug,feature_request,question,other" \
--output-dataset user/tickets-classified \
--prompt-style reasoning
```
### News Categorization
```bash
uv run classify-dataset.py \
--input-dataset ag_news \
--column text \
--labels "world,sports,business,tech" \
--output-dataset user/ag-news-categorized \
--model meta-llama/Llama-3.2-3B-Instruct
```
## Running on HF Jobs
This script is optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires Pro subscription or Team/Enterprise organization):
```bash
# Run on L4 GPU with vLLM image
hf jobs uv run \
--flavor l4x1 \
--image vllm/vllm-openai:latest \
classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative" \
--output-dataset user/imdb-classified
# Run on A10 GPU with custom model
hf jobs uv run \
--flavor a10g-large \
--image vllm/vllm-openai:latest \
classify-dataset.py \
--input-dataset user/reviews \
--column review_text \
--labels "1,2,3,4,5" \
--output-dataset user/reviews-rated \
--model mistralai/Mistral-7B-Instruct-v0.3 \
--prompt-style detailed
```
### GPU Flavors
- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models
- `a10g-small`: Fast inference for 3B models
- `a10g-large`: More memory for larger models
- `a100-large`: Maximum performance
## Advanced Usage
### Using Different Models
The default model is SmolLM3-3B, but you can use any instruction-tuned model:
```bash
# Larger model for complex classification
uv run classify-dataset.py \
--input-dataset user/legal-docs \
--column text \
--labels "contract,patent,brief,memo,other" \
--output-dataset user/legal-classified \
--model Qwen/Qwen2.5-7B-Instruct
```
### Large Datasets
vLLM batches requests automatically for optimal throughput, so even very large datasets are processed efficiently without manual chunking:
```bash
uv run classify-dataset.py \
--input-dataset user/huge-dataset \
--column text \
--labels "A,B,C" \
--output-dataset user/huge-classified
```
## Performance
- **SmolLM3-3B**: ~50-100 texts/second on A10
- **7B models**: ~20-50 texts/second on A10
- vLLM automatically optimizes batching for best throughput
## How It Works
1. **vLLM**: Provides efficient GPU batch inference
2. **Guided Decoding**: Uses outlines to guarantee valid label outputs
3. **Structured Generation**: Constrains model outputs to exact label choices
4. **UV**: Handles all dependencies automatically
The script loads your dataset, preprocesses the texts, classifies each one with guided decoding so that only valid labels can be generated, and saves the results as a new column in the output dataset.
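Concretely, steps 1–3 boil down to constraining vLLM's decoder to a fixed choice set. A minimal sketch, assuming the default model and two labels (the prompt wording and exact wiring here are illustrative, not the script's own):
```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

texts = ["Loved every minute of it.", "A dull, predictable mess."]
labels = ["positive", "negative"]  # from --labels, split on commas

llm = LLM(model="HuggingFaceTB/SmolLM3-3B")

# Guided decoding: the model can only emit one of the given labels.
guided = GuidedDecodingParams(choice=labels, backend="outlines")
params = SamplingParams(temperature=0.1, guided_decoding=guided)

prompts = [f"Classify this text as positive or negative:\n{t}\nLabel:" for t in texts]
outputs = llm.generate(prompts, params)  # vLLM batches the whole list internally
predictions = [o.outputs[0].text.strip() for o in outputs]
```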
## Troubleshooting
### CUDA Not Available
This script requires a GPU. Run it on:
- A machine with NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances
### Out of Memory
- Use a smaller model
- Use a larger GPU (e.g., a100-large)
### Invalid/Skipped Texts
- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters
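These rules amount to a small preprocessing pass, roughly like the sketch below. The thresholds match the ones listed above; the function name and exact structure are assumptions:
```python
def preprocess(text, min_chars=3, max_chars=4000):
    """Return cleaned text, or None if the sample should be marked invalid."""
    if not isinstance(text, str):
        return None                 # empty or None values are invalid
    text = text.strip()
    if len(text) < min_chars:       # shorter than 3 characters: skipped
        return None
    return text[:max_chars]         # very long texts are truncated
```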
### Classification Quality
- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks
### vLLM Version Issues
If you see `ImportError: cannot import name 'GuidedDecodingParams'`:
- Your vLLM version is too old (requires >= 0.6.6)
- The script specifies the correct version in its dependencies
- UV should automatically install the correct version
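To confirm which vLLM version UV actually resolved, a quick check:
```python
import vllm
print(vllm.__version__)  # should print 0.6.6 or newer
```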
## License
This script is provided as-is for use with the UV Scripts organization.