---
viewer: false
tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
---

# Dataset Classification Script

GPU-accelerated text classification for Hugging Face datasets with guaranteed valid outputs through structured generation.

## Quick Start

```bash
# Classify IMDB reviews
uv run classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative" \
    --output-dataset user/imdb-classified
```

That's it! No installation, no setup - just `uv run`.
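
This works because uv reads the script's inline metadata (PEP 723) and builds an isolated environment with the right dependencies before running it. The header below is a sketch of what such a metadata block looks like; the exact dependency list in `classify-dataset.py` may differ.

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",
#     "datasets",
#     "huggingface-hub",
# ]
# ///
# uv parses this block and installs the listed dependencies automatically before running the script.
```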

## Requirements

- **GPU required**: uses vLLM for GPU-accelerated inference
- Python 3.10+
- UV (handles all dependencies automatically)
- vLLM >= 0.6.6

## Features

- **Guaranteed valid outputs** using structured generation with guided decoding
- **Zero-shot classification** - no training data required
- **GPU-optimized** for maximum throughput and efficiency
- **Default model**: HuggingFaceTB/SmolLM3-3B (fast 3B model with thinking capabilities)
- **Robust text handling** with preprocessing and validation
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration** - read and write datasets seamlessly
- **Label descriptions** support for providing context to improve accuracy
- **Reasoning mode** for interpretable classifications with thinking traces
- **JSON output parsing** for reliable extraction from reasoning mode
- **Optimized batching** with vLLM's automatic batch processing
- **Multiple guided backends** - supports outlines, xgrammar, and more

## Usage

### Basic Classification

```bash
uv run classify-dataset.py \
    --input-dataset <dataset-id> \
    --column <text-column> \
    --labels <comma-separated-labels> \
    --output-dataset <output-id>
```

### Arguments

**Required:**

- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**

- `--model`: Model to use (default: **`HuggingFaceTB/SmolLM3-3B`** - a fast 3B parameter model)
- `--label-descriptions`: Provide descriptions for each label to improve classification accuracy
- `--enable-reasoning`: Enable reasoning mode with thinking traces (adds a reasoning column)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit samples for testing
- `--shuffle`: Shuffle dataset before selecting samples (useful for random sampling)
- `--shuffle-seed`: Random seed for shuffling (default: 42)
- `--temperature`: Generation temperature (default: 0.1)
- `--guided-backend`: Backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or use the `HF_TOKEN` env var)

### Label Descriptions

Provide context for your labels to improve classification accuracy:

```bash
uv run classify-dataset.py \
    --input-dataset user/support-tickets \
    --column content \
    --labels "bug,feature,question,other" \
    --label-descriptions "bug:something is broken,feature:request for new functionality,question:asking for help,other:anything else" \
    --output-dataset user/tickets-classified
```

The model uses these descriptions to better understand what each label represents, leading to more accurate classifications.
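
Conceptually, the labels and their descriptions are folded into the classification prompt that the model sees. The helper below is a hypothetical illustration of that idea, not the script's actual prompt template.

```python
def build_prompt(text: str, labels: list[str], descriptions: dict[str, str]) -> str:
    """Hypothetical sketch: fold label descriptions into a classification prompt."""
    label_lines = "\n".join(
        f"- {label}: {descriptions.get(label, '')}" for label in labels
    )
    return (
        "Classify the following text into exactly one label.\n"
        f"Labels:\n{label_lines}\n\n"
        f"Text: {text}\n"
        "Answer with the label only."
    )

print(build_prompt(
    "The app crashes when I click save.",
    ["bug", "feature", "question", "other"],
    {"bug": "something is broken", "feature": "request for new functionality"},
))
```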

### Reasoning Mode

Enable thinking traces for interpretable classifications:

```bash
uv run classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative,neutral" \
    --enable-reasoning \
    --output-dataset user/imdb-with-reasoning
```

When `--enable-reasoning` is used:

- The model generates step-by-step reasoning using SmolLM3's thinking capabilities
- Output includes three columns: `classification`, `reasoning`, and `parsing_success`
- The final answer must be in JSON format: `{"label": "chosen_label"}` (see the parsing sketch below)
- Useful for understanding complex classification decisions
- Trade-off: slower but more interpretable
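
The parsing step might look roughly like the sketch below: strip the thinking trace, then pull the last `{"label": ...}` object out of the generated text and validate it against the allowed labels. This is an illustration of the idea (it assumes the model wraps its reasoning in `<think>` tags); the script's actual parser may differ.

```python
import json
import re

def extract_label(generation: str, valid_labels: set[str]) -> tuple[str | None, bool]:
    """Find the last {"label": ...} object in the output and validate it."""
    # Drop an optional <think>...</think> block if the model emitted one.
    answer = re.sub(r"<think>.*?</think>", "", generation, flags=re.DOTALL)
    for candidate in reversed(re.findall(r"\{[^{}]*\}", answer)):
        try:
            label = json.loads(candidate).get("label")
        except json.JSONDecodeError:
            continue
        if label in valid_labels:
            return label, True   # parsing_success = True
    return None, False           # parsing_success = False

print(extract_label('<think>mixed tone...</think> {"label": "neutral"}',
                    {"positive", "negative", "neutral"}))
```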

## Examples

### Sentiment Analysis

```bash
uv run classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative" \
    --output-dataset user/imdb-sentiment
```

### Support Ticket Classification

```bash
uv run classify-dataset.py \
    --input-dataset user/support-tickets \
    --column content \
    --labels "bug,feature_request,question,other" \
    --label-descriptions "bug:code or product not working as expected,feature_request:asking for new functionality,question:seeking help or clarification,other:general comments or feedback" \
    --output-dataset user/tickets-classified
```

### News Categorization

```bash
uv run classify-dataset.py \
    --input-dataset ag_news \
    --column text \
    --labels "world,sports,business,tech" \
    --output-dataset user/ag-news-categorized \
    --model meta-llama/Llama-3.2-3B-Instruct
```

### Complex Classification with Reasoning

```bash
uv run classify-dataset.py \
    --input-dataset user/customer-feedback \
    --column text \
    --labels "very_positive,positive,neutral,negative,very_negative" \
    --label-descriptions "very_positive:extremely satisfied,positive:generally satisfied,neutral:mixed feelings,negative:dissatisfied,very_negative:extremely dissatisfied" \
    --enable-reasoning \
    --output-dataset user/feedback-analyzed
```

This combines label descriptions with reasoning mode for maximum interpretability.

### ArXiv ML Research Classification

Classify academic papers into machine learning research areas:

```bash
# Fast classification with random sampling
uv run classify-dataset.py \
    --input-dataset librarian-bots/arxiv-metadata-snapshot \
    --column abstract \
    --labels "llm,computer_vision,reinforcement_learning,optimization,theory,other" \
    --label-descriptions "llm:language models and NLP,computer_vision:image and video processing,reinforcement_learning:RL and decision making,optimization:training and efficiency,theory:theoretical ML foundations,other:other ML topics" \
    --output-dataset user/arxiv-ml-classified \
    --split "train[:10000]" \
    --max-samples 100 \
    --shuffle

# With reasoning for nuanced classification
uv run classify-dataset.py \
    --input-dataset librarian-bots/arxiv-metadata-snapshot \
    --column abstract \
    --labels "multimodal,agents,reasoning,safety,efficiency" \
    --label-descriptions "multimodal:vision-language and cross-modal models,agents:autonomous agents and tool use,reasoning:reasoning and planning systems,safety:alignment and safety research,efficiency:model optimization and deployment" \
    --enable-reasoning \
    --output-dataset user/arxiv-frontier-research \
    --split "train[:1000]" \
    --max-samples 50
```

Reasoning mode is particularly valuable for academic abstracts, where papers often span multiple topics and require careful analysis to determine the primary focus.

## Running on HF Jobs

Optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires a Pro subscription or Team/Enterprise organization):

```bash
# Run on an L4 GPU with the vLLM image
hf jobs uv run \
    --flavor l4x1 \
    --image vllm/vllm-openai:latest \
    https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative" \
    --output-dataset user/imdb-classified
```

### GPU Flavors

- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models
- `a10g-small`: Fast inference for 3B models
- `a10g-large`: More memory for larger models
- `a100-large`: Maximum performance

## Advanced Usage

### Random Sampling

When working with ordered datasets, use `--shuffle` with `--max-samples` to get a representative sample:

```bash
# Get 50 random reviews instead of the first 50
uv run classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative" \
    --output-dataset user/imdb-sample \
    --max-samples 50 \
    --shuffle \
    --shuffle-seed 123  # For reproducibility
```

This is especially important for:

- Chronologically ordered datasets (news, papers, social media)
- Pre-sorted datasets (by rating, category, etc.)
- Testing on diverse samples before processing the full dataset
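
For reference, `--shuffle`, `--shuffle-seed`, and `--max-samples` correspond roughly to the following `datasets` operations (a sketch of the idea, not the script's exact code):

```python
from datasets import load_dataset

ds = load_dataset("stanfordnlp/imdb", split="train")

# --shuffle with --shuffle-seed 123, then --max-samples 50
sample = ds.shuffle(seed=123).select(range(50))
print(sample)
```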

### Using Different Models

By default, this script uses **HuggingFaceTB/SmolLM3-3B** - a fast, efficient 3B parameter model that works well for most classification tasks. You can swap in any other instruction-tuned model:

```bash
# Larger model for complex classification
uv run classify-dataset.py \
    --input-dataset user/legal-docs \
    --column text \
    --labels "contract,patent,brief,memo,other" \
    --output-dataset user/legal-classified \
    --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

vLLM handles batching automatically for optimal performance, so even very large datasets are processed efficiently without manual tuning:

```bash
uv run classify-dataset.py \
    --input-dataset user/huge-dataset \
    --column text \
    --labels "A,B,C" \
    --output-dataset user/huge-classified
```

## Performance

- **SmolLM3-3B (default)**: ~50-100 texts/second on A10G
- **7B models**: ~20-50 texts/second on A10G
- vLLM automatically optimizes batching for best throughput
- Performance scales with GPU memory and compute capability

## How It Works

1. **vLLM**: Provides efficient GPU batch inference with automatic batching
2. **Guided decoding**: Constrains generation via the configured backend (`outlines` by default) so outputs are always valid labels
3. **Structured generation**: Limits model outputs to the exact label choices you provide
4. **UV**: Handles all dependencies automatically

The script loads your dataset, preprocesses the texts, classifies each one with guaranteed valid outputs, then saves the results as new columns in the output dataset.
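
In vLLM terms, the core of the constrained classification step looks roughly like the sketch below (simplified; the real script adds prompt construction, preprocessing, reasoning mode, and Hub I/O):

```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["positive", "negative"]

llm = LLM(model="HuggingFaceTB/SmolLM3-3B", guided_decoding_backend="outlines")

# Constrain generation so the only possible outputs are the exact label strings.
params = SamplingParams(
    temperature=0.1,
    max_tokens=10,
    guided_decoding=GuidedDecodingParams(choice=labels),
)

outputs = llm.generate(["Classify the sentiment: I loved this movie."], params)
print(outputs[0].outputs[0].text)  # constrained to "positive" or "negative"
```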

## Troubleshooting

### CUDA Not Available

This script requires a GPU. Run it on:

- A machine with an NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances

### Out of Memory

- Use a smaller model
- Use a larger GPU (e.g., a100-large)

### Invalid/Skipped Texts

- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters
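
A minimal sketch of this kind of preprocessing, using the thresholds from the list above (the script's actual implementation may differ):

```python
MAX_CHARS = 4000

def preprocess(text: str | None) -> str | None:
    """Return a cleaned text, or None if the example should be marked invalid."""
    if text is None:
        return None
    text = text.strip()
    if len(text) < 3:          # too short to classify meaningfully
        return None
    return text[:MAX_CHARS]    # truncate very long texts
```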

### Classification Quality

- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Use `--enable-reasoning` for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues

If you see `ImportError: cannot import name 'GuidedDecodingParams'`:

- Your vLLM version is too old (the script requires >= 0.6.6)
- The script specifies the correct version in its dependencies
- UV should install the correct version automatically
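
To check which vLLM version is installed in your environment, something like this works:

```python
import vllm

print(vllm.__version__)  # should be >= 0.6.6 for GuidedDecodingParams
```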

## Advanced Workflows

For complex real-world workflows that integrate UV scripts with the Python HF Jobs API, see the [ArXiv ML Trends example](examples/arxiv-workflow/). This demonstrates:

- **Multi-stage pipelines**: Data preparation → GPU classification → Analysis
- **Python API orchestration**: Using `run_uv_job()` to manage GPU jobs programmatically
- **Production patterns**: Error handling, parallel execution, and incremental updates
- **Cost optimization**: Choosing appropriate compute resources for each task

```python
# Example: Submit a classification job via the Python API
from huggingface_hub import run_uv_job

job = run_uv_job(
    script="https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py",
    args=["--input-dataset", "my/dataset", "--labels", "A,B,C"],
    flavor="l4x1",
    image="vllm/vllm-openai:latest",
)
result = job.wait()
```

## License

This script is provided as-is for use with the UV Scripts organization.