---
viewer: false
tags:
  - uv-script
  - classification
  - vllm
  - structured-outputs
  - gpu-required
---

# Dataset Classification with vLLM

Efficient text classification for Hugging Face datasets using vLLM with structured outputs. This script provides GPU-accelerated classification with guaranteed valid outputs through guided decoding.

## 🚀 Quick Start

```bash
# Classify IMDB reviews
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified
```

That's it! No installation, no setup: just `uv run`.

## 📋 Requirements

- **GPU required**: this script uses vLLM for efficient inference
- **Python 3.10+**
- **UV**: handles all dependencies automatically (see the inline metadata sketch below)
- **vLLM >= 0.6.6**: needed for guided decoding support
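
UV resolves everything from the script's inline metadata (PEP 723), so there is nothing to install by hand. The top of `classify-dataset.py` presumably carries a header along these lines (the exact dependency list here is an assumption, not copied from the script):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",       # guided decoding support
#     "datasets",          # loading and saving Hub datasets
#     "huggingface-hub",   # authentication and uploads
# ]
# ///
```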

## 🎯 Features

- **Guaranteed valid outputs** using vLLM's guided decoding with outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration**: read and write datasets seamlessly

## 💻 Usage

### Basic Classification

```bash
uv run classify-dataset.py \
  --input-dataset <dataset-id> \
  --column <text-column> \
  --labels <comma-separated-labels> \
  --output-dataset <output-id>
```

### Arguments

**Required:**

- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: name of the text column to classify
- `--labels`: comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: where to save the classified dataset

**Optional:**

- `--model`: model to use (default: `HuggingFaceTB/SmolLM3-3B`)
- `--prompt-style`: one of `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: dataset split to process (default: `train`)
- `--max-samples`: limit the number of samples, useful for testing
- `--temperature`: generation temperature (default: 0.1)
- `--guided-backend`: backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or set the `HF_TOKEN` environment variable)

### Prompt Styles

- `simple`: direct classification prompt
- `detailed`: emphasizes exact category matching
- `reasoning`: includes a brief analysis before classification

All styles benefit from structured output guarantees: the model can only output valid labels!
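
Under the hood, this guarantee comes from vLLM's `GuidedDecodingParams` with a `choice` constraint. A minimal sketch, assuming the script's default model and `outlines` backend (the prompt wording is illustrative):

```python
# Minimal guided-decoding sketch: generation is constrained to the label set.
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["positive", "negative"]
llm = LLM(model="HuggingFaceTB/SmolLM3-3B")

params = SamplingParams(
    temperature=0.1,
    max_tokens=10,
    # The model can only emit one of the listed labels, nothing else
    guided_decoding=GuidedDecodingParams(choice=labels, backend="outlines"),
)

outputs = llm.generate(["Classify the sentiment of: I loved this movie!"], params)
print(outputs[0].outputs[0].text)  # always "positive" or "negative"
```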

## 📊 Examples

### Sentiment Analysis

```bash
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-sentiment
```

### Support Ticket Classification

```bash
uv run classify-dataset.py \
  --input-dataset user/support-tickets \
  --column content \
  --labels "bug,feature_request,question,other" \
  --output-dataset user/tickets-classified \
  --prompt-style reasoning
```

### News Categorization

```bash
uv run classify-dataset.py \
  --input-dataset ag_news \
  --column text \
  --labels "world,sports,business,tech" \
  --output-dataset user/ag-news-categorized \
  --model meta-llama/Llama-3.2-3B-Instruct
```

## 🚀 Running on HF Jobs

This script is optimized for Hugging Face Jobs (requires a Pro subscription or a Team/Enterprise organization):

```bash
# Run on an L4 GPU with the vLLM image
hf jobs uv run \
  --flavor l4x1 \
  --image vllm/vllm-openai:latest \
  classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified

# Run on an A10G GPU with a custom model
hf jobs uv run \
  --flavor a10g-large \
  --image vllm/vllm-openai:latest \
  classify-dataset.py \
  --input-dataset user/reviews \
  --column review_text \
  --labels "1,2,3,4,5" \
  --output-dataset user/reviews-rated \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --prompt-style detailed
```

### GPU Flavors

- `t4-small`: budget option for smaller models
- `l4x1`: good balance for 7B models
- `a10g-small`: fast inference for 3B models
- `a10g-large`: more memory for larger models
- `a100-large`: maximum performance

## 🔧 Advanced Usage

### Using Different Models

The default model is `HuggingFaceTB/SmolLM3-3B`, but you can use any instruction-tuned model:

```bash
# Larger model for complex classification
uv run classify-dataset.py \
  --input-dataset user/legal-docs \
  --column text \
  --labels "contract,patent,brief,memo,other" \
  --output-dataset user/legal-classified \
  --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

vLLM handles batching automatically, so even very large datasets are processed at full throughput without manual chunking (use `--max-samples` first if you want a quick smoke test):

```bash
uv run classify-dataset.py \
  --input-dataset user/huge-dataset \
  --column text \
  --labels "A,B,C" \
  --output-dataset user/huge-classified
```

## 📈 Performance

- SmolLM3-3B: ~50-100 texts/second on an A10G
- 7B models: ~20-50 texts/second on an A10G
- vLLM tunes batching automatically for best throughput

## 🤝 How It Works

1. **vLLM** provides efficient batched GPU inference
2. **Guided decoding** uses outlines to guarantee valid label outputs
3. **Structured generation** constrains model outputs to the exact label choices
4. **UV** handles all dependencies automatically

The script loads your dataset, preprocesses texts, classifies each one using guided decoding to ensure only valid labels are generated, then saves the results as a new column in the output dataset.
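
A condensed sketch of that flow (the prompt template and output column name here are illustrative, not the script's exact ones):

```python
# Condensed end-to-end flow: load, prompt, classify with guided decoding, save.
from datasets import load_dataset
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["positive", "negative"]
ds = load_dataset("stanfordnlp/imdb", split="train")

# One prompt per row; long texts truncated as in the script
prompts = [
    f"Classify this text as one of: {', '.join(labels)}\n\nText: {t[:4000]}\n\nLabel:"
    for t in ds["text"]
]

llm = LLM(model="HuggingFaceTB/SmolLM3-3B")
params = SamplingParams(
    temperature=0.1,
    max_tokens=10,
    guided_decoding=GuidedDecodingParams(choice=labels),
)

# vLLM schedules the whole prompt list with continuous batching
outputs = llm.generate(prompts, params)
ds = ds.add_column("classification", [o.outputs[0].text.strip() for o in outputs])
ds.push_to_hub("user/imdb-classified")
```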

πŸ› Troubleshooting

CUDA Not Available

This script requires a GPU. Run it on:

  • A machine with NVIDIA GPU
  • HF Jobs (recommended)
  • Cloud GPU instances

### Out of Memory

- Use a smaller model
- Use a larger GPU (e.g., `a100-large`)

### Invalid/Skipped Texts

- Texts shorter than 3 characters are skipped
- Empty or `None` values are marked as invalid
- Very long texts are truncated to 4,000 characters
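
The rules above amount to something like this (function name and structure are hypothetical; the limits match the documented behavior):

```python
# Hypothetical preprocessing mirroring the documented validation rules
def preprocess_text(text, max_chars: int = 4000):
    if text is None or len(text.strip()) < 3:
        return None          # empty/None or too short: marked invalid, skipped
    return text[:max_chars]  # very long texts are truncated
```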

### Classification Quality

- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues

If you see `ImportError: cannot import name 'GuidedDecodingParams'`:

- your vLLM version is too old (guided decoding requires >= 0.6.6)
- the script pins a suitable version in its inline dependencies
- UV should install the correct version automatically; you can double-check with the snippet below
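
A quick way to confirm guided decoding support in the active environment (import path as used by vLLM 0.6.x; adjust if your version differs):

```python
# Sanity check: does the installed vLLM expose guided decoding?
try:
    from vllm.sampling_params import GuidedDecodingParams  # noqa: F401
    print("Guided decoding available")
except ImportError:
    print("vLLM is too old for guided decoding; upgrade to >= 0.6.6")
```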

πŸ“ License

This script is provided as-is for use with the UV Scripts organization.