davanstrien (HF Staff) committed on
Commit 52de1e3 · 1 Parent(s): 374adbf

Add vLLM-based text classification script


- Zero-shot classification using vLLM with guided decoding
- Supports any HuggingFace dataset with text columns
- Guaranteed valid outputs through structured generation
- Optimized for GPU inference with automatic batching
- Three prompt styles: simple, detailed, reasoning
- Ready for HF Jobs with vLLM image support

Files changed (2)
  1. README.md +217 -0
  2. classify-dataset.py +371 -0
README.md ADDED
@@ -0,0 +1,217 @@
---
viewer: false
tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
---

# Dataset Classification with vLLM

Efficient text classification for Hugging Face datasets using vLLM with structured outputs. This script provides GPU-accelerated classification with guaranteed valid outputs through guided decoding.

## 🚀 Quick Start

```bash
# Classify IMDB reviews
uv run classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative" \
    --output-dataset user/imdb-classified
```

That's it! No installation, no setup: just `uv run`.

## 📋 Requirements

- **GPU Required**: This script uses vLLM for efficient inference
- Python 3.10+
- UV (will handle all dependencies automatically)
- vLLM >= 0.6.6 (for guided decoding support)

## 🎯 Features

- **Guaranteed valid outputs** using vLLM's guided decoding with outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration**: read and write datasets seamlessly

## 💻 Usage

### Basic Classification

```bash
uv run classify-dataset.py \
    --input-dataset <dataset-id> \
    --column <text-column> \
    --labels <comma-separated-labels> \
    --output-dataset <output-id>
```

### Arguments

**Required:**
- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**
- `--model`: Model to use (default: `HuggingFaceTB/SmolLM3-3B`)
- `--prompt-style`: Choose from `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit samples for testing
- `--temperature`: Generation temperature (default: 0.1)
- `--guided-backend`: Backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or use `HF_TOKEN` env var)

### Prompt Styles

- **simple**: Direct classification prompt
- **detailed**: Emphasizes exact category matching
- **reasoning**: Includes brief analysis before classification

All styles benefit from structured output guarantees: the model can only output one of your labels. For reference, the `simple` template is shown below.

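The template comes straight from `PROMPT_STYLES` in `classify-dataset.py`; the filled-in labels and text here are only an illustration:

```python
# The `simple` prompt template used by the script; {labels} and {text}
# are substituted per example. The values below are illustrative only.
SIMPLE_TEMPLATE = """Classify this text as one of: {labels}

Text: {text}

Label:"""

print(SIMPLE_TEMPLATE.format(labels="positive, negative", text="A great film."))
```
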
## 📊 Examples

### Sentiment Analysis
```bash
uv run classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative" \
    --output-dataset user/imdb-sentiment
```

### Support Ticket Classification
```bash
uv run classify-dataset.py \
    --input-dataset user/support-tickets \
    --column content \
    --labels "bug,feature_request,question,other" \
    --output-dataset user/tickets-classified \
    --prompt-style reasoning
```

### News Categorization
```bash
uv run classify-dataset.py \
    --input-dataset ag_news \
    --column text \
    --labels "world,sports,business,tech" \
    --output-dataset user/ag-news-categorized \
    --model meta-llama/Llama-3.2-3B-Instruct
```

## 🚀 Running on HF Jobs

This script is optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires Pro subscription or Team/Enterprise organization):

```bash
# Run on L4 GPU with vLLM image
hf jobs uv run \
    --flavor l4x1 \
    --image vllm/vllm-openai:latest \
    classify-dataset.py \
    --input-dataset stanfordnlp/imdb \
    --column text \
    --labels "positive,negative" \
    --output-dataset user/imdb-classified

# Run on A10 GPU with custom model
hf jobs uv run \
    --flavor a10g-large \
    --image vllm/vllm-openai:latest \
    classify-dataset.py \
    --input-dataset user/reviews \
    --column review_text \
    --labels "1,2,3,4,5" \
    --output-dataset user/reviews-rated \
    --model mistralai/Mistral-7B-Instruct-v0.3 \
    --prompt-style detailed
```

### GPU Flavors
- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models
- `a10g-small`: Fast inference for 3B models
- `a10g-large`: More memory for larger models
- `a100-large`: Maximum performance

## 🔧 Advanced Usage

### Using Different Models

The default model is SmolLM3-3B, but you can use any instruction-tuned model:

```bash
# Larger model for complex classification
uv run classify-dataset.py \
    --input-dataset user/legal-docs \
    --column text \
    --labels "contract,patent,brief,memo,other" \
    --output-dataset user/legal-classified \
    --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

vLLM batches requests internally for optimal throughput, so even very large datasets are processed efficiently without manual chunking:

```bash
uv run classify-dataset.py \
    --input-dataset user/huge-dataset \
    --column text \
    --labels "A,B,C" \
    --output-dataset user/huge-classified
```

## 📈 Performance

- **SmolLM3-3B**: ~50-100 texts/second on A10
- **7B models**: ~20-50 texts/second on A10
- vLLM automatically optimizes batching for best throughput

## 🤝 How It Works

1. **vLLM**: Provides efficient GPU batch inference
2. **Guided Decoding**: Uses outlines to guarantee valid label outputs
3. **Structured Generation**: Constrains model outputs to exact label choices
4. **UV**: Handles all dependencies automatically

The script loads your dataset, preprocesses the texts, classifies each one using guided decoding so that only valid labels can be generated, and saves the results as a new column in the output dataset. The core of this flow is sketched below.

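The sketch below condenses that flow; names mirror `classify-dataset.py`, the model, labels, and prompt shown are illustrative, and a GPU with vLLM >= 0.6.6 is assumed:

```python
# Guided decoding restricts generation to exactly one of the supplied labels.
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["positive", "negative"]            # illustrative label set
llm = LLM(model="HuggingFaceTB/SmolLM3-3B")  # the script's default model

sampling = SamplingParams(
    guided_decoding=GuidedDecodingParams(choice=labels),
    temperature=0.1,
    max_tokens=50,
)

prompt = f"Classify this text as one of: {', '.join(labels)}\n\nText: A great film.\n\nLabel:"
outputs = llm.generate([prompt], sampling)   # vLLM batches prompts internally
print(outputs[0].outputs[0].text.strip())    # always one of `labels`
```
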
## 🐛 Troubleshooting

### CUDA Not Available
This script requires a GPU. Run it on:
- A machine with an NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances

### Out of Memory
- Use a smaller model
- Use a larger GPU (e.g., a100-large)

### Invalid/Skipped Texts
- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters (see the sketch below)

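These thresholds correspond to `preprocess_text` and `validate_text` in `classify-dataset.py`; rows that fail validation end up as `None` in the `classification` column. A minimal sketch of the same rules:

```python
# Mirrors the script's preprocessing thresholds.
MIN_TEXT_LENGTH = 3      # shorter texts are skipped
MAX_TEXT_LENGTH = 4000   # longer texts are truncated

def preprocess_text(text):
    if not text or not isinstance(text, str):
        return ""
    text = text.strip()
    if len(text) > MAX_TEXT_LENGTH:
        text = text[:MAX_TEXT_LENGTH] + "..."
    return text

def validate_text(text):
    return bool(text) and len(text) >= MIN_TEXT_LENGTH
```
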
### Classification Quality
- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues
If you see `ImportError: cannot import name 'GuidedDecodingParams'`:
- Your vLLM version is too old (requires >= 0.6.6)
- The script specifies the correct version in its inline dependencies (see the header excerpt below)
- UV should automatically install the correct version

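For reference, this is the inline metadata header at the top of `classify-dataset.py` that UV reads to resolve dependencies:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",
#     "transformers",
#     "torch",
#     "datasets",
#     "huggingface-hub[hf_transfer]",
# ]
# ///
```
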
## 📝 License

This script is provided as-is for use with the UV Scripts organization.
classify-dataset.py ADDED
@@ -0,0 +1,371 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",
#     "transformers",
#     "torch",
#     "datasets",
#     "huggingface-hub[hf_transfer]",
# ]
# ///

"""
Classify text columns in Hugging Face datasets using vLLM with structured outputs.

This script provides efficient GPU-based classification with guaranteed valid outputs,
optimized for running on HF Jobs.

Example:
    uv run classify-dataset.py \\
        --input-dataset imdb \\
        --column text \\
        --labels "positive,negative" \\
        --output-dataset user/imdb-classified

HF Jobs example:
    hf jobs uv run \\
        --flavor a10g-small \\
        --image vllm/vllm-openai:latest \\
        classify-dataset.py \\
        --input-dataset user/emails \\
        --column content \\
        --labels "spam,ham" \\
        --output-dataset user/emails-classified \\
        --prompt-style reasoning
"""

import argparse
import logging
import os
import sys
from typing import List

import torch
from datasets import load_dataset
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

# Default model - SmolLM3 for good balance of speed and quality
DEFAULT_MODEL = "HuggingFaceTB/SmolLM3-3B"

# Prompt styles for classification
PROMPT_STYLES = {
    "simple": """Classify this text as one of: {labels}

Text: {text}

Label:""",

    "detailed": """Task: Classify the following text into EXACTLY ONE of these categories.
Available categories: {labels}

Text to classify:
{text}

Category:""",

    "reasoning": """Analyze the following text and determine which category it belongs to.
Available categories: {labels}

Text to analyze:
{text}

Brief analysis: Let me examine the key aspects of this text.
Category:""",
}

# Minimum text length for valid classification
MIN_TEXT_LENGTH = 3

# Maximum text length (in characters) to avoid context overflow
MAX_TEXT_LENGTH = 4000

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Classify text in HuggingFace datasets using vLLM with structured outputs",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__
    )

    # Required arguments
    parser.add_argument(
        "--input-dataset",
        type=str,
        required=True,
        help="Input dataset ID on Hugging Face Hub"
    )
    parser.add_argument(
        "--column",
        type=str,
        required=True,
        help="Name of the text column to classify"
    )
    parser.add_argument(
        "--labels",
        type=str,
        required=True,
        help="Comma-separated list of classification labels (e.g., 'positive,negative')"
    )
    parser.add_argument(
        "--output-dataset",
        type=str,
        required=True,
        help="Output dataset ID on Hugging Face Hub"
    )

    # Optional arguments
    parser.add_argument(
        "--model",
        type=str,
        default=DEFAULT_MODEL,
        help=f"Model to use for classification (default: {DEFAULT_MODEL})"
    )
    # Removed --batch-size argument as vLLM handles batching internally
    parser.add_argument(
        "--prompt-style",
        type=str,
        choices=list(PROMPT_STYLES.keys()),
        default="simple",
        help="Prompt style to use (default: simple)"
    )
    parser.add_argument(
        "--max-samples",
        type=int,
        default=None,
        help="Maximum number of samples to process (for testing)"
    )
    parser.add_argument(
        "--hf-token",
        type=str,
        default=os.environ.get("HF_TOKEN"),
        help="Hugging Face API token (default: HF_TOKEN env var)"
    )
    parser.add_argument(
        "--split",
        type=str,
        default="train",
        help="Dataset split to process (default: train)"
    )
    parser.add_argument(
        "--temperature",
        type=float,
        default=0.1,
        help="Temperature for generation (default: 0.1)"
    )
    parser.add_argument(
        "--max-tokens",
        type=int,
        default=50,
        help="Maximum tokens to generate (default: 50)"
    )
    parser.add_argument(
        "--guided-backend",
        type=str,
        default="outlines",
        help="Guided decoding backend (default: outlines)"
    )

    return parser.parse_args()


def preprocess_text(text: str) -> str:
    """Preprocess text for classification."""
    if not text or not isinstance(text, str):
        return ""

    # Strip whitespace
    text = text.strip()

    # Truncate if too long
    if len(text) > MAX_TEXT_LENGTH:
        text = text[:MAX_TEXT_LENGTH] + "..."

    return text


def validate_text(text: str) -> bool:
    """Check if text is valid for classification."""
    if not text or len(text) < MIN_TEXT_LENGTH:
        return False
    return True


def prepare_prompts(
    texts: List[str],
    labels: List[str],
    prompt_template: str
) -> tuple[List[str], List[int]]:
    """Prepare prompts for classification, filtering invalid texts."""
    prompts = []
    valid_indices = []

    for i, text in enumerate(texts):
        processed_text = preprocess_text(text)
        if validate_text(processed_text):
            prompt = prompt_template.format(
                labels=", ".join(labels),
                text=processed_text
            )
            prompts.append(prompt)
            valid_indices.append(i)

    return prompts, valid_indices


def main():
    args = parse_args()

    # Check CUDA availability
    if not torch.cuda.is_available():
        logger.error("CUDA is not available. This script requires a GPU.")
        logger.error("Please run on a machine with GPU support or use HF Jobs.")
        sys.exit(1)

    logger.info(f"CUDA available. Using device: {torch.cuda.get_device_name(0)}")

    # Parse and validate labels
    labels = [label.strip() for label in args.labels.split(",")]
    if len(labels) < 2:
        logger.error("At least two labels are required for classification.")
        sys.exit(1)
    logger.info(f"Classification labels: {labels}")

    # Load dataset
    logger.info(f"Loading dataset: {args.input_dataset}")
    try:
        dataset = load_dataset(args.input_dataset, split=args.split)

        # Limit samples if specified
        if args.max_samples:
            dataset = dataset.select(range(min(args.max_samples, len(dataset))))
            logger.info(f"Limited dataset to {len(dataset)} samples")

        logger.info(f"Loaded {len(dataset)} samples from split '{args.split}'")
    except Exception as e:
        logger.error(f"Failed to load dataset: {e}")
        sys.exit(1)

    # Verify column exists
    if args.column not in dataset.column_names:
        logger.error(f"Column '{args.column}' not found in dataset.")
        logger.error(f"Available columns: {dataset.column_names}")
        sys.exit(1)

    # Extract texts
    texts = dataset[args.column]

    # Initialize vLLM
    logger.info(f"Initializing vLLM with model: {args.model}")
    logger.info(f"Using guided decoding backend: {args.guided_backend}")
    try:
        llm = LLM(
            model=args.model,
            trust_remote_code=True,
            dtype="auto",
            gpu_memory_utilization=0.95,
            guided_decoding_backend=args.guided_backend,
        )
    except Exception as e:
        logger.error(f"Failed to initialize vLLM: {e}")
        sys.exit(1)

    # Set up guided decoding parameters
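    # choice=labels constrains decoding so the model can only emit one of the provided labels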
    guided_params = GuidedDecodingParams(choice=labels)

    # Set up sampling parameters with structured output
    sampling_params = SamplingParams(
        guided_decoding=guided_params,
        temperature=args.temperature,
        max_tokens=args.max_tokens,
    )

    # Get prompt template
    prompt_template = PROMPT_STYLES[args.prompt_style]
    logger.info(f"Using prompt style '{args.prompt_style}'")
    logger.info("Using structured output with guided_choice - outputs guaranteed to be valid labels")

    # Prepare all prompts
    logger.info("Preparing prompts for classification...")
    all_prompts, valid_indices = prepare_prompts(texts, labels, prompt_template)

    if not all_prompts:
        logger.error("No valid texts found for classification.")
        sys.exit(1)

    logger.info(f"Prepared {len(all_prompts)} valid prompts out of {len(texts)} texts")

    # Let vLLM handle batching internally
    logger.info("Starting classification (vLLM will handle batching internally)...")

    try:
        # Generate all classifications at once - vLLM handles batching
        outputs = llm.generate(all_prompts, sampling_params)

        # Map results back to original indices
        all_classifications = [None] * len(texts)
        for idx, output in enumerate(outputs):
            original_idx = valid_indices[idx]
            generated_text = output.outputs[0].text.strip()
            all_classifications[original_idx] = generated_text

        # Count statistics
        valid_texts = len(valid_indices)
        total_texts = len(texts)

    except Exception as e:
        logger.error(f"Classification failed: {e}")
        sys.exit(1)

    # Add classifications to dataset
    dataset = dataset.add_column("classification", all_classifications)

    # Calculate statistics
    none_count = total_texts - valid_texts
    if none_count > 0:
        logger.warning(f"{none_count} texts were too short or invalid for classification")

    # Show classification distribution
    label_counts = {label: all_classifications.count(label) for label in labels}
    logger.info("Classification distribution:")
    for label, count in label_counts.items():
        percentage = count / total_texts * 100 if total_texts > 0 else 0
        logger.info(f" {label}: {count} ({percentage:.1f}%)")
    if none_count > 0:
        none_percentage = none_count / total_texts * 100
        logger.info(f" Invalid/Skipped: {none_count} ({none_percentage:.1f}%)")

    # Log success rate
    success_rate = (valid_texts / total_texts * 100) if total_texts > 0 else 0
    logger.info(f"Classification success rate: {success_rate:.1f}%")

    # Save to Hub
    logger.info(f"Pushing dataset to Hub: {args.output_dataset}")
    try:
        dataset.push_to_hub(
            args.output_dataset,
            token=args.hf_token,
            commit_message=f"Add classifications using {args.model} with structured outputs"
        )
        logger.info(f"Successfully pushed to: https://huggingface.co/datasets/{args.output_dataset}")
    except Exception as e:
        logger.error(f"Failed to push to Hub: {e}")
        sys.exit(1)


if __name__ == "__main__":
    if len(sys.argv) == 1:
        print("Example HF Jobs command:")
        print("hf jobs uv run \\")
        print(" --flavor l4x1 \\")
        print(" --image vllm/vllm-openai:latest \\")
        print(" classify-dataset.py \\")
        print(" --input-dataset stanfordnlp/imdb \\")
        print(" --column text \\")
        print(" --labels 'positive,negative' \\")
        print(" --output-dataset user/imdb-classified")
        sys.exit(0)

    main()