# Fine-tuned Zero-Shot Classification Model
This model was fine-tuned using the SmartShot approach on a synthetic dataset derived from LLaMA to improve zero-shot classification performance.
## Model Details
- Base Model: MoritzLaurer/roberta-base-zeroshot-v2.0-c
- Training Data: Synthetic data created for natural language inference tasks
- Fine-tuning Method: SmartShot approach with NLI framing
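With NLI framing, zero-shot classification works by turning each candidate label into a hypothesis and scoring it for entailment against the input text; the per-label entailment scores are then normalized into a probability distribution. The sketch below (plain Python with made-up logit values; the helper names are illustrative, not from this model's code) shows that final normalization step:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def rank_labels(entailment_logits, labels):
    """Given one entailment logit per (text, hypothesis) pair,
    normalize across labels and return (label, prob) pairs,
    highest probability first."""
    probs = softmax(entailment_logits)
    return sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)

# Hypothetical logits: the NLI model scores each hypothesis
# ("This text is about <label>.") against the input text.
labels = ["shares rise", "shares fall"]
logits = [2.1, -0.4]  # made-up values for illustration
ranked = rank_labels(logits, labels)
```

The single-label pipeline applies a softmax across labels in this way, so the scores sum to 1 and the first entry is the model's top prediction.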
## Usage
```python
from transformers import pipeline

# Load the fine-tuned zero-shot classification pipeline
classifier = pipeline(
    "zero-shot-classification",
    model="gincioks/smartshot-zeroshot-finetuned-v0.1.2",
)

text = (
    "Shares of Hyundai Motor jumped nearly 8% on Wednesday, a day after "
    "South Korea announced a 'green new deal' to spur use of environmentally "
    "friendly vehicles."
)
labels = [
    "Shares that rise due to the 'green new deal'",
    "Shares that fall due to the 'green new deal'",
]

results = classifier(text, labels)
print(results)
```
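The zero-shot pipeline returns a dict with the input `sequence`, the candidate `labels` sorted by score (highest first), and the matching `scores`. A small sketch of reading off the top prediction, using a made-up result dict for illustration:

```python
# Shape of the pipeline output; the score values here are
# invented for illustration, not produced by this model.
result = {
    "sequence": "Shares of Hyundai Motor jumped nearly 8% ...",
    "labels": [
        "Shares that rise due to the 'green new deal'",
        "Shares that fall due to the 'green new deal'",
    ],
    "scores": [0.97, 0.03],  # aligned with `labels`, highest first
}

# Labels are pre-sorted, so the top prediction is the first pair.
top_label, top_score = result["labels"][0], result["scores"][0]
```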
## Training Procedure
This model was fine-tuned with the following parameters:
- Learning rate: 2e-05
- Epochs: 1
- Batch size: 1
- Warmup ratio: 0.06
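Assuming the Hugging Face `Trainer` API was used (the card does not say which training framework was), the listed hyperparameters would map onto a `TrainingArguments` configuration like this (the `output_dir` name is hypothetical):

```python
from transformers import TrainingArguments

# Hypothetical mapping of the card's hyperparameters onto
# Hugging Face TrainingArguments; a sketch, not the actual
# training script used for this model.
training_args = TrainingArguments(
    output_dir="smartshot-zeroshot-finetuned",  # assumed name
    learning_rate=2e-5,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    warmup_ratio=0.06,
)
```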
## Performance and Limitations
The model achieves improved performance on zero-shot classification tasks, but it may still underperform in domains not covered by the training data.