# Model Card for DeBERTa-v3-base-tasksource-nli
This is [DeBERTa-v3-base](https://hf.co/microsoft/deberta-v3-base) fine-tuned with multi-task learning on 600 tasks.
This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:
- Zero-shot entailment-based classification pipeline (similar to bart-mnli), see [ZS].
- Natural language inference, and many other tasks with tasksource-adapters, see [TA]; a minimal NLI sketch follows this list.
- Further fine-tuning on a new task (classification, token classification, or multiple choice).
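As a quick NLI illustration, here is a minimal sketch that loads the checkpoint directly (no adapters); the entailment/neutral/contradiction label mapping is assumed to come from the checkpoint config, and tasksource-adapters (see [TA]) remain the route for other tasks:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Azma-AI/deberta-base-multi-label-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair and take the highest-scoring class.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```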
# [ZS] Zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="Azma-AI/deberta-base-multi-label-classifier")
text = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(text, candidate_labels)
```
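The pipeline returns the candidate labels ranked by score. If several labels can apply at once, pass `multi_label=True` (e.g. `classifier(text, candidate_labels, multi_label=True)`) so each label is scored independently.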