# RoBERTa-Base Model for Emotion Classification

This repository hosts a fine-tuned RoBERTa-base model for emotion classification. The model classifies text into one of six emotion categories, making it suitable for sentiment analysis and understanding emotional content.

## Model Details
- Model Name: RoBERTa-Base for Emotion Classification
- Model Architecture: RoBERTa Base
- Task: Emotion Classification
- Dataset: Hugging Face Emotion Dataset
- Quantization: Float16 version available
- Fine-tuning Framework: Hugging Face Transformers
## Usage

### Installation

```bash
pip install transformers torch
```
### Loading the Model

```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
import torch
import re

# Load the model and tokenizer
model_path = "emotion-model"  # or "quantized-emotion-model" for the quantized version
model = RobertaForSequenceClassification.from_pretrained(model_path)
tokenizer = RobertaTokenizer.from_pretrained(model_path)

# Select a device and move the model onto it
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```
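When loading the quantized checkpoint, note that `from_pretrained` materializes weights in float32 by default; passing a `torch_dtype` keeps them in half precision. A minimal sketch (half precision is mainly useful on GPU):

```python
# Optional: load the float16 checkpoint without upcasting to float32
model_fp16 = RobertaForSequenceClassification.from_pretrained(
    "quantized-emotion-model", torch_dtype=torch.float16
).to(device)
```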
### Prediction Function

```python
def predict_emotions(texts, model, tokenizer, device="cpu"):
    """
    Predicts emotion labels for input text(s) using a fine-tuned transformer model.

    Args:
        texts (str or List[str]): A single string or list of strings to classify.
        model: Trained transformer model.
        tokenizer: Corresponding tokenizer.
        device (str): 'cpu' or 'cuda'. Default is 'cpu'.

    Returns:
        List[str]: List of predicted emotion labels.
    """
    # Ensure the model is on the requested device
    model.to(device)

    # Accept a single string by wrapping it in a list
    if isinstance(texts, str):
        texts = [texts]

    # Preprocess: simple text cleaning
    def preprocess(text):
        text = text.lower()
        text = re.sub(r"http\S+|www\S+|https\S+", "", text)  # strip URLs
        text = re.sub(r"@\w+|#", "", text)                   # strip @mentions and hash signs
        text = re.sub(r"[^a-zA-Z0-9\s.,!?']", "", text)      # drop other special characters
        text = re.sub(r"\s+", " ", text).strip()             # collapse whitespace
        return text

    cleaned_texts = [preprocess(t) for t in texts]

    # Tokenize
    inputs = tokenizer(
        cleaned_texts, padding=True, truncation=True, return_tensors="pt"
    ).to(device)

    # Inference
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
    preds = torch.argmax(outputs.logits, dim=1).tolist()

    # Label map for the Emotion dataset
    label_map = {
        0: "sadness",
        1: "joy",
        2: "love",
        3: "anger",
        4: "fear",
        5: "surprise",
    }
    return [label_map[p] for p in preds]
```
### Example Usage

```python
# Example texts
sample_texts = [
    "I'm so happy about the new job opportunity!",
    "I can't believe they cancelled my favorite show. This is terrible.",
    "The sunset over the mountains took my breath away. It was magnificent!",
]

# Run predictions
results = predict_emotions(sample_texts, model, tokenizer, device)

# Show results
for text, emotion in zip(sample_texts, results):
    print(f"Text: {text}\nPredicted Emotion: {emotion}\n")
```
## Performance Metrics
- Accuracy: 0.94
- F1 Score: 0.939736
- Precision: 0.941654
- Recall: 0.94
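A minimal sketch of how the metrics above can be computed on the dataset's test split, assuming the Hub ID `dair-ai/emotion` and weighted averaging (neither is confirmed by this card):

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

test = load_dataset("dair-ai/emotion", split="test")  # assumed Hub ID
label_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]

# Predict in batches to keep memory bounded
preds = []
for i in range(0, len(test["text"]), 64):
    names = predict_emotions(test["text"][i : i + 64], model, tokenizer, device)
    preds.extend(label_names.index(n) for n in names)

acc = accuracy_score(test["label"], preds)
prec, rec, f1, _ = precision_recall_fscore_support(test["label"], preds, average="weighted")
print(f"Accuracy {acc:.4f} | Precision {prec:.4f} | Recall {rec:.4f} | F1 {f1:.4f}")
```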
## Fine-Tuning Details

### Dataset

The model was fine-tuned on the Hugging Face Emotion dataset, which contains text labeled with one of six emotion categories:
- sadness
- joy
- love
- anger
- fear
- surprise
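A minimal loading sketch, assuming the dataset's current Hub ID `dair-ai/emotion` (it was previously published as `emotion`):

```python
from datasets import load_dataset

ds = load_dataset("dair-ai/emotion")  # splits: train / validation / test
print(ds["train"][0])                 # e.g. {'text': '...', 'label': 0}
```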
### Training Configuration
- Epochs: 3
- Batch Size: 16
- Learning Rate: 2e-5
- Max Length: 128 tokens
- Evaluation Strategy: epoch
- Weight Decay: 0.01
- Optimizer: AdamW
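The settings above map onto Hugging Face `TrainingArguments` roughly as follows; this is a sketch rather than the exact training script, and `tokenized_train` / `tokenized_val` are assumed to be the dataset tokenized with `max_length=128`:

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="emotion-model",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
    eval_strategy="epoch",  # "evaluation_strategy" in older transformers releases
)

trainer = Trainer(
    model=model,                    # AdamW is the Trainer default optimizer
    args=training_args,
    train_dataset=tokenized_train,  # assumed: tokenized with max_length=128
    eval_dataset=tokenized_val,
)
trainer.train()
```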
## Quantization

A quantized copy of the model, stored in PyTorch's float16 format, is available; it reduces model size and can improve inference efficiency.
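A minimal sketch of how such a float16 copy can be produced; the output directory matches `quantized-emotion-model/` in the repository structure below:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model = RobertaForSequenceClassification.from_pretrained("emotion-model")
model.half()  # cast weights to float16 in place
model.save_pretrained("quantized-emotion-model")

# The tokenizer is unchanged; save a copy alongside the weights
RobertaTokenizer.from_pretrained("emotion-model").save_pretrained("quantized-emotion-model")
```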
## Repository Structure

```
.
├── emotion-model/               # Full-precision model
│   ├── config.json
│   ├── model.safetensors
│   ├── tokenizer_config.json
│   ├── special_tokens_map.json
│   ├── vocab.json
│   └── merges.txt
├── quantized-emotion-model/     # Quantized model (float16)
│   ├── config.json
│   ├── model.safetensors
│   ├── tokenizer_config.json
│   ├── special_tokens_map.json
│   ├── vocab.json
│   └── merges.txt
└── README.md                    # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Emotion detection can be subjective and context-dependent.
- The quantized version may show minor accuracy degradation compared to the full-precision model.
## Contributing
Contributions are welcome! Feel free to open an issue or PR for improvements, fixes, or feature extensions.