
💬 Sentiment-Analysis-for-Product-Release-Sentiment

A BERT-based sentiment analysis model fine-tuned on a product review dataset. It classifies a text as Positive, Neutral, or Negative and returns a confidence score, making it well suited to analyzing customer feedback, reviews, and user comments.


✨ Model Highlights

  • 📌 Architecture: Based on bert-base-uncased by Google
  • 🔧 Fine-tuned on labeled product review data
  • 🔍 3-way sentiment classification: Negative (0), Neutral (1), Positive (2)
  • 💾 Quantized version available for faster inference

🧠 Intended Uses

  • ✅ Classifying product feedback and user reviews
  • ✅ Sentiment analysis for e-commerce platforms
  • ✅ Social media monitoring and customer opinion mining

🚫 Limitations

  • ❌ Designed for English text only
  • ❌ May not perform well on sarcastic or ironic inputs
  • ❌ May struggle with domains very different from product reviews (e.g., medical, legal)
  • ❌ Input texts longer than 128 tokens are truncated (see the snippet below)
  • ❌ The quantized version may show a slight drop in accuracy compared to the full-precision model
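
To see the truncation behavior, tokenize with the same settings the model uses. A minimal sketch; long_review is a placeholder for your own input:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
long_review = "This product " + "really, really " * 200 + "surprised me."
encoded = tokenizer(long_review, truncation=True, max_length=128)
print(len(encoded["input_ids"]))  # capped at 128; everything beyond is dropped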

πŸ‹οΈβ€β™‚οΈ Training Details

  • Base Model: bert-base-uncased
  • Dataset: Custom-labeled product review dataset
  • Epochs: 5
  • Batch Size: 8
  • Max Length: 128 tokens
  • Optimizer: AdamW
  • Loss Function: CrossEntropyLoss (with class balancing)
  • Hardware: Trained on NVIDIA GPU (CUDA-enabled)
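
For reference, a training run with these hyperparameters might look like the sketch below. This is not the exact script used for this model: the dataset is a tiny placeholder, and the class weights passed to CrossEntropyLoss are illustrative values, not the ones used here.

import torch
from torch import nn
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

# Placeholder stand-in for the custom-labeled product review dataset.
train_data = Dataset.from_dict({
    "text": ["Terrible battery life.", "It works as expected.", "Absolutely love it!"],
    "label": [0, 1, 2],  # 0 = Negative, 1 = Neutral, 2 = Positive
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

train_data = train_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

# Trainer subclass with a class-weighted CrossEntropyLoss ("class balancing").
class WeightedTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        weights = torch.tensor([1.0, 1.0, 1.0], device=outputs.logits.device)  # illustrative
        loss = nn.CrossEntropyLoss(weight=weights)(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss

args = TrainingArguments(
    output_dir="sentiment-bert",
    num_train_epochs=5,
    per_device_train_batch_size=8,  # the Trainer's default optimizer is AdamW
)

trainer = WeightedTrainer(
    model=model,
    args=args,
    train_dataset=train_data,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()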

📊 Evaluation Metrics

Metric      Score
Accuracy    0.90
F1          0.90
Precision   0.90
Recall      0.90
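
For reproducing this kind of evaluation, a minimal sketch using scikit-learn; the y_true/y_pred values below are placeholders, not this model's actual test predictions:

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder label IDs; a real evaluation would use a held-out test split.
y_true = [0, 1, 2, 2, 0, 1]
y_pred = [0, 1, 2, 1, 0, 1]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print(f"Accuracy {accuracy:.2f} | Precision {precision:.2f} | Recall {recall:.2f} | F1 {f1:.2f}")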

🔎 Label Mapping

Label ID    Sentiment
0           Negative
1           Neutral
2           Positive

🚀 Usage Example

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import torch.nn.functional as F

# Load model and tokenizer
model_name = "AventIQ-AI/Sentiment-Analysis-for-Product-Release-Sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Inference
def predict_sentiment(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    with torch.no_grad():
        logits = model(**inputs).logits
        probs = F.softmax(logits, dim=1)

    predicted_class_id = torch.argmax(probs, dim=1).item()
    confidence = probs[0][predicted_class_id].item()

    label_map = {0: "Negative", 1: "Neutral", 2: "Positive"}
    label = label_map[predicted_class_id]
    confidence_str = f"confidence: {confidence * 100:.1f}%"

    return label, confidence_str

# Example
print(predict_sentiment("The service was excellent and the staff was friendly."))

🧪 Quantization

  • Applied post-training dynamic quantization using PyTorch to reduce model size and speed up inference.
  • Quantized model supports CPU-based deployments.
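
A minimal sketch of how such a quantized copy can be produced with PyTorch dynamic quantization; this reproduces the technique, not necessarily the exact export behind the files in model/:

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "AventIQ-AI/Sentiment-Analysis-for-Product-Release-Sentiment"
)

# Swap the Linear layers for int8 dynamically quantized versions (CPU inference only).
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
quantized_model.eval()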

πŸ“ Repository Structure

.
β”œβ”€β”€ model/               # Quantized model files
β”œβ”€β”€ tokenizer/           # Tokenizer config and vocabulary
β”œβ”€β”€ model.safetensors/   # Fine-tuned full-precision model
β”œβ”€β”€ README.md            # Model documentation


🤝 Contributing

We welcome contributions! Please feel free to raise an issue or submit a pull request if you find a bug or have a suggestion.
