---
base_model: bert-base-cased
datasets:
- ma2za/many_emotions
license: apache-2.0
tags:
- onnx
- emotion-detection
- BaseLM:bert-base-cased
---
# BERT-Based Emotion Detection on ma2za/many_emotions

This repository hosts a fine-tuned emotion detection model built on [BERT-base-cased](https://huggingface.co/bert-base-cased). The model is trained on the [ma2za/many_emotions](https://huggingface.co/datasets/ma2za/many_emotions) dataset to classify text into one of seven emotion categories: anger, fear, joy, love, sadness, surprise, and neutral. The model is available in both PyTorch and ONNX formats for efficient deployment.

## Model Details

### Model Description
- **Developed by:** iimran
- **Model Type:** Sequence Classification (Emotion Detection)
- **Base Model:** bert-base-cased
- **Dataset:** ma2za/many_emotions
- **Export Format:** ONNX (for deployment)
- **License:** Apache-2.0
- **Tags:** onnx, emotion-detection, BERT, sequence-classification

This model was fine-tuned on the ma2za/many_emotions dataset to classify text into the emotion categories listed above. A subset of the training data was used for quick experimentation; the published model, however, was trained on the complete dataset and is publicly available.

## Training Details

### Dataset Details
- **Dataset ID:** ma2za/many_emotions
- **Text Column:** `text`
- **Label Column:** `label`
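
For reference, the columns above can be inspected with the `datasets` library. This is a minimal sketch and not part of the original training code; depending on the dataset version, `load_dataset` may require a configuration name, so adjust the call if it fails.

```python
from datasets import load_dataset

# Load ma2za/many_emotions from the Hub. The dataset may define named
# configurations; if so, pass the configuration name as the second argument.
dataset = load_dataset("ma2za/many_emotions")

# Inspect the text and label columns used for fine-tuning.
train_split = dataset["train"]
print(train_split.column_names)  # expected to include "text" and "label"
print(train_split[0]["text"], train_split[0]["label"])
```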
### Training Hyperparameters

- **Epochs:** 1 (for quick test; adjust to your needs)
- **Per Device Batch Size:** 96
- **Learning Rate:** 1e-5
- **Weight Decay:** 0.01
- **Optimizer:** AdamW
- **Training Duration:** The full training run on the complete dataset (approximately 2.44 million training examples) was completed in about 3 hours and 40 minutes.
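
The exact training script is not included in this repository; the sketch below only illustrates how the hyperparameters above map onto a standard `transformers` `Trainer` setup. The `tokenized_train` and `tokenized_eval` variables are hypothetical placeholders for tokenized splits of ma2za/many_emotions.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=7)

def tokenize(batch):
    # Fixed max_length chosen to match the 256-token inference code further below.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# TrainingArguments mirroring the hyperparameters listed above; AdamW is the
# Trainer's default optimizer, so it is not set explicitly.
training_args = TrainingArguments(
    output_dir="bert-emotion-detection",
    num_train_epochs=1,
    per_device_train_batch_size=96,
    learning_rate=1e-5,
    weight_decay=0.01,
)

# tokenized_train / tokenized_eval are hypothetical placeholders for splits of
# ma2za/many_emotions that have been mapped through tokenize().
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_eval,
)
trainer.train()
```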
## ONNX Export

The model has been exported to the ONNX format using opset version 14, ensuring support for modern operators such as `scaled_dot_product_attention`. This enables flexible deployment scenarios across different platforms using ONNX Runtime.
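
The export script itself is not part of this repository; the following is one way such an export could be reproduced with `torch.onnx.export` at opset 14. The checkpoint path is a placeholder, and the 256-token sequence length is an assumption chosen to match the inference code below.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Path to the fine-tuned PyTorch checkpoint (placeholder).
checkpoint = "path/to/finetuned-emotion-model"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()
# Return plain tuples instead of ModelOutput objects so the tracer can handle the outputs.
model.config.return_dict = False
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Build a dummy input with the same fixed sequence length used at inference time.
dummy = tokenizer(
    "An example sentence.",
    return_tensors="pt",
    padding="max_length",
    truncation=True,
    max_length=256,
)

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model.onnx",
    opset_version=14,
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch"},
        "attention_mask": {0: "batch"},
        "logits": {0: "batch"},
    },
)
```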
## How to Load the Model

Instead of loading the model from a local directory, you can load it directly from the Hugging Face Hub using the repository name `iimran/EmotionDetection`.

### Running Inference with ONNX Runtime
```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer, AutoConfig
from huggingface_hub import hf_hub_download

# Specify the repository details.
repo_id = "iimran/EmotionDetection"
filename = "model.onnx"

# Download the ONNX model file from the Hub.
onnx_model_path = hf_hub_download(repo_id=repo_id, filename=filename)
print("Model downloaded to:", onnx_model_path)

# Load the tokenizer and configuration from the repository.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
config = AutoConfig.from_pretrained(repo_id)

# Use the id2label mapping from the config if present; otherwise fall back to
# the default mapping for ma2za/many_emotions.
if getattr(config, "id2label", None):
    id2label = config.id2label
else:
    id2label = {
        0: "anger",
        1: "fear",
        2: "joy",
        3: "love",
        4: "sadness",
        5: "surprise",
        6: "neutral",
    }
print("id2label mapping:", id2label)

# Create an ONNX Runtime inference session using the downloaded model file.
session = ort.InferenceSession(onnx_model_path)

def onnx_infer(text):
    """
    Perform inference on the input text using the exported ONNX model.
    Returns the predicted emotion label.
    """
    # Tokenize the input text with a fixed maximum sequence length matching the model export.
    inputs = tokenizer(
        text,
        return_tensors="np",
        truncation=True,
        padding="max_length",
        max_length=256,
    )

    # Prepare the model inputs.
    ort_inputs = {
        "input_ids": inputs["input_ids"],
        "attention_mask": inputs["attention_mask"],
    }

    # Run the model.
    outputs = session.run(None, ort_inputs)
    logits = outputs[0]

    # Get the predicted class id.
    predicted_class_id = int(np.argmax(logits, axis=-1)[0])

    # Map the predicted class id to its emotion label (config keys may be int or str).
    predicted_label = id2label.get(predicted_class_id, id2label.get(str(predicted_class_id), str(predicted_class_id)))

    print("Predicted Emotion ID:", predicted_class_id)
    print("Predicted Emotion:", predicted_label)
    return predicted_label

# Test the inference function.
onnx_infer("That rude customer made me furious.")
```
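### Loading with Transformers (PyTorch)

Since the model card above states the model is also available in PyTorch format, the sketch below shows a plain `transformers` load from the same repository. This assumes the PyTorch weights are hosted alongside `model.onnx`; if only the ONNX file is present, use the ONNX Runtime path above instead.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "iimran/EmotionDetection"

# Load the tokenizer and (assumed) PyTorch weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "That rude customer made me furious."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = int(logits.argmax(dim=-1)[0])
print("Predicted Emotion:", model.config.id2label.get(predicted_class_id, str(predicted_class_id)))
```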
## Evaluation

The model is primarily evaluated using the accuracy metric during training. For deployment, further evaluation on unseen data is recommended to ensure robustness in production settings.
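
As a quick sanity check, accuracy can be estimated by reusing the `onnx_infer` function defined above on a handful of labelled examples. The texts and expected labels below are illustrative only; a proper evaluation should use a held-out split of ma2za/many_emotions.

```python
# Illustrative sanity check reusing the onnx_infer function defined earlier.
samples = [
    ("That rude customer made me furious.", "anger"),
    ("I can't wait to see you tomorrow!", "joy"),
    ("I'm terrified of what might happen next.", "fear"),
]

correct = sum(onnx_infer(text) == expected for text, expected in samples)
print(f"Sample accuracy: {correct / len(samples):.2f}")
```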