Update README.md
Pythia Quantized Model for Sentiment Analysis
=============================================

This repository hosts a quantized version of the Pythia model, fine-tuned for sentiment analysis. The model has been optimized for efficient deployment while keeping accuracy close to the full-precision baseline, making it suitable for resource-constrained environments.

Model Details
-------------

* **Developed By:** AventIQ-AI
* **Model Architecture:** Pythia-410m
* **Task:** Sentiment Analysis
* **Dataset:** IMDb Reviews
* **Quantization:** Float16
* **Fine-tuning Framework:** Hugging Face Transformers

The quantized model achieves comparable performance to the full-precision model while reducing memory usage and inference time.

Usage
-----

### Installation

```bash
pip install transformers torch
```

### Loading the Model

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AventIQ-AI/pythia-410m")
model = AutoModelForSequenceClassification.from_pretrained("AventIQ-AI/pythia-410m")
model.eval()  # inference mode: disables dropout

# Example usage
text = "This product is amazing!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():  # no gradients needed for inference
    outputs = model(**inputs)
logits = outputs.logits
predicted_class = logits.argmax(dim=-1).item()
print("Predicted class:", predicted_class)
```
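
To report a human-readable label with a confidence score instead of a raw class index, you can extend the snippet above as follows. This assumes the checkpoint's `id2label` mapping is populated; if it is not, Transformers falls back to generic `LABEL_0`/`LABEL_1` names:

```python
import torch.nn.functional as F

# Continuing from the snippet above: turn logits into probabilities.
probs = F.softmax(logits, dim=-1)
label = model.config.id2label[predicted_class]
confidence = probs[0, predicted_class].item()
print(f"{label} ({confidence:.1%})")
```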

Performance Metrics
-------------------

* **Accuracy:** 0.56
* **F1 Score:** 0.56
* **Precision:** 0.68
* **Recall:** 0.56
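
Metrics like these can be computed with scikit-learn. The sketch below is illustrative only: the `predict` helper reuses the tokenizer and model from the Usage section, the example texts and labels are placeholders, and the averaging mode actually used for the reported numbers is not stated in this README:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def predict(text: str) -> int:
    """Return the predicted class index for one text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()

texts = ["A wonderful film.", "A complete waste of time."]  # placeholder examples
labels = [1, 0]                                             # placeholder ground truth
preds = [predict(t) for t in texts]

print("Accuracy: ", accuracy_score(labels, preds))
print("F1:       ", f1_score(labels, preds, average="weighted"))
print("Precision:", precision_score(labels, preds, average="weighted"))
print("Recall:   ", recall_score(labels, preds, average="weighted"))
```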

Fine-Tuning Details
-------------------

### Dataset

The IMDb Reviews dataset was used, containing both positive and negative sentiment examples.

### Training

* Number of epochs: 3
* Batch size: 8
* Evaluation strategy: epoch
* Learning rate: 2e-5
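
For reference, a Hugging Face `Trainer` setup matching the hyperparameters above might look like the following. This is a sketch, not the actual training script (which is not included in this repository); `train_dataset` and `eval_dataset` stand in for tokenized IMDb splits:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # illustrative output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    evaluation_strategy="epoch",       # evaluate at the end of each epoch
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,                       # the Pythia-410m classification model
    args=training_args,
    train_dataset=train_dataset,       # tokenized IMDb train split (assumed)
    eval_dataset=eval_dataset,         # tokenized IMDb test split (assumed)
)
trainer.train()
```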

### Quantization

Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
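
Since the model card lists Float16 quantization, the conversion likely amounts to casting the fine-tuned weights to half precision and re-saving the checkpoint. A minimal sketch (the repository's actual quantization script is not shown, and the output path is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification

# Cast the fine-tuned model to float16 and save the smaller checkpoint.
model = AutoModelForSequenceClassification.from_pretrained("AventIQ-AI/pythia-410m")
model = model.half()                       # FP32 -> FP16 weights
model.save_pretrained("pythia-410m-fp16")  # illustrative output directory
```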

Repository Structure
--------------------

```
.
├── model/               # Contains the quantized model files
├── tokenizer/           # Tokenizer configuration and vocabulary files
├── model.safetensors    # Fine-tuned model weights
├── README.md            # Model documentation
└── LICENSE              # License for the repository
```

Limitations
-----------

* The model may not generalize well to domains outside the fine-tuning dataset.
* Quantization may result in minor accuracy degradation compared to full-precision models.

License
-------

This project is licensed under the Apache License 2.0. See the LICENSE file for more details.

Contributing
------------

Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.