gautamnancy committed 6e35907 (verified) · 1 parent: c5149b4

Upload 7 files
README.md ADDED

# Sarcasm Detection with BERT

This repository contains a BERT model fine-tuned to detect sarcasm in headlines and similar short text. It reaches 93.86% accuracy on the held-out evaluation split when distinguishing sarcastic from non-sarcastic content.

---

## Model Details

- **Model Name:** BERT-Base-Uncased Fine-tuned for Sarcasm Detection
- **Model Architecture:** BERT Base (110M parameters)
- **Task:** Binary Classification (Sarcastic vs. Non-Sarcastic)
- **Dataset:** Sarcasm Headlines Dataset
- **Quantization:** Float16 (for optimized deployment)
- **Fine-tuning Framework:** Hugging Face Transformers

---

## Dataset

The model was trained on the **Sarcasm Headlines Dataset**, which contains:
- **Total Samples:** 26,709 headlines
- **Features:**
  - `headline`: The text content to classify
  - `is_sarcastic`: Binary label (1 for sarcastic, 0 for non-sarcastic)
- **Train/Test Split:** 90% training, 10% evaluation
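
The exact preparation code from the training notebook is not reproduced in this README; the sketch below shows one way to load the data and produce such a 90/10 split with the `datasets` library already listed under Installation. The file name and the random seed are assumptions.

```python
from datasets import load_dataset

# Assumed file name: the Kaggle download ships the data as a JSON-lines file.
dataset = load_dataset(
    "json",
    data_files="Sarcasm_Headlines_Dataset.json",  # adjust to your local copy
    split="train",
)

# Keep only the two columns the model needs, then make a 90/10 split.
dataset = dataset.remove_columns(
    [c for c in dataset.column_names if c not in ("headline", "is_sarcastic")]
)
splits = dataset.train_test_split(test_size=0.1, seed=42)  # seed is an assumption
train_ds, eval_ds = splits["train"], splits["test"]
print(train_ds.num_rows, eval_ds.num_rows)
```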

---

## Performance Metrics

| Epoch | Training Loss | Validation Loss | Accuracy   |
|-------|---------------|-----------------|------------|
| 1     | 0.2048        | 0.1821          | 92.96%     |
| 2     | 0.1138        | 0.2792          | 91.01%     |
| 3     | 0.0586        | 0.2372          | **93.86%** |

**Final Model Performance:**
- **Best Accuracy:** 93.86%
- **Final Training Loss:** 0.146
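
The accuracy column can be reproduced with the `evaluate` package listed under Installation. The notebook's exact metric code is not shown here, so the following `compute_metrics` hook is a minimal sketch of the usual pattern.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); take the argmax over the two classes.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```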

---

## Installation

```bash
pip install transformers datasets evaluate scikit-learn torch
```

---

## Usage

### Quick Start

```python
from transformers import pipeline

# Load the fine-tuned model and tokenizer from the local model directory
classifier = pipeline(
    "text-classification",
    model="./sarcasm_model",
    tokenizer="./sarcasm_model",
)

# Test examples
test_inputs = [
    "I'm absolutely thrilled to be stuck in traffic again.",
    "The weather is nice and sunny today.",
    "Oh great, another email from the boss with more tasks."
]

for sentence in test_inputs:
    result = classifier(sentence)[0]
    # The checkpoint uses the default LABEL_0 / LABEL_1 names, so map them manually
    label = "Sarcastic" if result["label"] == "LABEL_1" else "Not Sarcastic"
    print(f"'{sentence}' → {label} (Confidence: {result['score']:.2f})")
```
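
Optionally, readable label names can be written into the checkpoint's config so pipelines report them directly instead of `LABEL_0`/`LABEL_1`. This is a small sketch of that extra step; it is not part of the uploaded model.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("./sarcasm_model")
# Store readable names in the config, then overwrite the saved checkpoint.
model.config.id2label = {0: "Not Sarcastic", 1: "Sarcastic"}
model.config.label2id = {"Not Sarcastic": 0, "Sarcastic": 1}
model.save_pretrained("./sarcasm_model")
```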

### Manual Model Loading

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("./sarcasm_model")
tokenizer = AutoTokenizer.from_pretrained("./sarcasm_model")
model.eval()  # disable dropout for inference

# Tokenize input
text = "Oh wonderful, another Monday morning!"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)

# Inference
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = outputs.logits.argmax(dim=1).item()

label_mapping = {0: "Not Sarcastic", 1: "Sarcastic"}
confidence = predictions[0][predicted_class].item()
print(f"Prediction: {label_mapping[predicted_class]} (Confidence: {confidence:.2f})")
```

---

## Training Configuration

### Model Parameters
- **Base Model:** `bert-base-uncased`
- **Number of Labels:** 2 (binary classification)
- **Max Sequence Length:** 128 tokens
- **Tokenization:** WordPiece with padding and truncation
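
A minimal sketch of the matching preprocessing step follows; the function name is illustrative, but the padding, truncation, and length settings mirror the parameters above.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_batch(batch):
    # Pad/truncate every headline to the 128-token limit used for fine-tuning.
    return tokenizer(
        batch["headline"],
        padding="max_length",
        truncation=True,
        max_length=128,
    )

# Applied to the splits from the Dataset sketch:
# train_ds = train_ds.map(tokenize_batch, batched=True)
# eval_ds = eval_ds.map(tokenize_batch, batched=True)
```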
111
+
112
+ ### Training Arguments
113
+ - **Learning Rate:** 2e-5
114
+ - **Batch Size:** 16 (training), 32 (evaluation)
115
+ - **Epochs:** 3
116
+ - **Weight Decay:** 0.01
117
+ - **Evaluation Strategy:** Every epoch
118
+ - **Optimizer:** AdamW (default)
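
The training notebook is not reproduced in this README, so the `TrainingArguments`/`Trainer` setup below is a hedged sketch that mirrors the hyperparameters listed above; the output paths and the label-column rename are assumptions.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

training_args = TrainingArguments(
    output_dir="./sarcasm_model",      # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    weight_decay=0.01,
    eval_strategy="epoch",             # named evaluation_strategy on older transformers releases
    logging_dir="./logs",
)

# The Trainer expects the label column to be called "labels":
# train_ds = train_ds.rename_column("is_sarcastic", "labels")
# eval_ds = eval_ds.rename_column("is_sarcastic", "labels")

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,            # tokenized splits from the sketches above
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,   # accuracy hook from the Performance Metrics section
)

trainer.train()
```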

### Hardware Requirements
- **GPU:** NVIDIA Tesla T4 (or equivalent)
- **Memory:** ~4 GB of GPU memory for training
- **Training Time:** ~18 minutes for 3 epochs

---

## Model Architecture

The model uses BERT's transformer architecture with:
- **Encoder Layers:** 12
- **Attention Heads:** 12
- **Hidden Size:** 768
- **Vocabulary Size:** 30,522
- **Classification Head:** Linear layer (768 → 2)
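
These figures can be checked directly against the uploaded checkpoint, for example:

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained("./sarcasm_model")
print(config.num_hidden_layers, config.num_attention_heads,
      config.hidden_size, config.vocab_size)  # 12 12 768 30522

model = AutoModelForSequenceClassification.from_pretrained("./sarcasm_model")
# Total parameter count: BERT Base (~110M) plus the small 768 -> 2 classification head.
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```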

---

## File Structure

```
sarcasm-detection/
├── sarcasm_model/              # Main fine-tuned model
│   ├── config.json
│   ├── model.safetensors
│   ├── tokenizer_config.json
│   ├── special_tokens_map.json
│   ├── vocab.txt
│   └── tokenizer.json
├── quantized-model/            # Float16 quantized version
│   ├── config.json
│   ├── model.safetensors
│   └── tokenizer files...
├── logs/                       # Training logs
├── sarcasm-detection.ipynb     # Training notebook
└── README.md                   # This file
```

---

## Quantization

A quantized version of the model is available for deployment optimization:

```python
from transformers import AutoModelForSequenceClassification
import torch

# Load the Float16 copy and make sure its weights stay in half precision
quantized_model = AutoModelForSequenceClassification.from_pretrained("./quantized-model")
quantized_model = quantized_model.to(dtype=torch.float16)
```

**Benefits of Quantization:**
- **Reduced Memory Usage:** ~50% smaller model size
- **Faster Inference:** Improved speed on compatible hardware
- **Minimal Accuracy Loss:** Maintains classification performance
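
For reference, a Float16 copy like the one in `quantized-model/` can be produced from the full-precision checkpoint roughly as follows; this is a sketch, not the notebook's exact export step.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained("./sarcasm_model")
tokenizer = AutoTokenizer.from_pretrained("./sarcasm_model")

# Cast every weight to half precision and save it next to the tokenizer files.
model = model.to(dtype=torch.float16)
model.save_pretrained("./quantized-model")
tokenizer.save_pretrained("./quantized-model")
```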

---

## Limitations

- **Domain Specificity:** Trained primarily on headlines; may not generalize perfectly to other text types
- **Context Dependency:** Sarcasm detection can be highly context-dependent and subjective
- **Cultural Nuances:** May not capture sarcasm patterns from different cultural contexts
- **Short Text Focus:** Optimized for headline-length text (typically under 128 tokens)

---

## Potential Improvements

- **Data Augmentation:** Include more diverse sarcasm examples
- **Ensemble Methods:** Combine multiple models for better accuracy
- **Context Integration:** Incorporate additional context beyond the headline
- **Multi-language Support:** Extend to other languages
- **Real-time Processing:** Optimize for streaming applications

---

## Applications

- **Social Media Monitoring:** Detect sarcastic comments and posts
- **Content Moderation:** Identify potentially misleading sarcastic content
- **Sentiment Analysis Enhancement:** Improve sentiment classification accuracy
- **News Analysis:** Analyze editorial tone and bias in headlines
- **Customer Feedback:** Better understand customer sentiment in reviews

---

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{sarcasm_detection_bert,
  title={BERT-based Sarcasm Detection for Headlines},
  author={Your Name},
  year={2025},
  note={Fine-tuned BERT model for binary sarcasm classification}
}
```

---

## Contributing

Contributions are welcome! Please feel free to:
- Report bugs or issues
- Suggest improvements
- Add new features
- Improve documentation

---

## License

This project is licensed under the MIT License. The underlying BERT model is released under Google's Apache 2.0 license.

---

## Acknowledgments

- **Hugging Face** for the Transformers library
- **Google Research** for the original BERT model
- **Kaggle** for providing the Sarcasm Headlines Dataset
- **PyTorch** for the deep learning framework
config (1).json ADDED

```json
{
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "problem_type": "single_label_classification",
  "torch_dtype": "float16",
  "transformers_version": "4.51.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
```
model (2).safetensors ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:bbf7b49382497d92d46b78336d6237fe51bea9d02e127b473a0bb681f9568363
size 249318428
```
special_tokens_map (1).json ADDED

```json
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
```
tokenizer (1).json ADDED
(File contents not shown: too large to render in the diff view.)
tokenizer_config (1).json ADDED

```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
```
vocab (1).txt ADDED
(File contents not shown: too large to render in the diff view.)