chaitravi committed · verified
Commit 1b1f94c · 1 parent: a5baca0

Update README.md

Files changed (1): README.md (+29 -12)
README.md CHANGED
@@ -1,12 +1,29 @@
- ---
- title: Hate Speech Classifier
- emoji: 🐒
- colorFrom: purple
- colorTo: blue
- sdk: gradio
- sdk_version: 5.23.1
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # ToxiFilter: AI-Based Hate Speech Classifier 🚫
+
+ A fine-tuned BERT model that detects hate speech and offensive language in real-time messages. Developed as part of a capstone project, it powers a Gradio-based chat simulation in which offensive content is automatically censored (a minimal sketch of that wiring appears at the end of this README).
+
+ ## 🔍 Model Info
+
+ - **Base**: `bert-base-uncased`
+ - **Task**: Binary Classification
+ - **Labels**:
+   - `1`: Hate/Offensive
+   - `0`: Not Offensive
+ - **Accuracy**: ~92%
+ - **Dataset**: [tdavidson/hate_speech_offensive](https://huggingface.co/datasets/tdavidson/hate_speech_offensive)
+   (the dataset's original classes 0 "hate speech" and 1 "offensive language" are combined into the single "offensive" label; see the sketch below)
+
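+ The binarization step is not included in this repo; as a minimal sketch of how it might look with the `datasets` library (assuming the dataset's `class` column, where 0 = hate speech, 1 = offensive language, 2 = neither):
+
+ ```python
+ from datasets import load_dataset
+
+ # Original classes: 0 = hate speech, 1 = offensive language, 2 = neither.
+ ds = load_dataset("tdavidson/hate_speech_offensive", split="train")
+
+ def to_binary(example):
+     # Collapse classes 0 and 1 into label 1 ("offensive"); class 2 becomes 0.
+     example["label"] = 1 if example["class"] in (0, 1) else 0
+     return example
+
+ ds = ds.map(to_binary)
+ ```
+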
+ ## 🛠 Usage
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import torch
+
+ # Load the fine-tuned checkpoint and its tokenizer from the Hub
+ model = AutoModelForSequenceClassification.from_pretrained("chaitravi/hate-speech-classifier")
+ tokenizer = AutoTokenizer.from_pretrained("chaitravi/hate-speech-classifier")
+ model.eval()
+
+ def classify(text):
+     inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
+     with torch.no_grad():  # inference only, no gradients needed
+         outputs = model(**inputs)
+     pred = torch.argmax(outputs.logits, dim=1).item()
+     return "Hate/Offensive" if pred == 1 else "Not Offensive"
+
+ print(classify("have a great day"))
+ ```
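+
+ The chat simulation itself lives in `app.py` and is not reproduced here. As a rough sketch (not the Space's actual code), a minimal Gradio wrapper that censors messages flagged by `classify` from above could look like:
+
+ ```python
+ import gradio as gr
+
+ def censor(message):
+     # Replace messages flagged by the classifier with a placeholder;
+     # pass everything else through unchanged.
+     if classify(message) == "Hate/Offensive":
+         return "*** message removed ***"
+     return message
+
+ demo = gr.Interface(fn=censor, inputs="text", outputs="text", title="ToxiFilter")
+ demo.launch()
+ ```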