---
license: apache-2.0
title: hate-speech-classifier
sdk: gradio
app_file: app.py
emoji: 🚫
---
# ToxiFilter: AI-Based Hate Speech Classifier 🚫
A fine-tuned BERT model that detects hate speech and offensive language in chat messages in real time. Developed as part of a capstone project, it powers a Gradio-based chat simulation in which offensive content is automatically censored.
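The censoring flow itself is simple: classify each message and swap out anything flagged. Below is a minimal sketch of that idea, assuming the `classify` helper defined under Usage; the actual `app.py` may be structured differently.

```python
import gradio as gr

def moderate(message: str) -> str:
    # Replace flagged messages with a placeholder instead of displaying them.
    if classify(message) == "Hate/Offensive":
        return "*** message removed ***"
    return message

# A single-turn demo; the real Space wraps this in a chat simulation.
demo = gr.Interface(fn=moderate, inputs="text", outputs="text")
demo.launch()
```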
## Model Info
- **Base**: `bert-base-uncased`
- **Task**: Binary classification
- **Labels**:
  - `1`: Hate/Offensive
  - `0`: Not Offensive
- **Accuracy**: ~92%
- **Dataset**: [tdavidson/hate_speech_offensive](https://huggingface.co/datasets/tdavidson/hate_speech_offensive), with labels 0 and 1 merged into a single "offensive" class (see the preprocessing sketch below)
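The source dataset is three-way labeled, so the binary labels above come from merging two of its classes. A hypothetical sketch of that preprocessing, assuming the `class` column as documented on the dataset card (not the project's actual training script):

```python
from datasets import load_dataset

ds = load_dataset("tdavidson/hate_speech_offensive", split="train")

# In the source data, `class` is 0 (hate speech), 1 (offensive language),
# 2 (neither); map 0 and 1 to the positive class and 2 to the negative class.
ds = ds.map(lambda ex: {"label": 1 if ex["class"] in (0, 1) else 0})
```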
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("chaitravi/hate-speech-classifier")
tokenizer = AutoTokenizer.from_pretrained("chaitravi/hate-speech-classifier")
model.eval()  # inference mode: disables dropout

def classify(text):
    # Tokenize the input and run a single forward pass without gradients.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    pred = torch.argmax(outputs.logits, dim=1).item()
    return "Hate/Offensive" if pred == 1 else "Not Offensive"
```