---
license: apache-2.0
title: hate-speech-classifier
sdk: gradio
app_file: app.py
emoji: 🚫
---
# ToxiFilter: AI-Based Hate Speech Classifier 🚫
A fine-tuned BERT model that detects hate speech and offensive language in real-time messages. Developed as part of a capstone project, it powers a Gradio-based chat simulation where offensive content is automatically censored.
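A minimal sketch of how such a chat filter can be wired up in Gradio. This is illustrative, not the Space's actual `app.py`; the label names checked below (`LABEL_1`, `Hate/Offensive`) assume the model config's default or custom `id2label` mapping:

```python
import gradio as gr
from transformers import pipeline

# Load the hosted checkpoint via the high-level pipeline API.
clf = pipeline("text-classification", model="chaitravi/hate-speech-classifier")

def censor(message: str) -> str:
    # Assumption: LABEL_1 (or a custom "Hate/Offensive" name) marks offensive text.
    label = clf(message)[0]["label"]
    return "*** censored ***" if label in ("LABEL_1", "Hate/Offensive") else message

demo = gr.Interface(fn=censor, inputs="text", outputs="text",
                    title="ToxiFilter chat simulation")
demo.launch()
```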
## Model Info
- **Base**: `bert-base-uncased`
- **Task**: Binary Classification
- **Labels**:
  - `1`: Hate/Offensive
  - `0`: Not Offensive
- **Accuracy**: ~92%
- **Dataset**: [tdavidson/hate_speech_offensive](https://huggingface.co/datasets/tdavidson/hate_speech_offensive) (classes 0 and 1 combined as "offensive"; see the relabeling sketch below)
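A sketch of how the binary labels above could be derived, assuming the Hugging Face `datasets` library and the dataset's published schema (a `class` column with 0 = hate speech, 1 = offensive language, 2 = neither):

```python
from datasets import load_dataset

ds = load_dataset("tdavidson/hate_speech_offensive", split="train")

def to_binary(example):
    # Merge hate speech (0) and offensive language (1) into 1; "neither" (2) becomes 0.
    example["label"] = 1 if example["class"] in (0, 1) else 0
    return example

ds = ds.map(to_binary)
print(ds[0]["tweet"], ds[0]["label"])
```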
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("chaitravi/hate-speech-classifier")
tokenizer = AutoTokenizer.from_pretrained("chaitravi/hate-speech-classifier")
model.eval()  # inference mode

def classify(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():  # no gradients needed at inference time
        outputs = model(**inputs)
    pred = torch.argmax(outputs.logits, dim=1).item()
    return "Hate/Offensive" if pred == 1 else "Not Offensive"
```
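For example, an illustrative call:

```python
print(classify("Have a wonderful day!"))  # expected: "Not Offensive"
```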