---
license: apache-2.0
title: hate-speech-classifier
sdk: gradio
app_file: app.py
emoji: 🚀
---

# ToxiFilter: AI-Based Hate Speech Classifier 🚫

A fine-tuned BERT model that detects hate speech and offensive language in real-time messages. Developed as part of a capstone project, it powers a Gradio-based chat simulation where offensive content is automatically censored.
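The Space's `app.py` is not reproduced in this README, but a minimal sketch of such a chat filter built with Gradio and the 🤗 `transformers` pipeline could look like the following (the function name, label check, and censoring message are illustrative assumptions, not the project's actual code):

```python
# Minimal sketch of a Gradio chat filter (illustrative; not the Space's actual app.py).
import gradio as gr
from transformers import pipeline

# Load the fine-tuned classifier; the label names returned depend on the model's config.
clf = pipeline("text-classification", model="chaitravi/hate-speech-classifier")

def send_message(message: str) -> str:
    # Replace the message with a placeholder when the classifier flags it as offensive.
    label = clf(message)[0]["label"]
    if label in ("LABEL_1", "Hate/Offensive"):
        return "*** message removed: offensive content ***"
    return message

demo = gr.Interface(fn=send_message, inputs="text", outputs="text",
                    title="ToxiFilter chat simulation")

if __name__ == "__main__":
    demo.launch()
```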

πŸ” Model Info

  • Base: bert-base-uncased
  • Task: Binary Classification
  • Labels:
    • 1: Hate/Offensive
    • 0: Not Offensive
  • Accuracy: ~92%
  • Dataset: tdavidson/hate_speech_offensive
    (Labels 0 and 1 combined as "offensive")
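Before fine-tuning, the dataset's three-way labels (0 = hate speech, 1 = offensive language, 2 = neither) were collapsed into the binary scheme above. A minimal sketch of that remapping with the 🤗 `datasets` library (the `tweet` and `class` column names are taken from the dataset card; the exact preprocessing used for this model is not shown here):

```python
# Sketch of the label remapping described above.
# Assumes the dataset's "class" column (0 = hate speech, 1 = offensive language,
# 2 = neither) and "tweet" text column, as documented on the dataset card.
from datasets import load_dataset

ds = load_dataset("tdavidson/hate_speech_offensive", split="train")

def to_binary(example):
    # Classes 0 and 1 both become 1 (Hate/Offensive); class 2 becomes 0 (Not Offensive).
    example["label"] = 1 if example["class"] in (0, 1) else 0
    return example

ds = ds.map(to_binary)
```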

## 🛠 Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned classifier and its tokenizer from the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained("chaitravi/hate-speech-classifier")
tokenizer = AutoTokenizer.from_pretrained("chaitravi/hate-speech-classifier")
model.eval()

def classify(text):
    # Tokenize the message and run a single forward pass without gradients
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    pred = torch.argmax(outputs.logits, dim=1).item()
    return "Hate/Offensive" if pred == 1 else "Not Offensive"
```