Spaces:
Runtime error
Update README.md
README.md CHANGED
@@ -1,3 +1,10 @@
+---
+license: apache-2.0
+title: hate-speech-classifer
+sdk: gradio
+app_file: app.py
+emoji: π
+---
 # ToxiFilter: AI-Based Hate Speech Classifier 🚫

 A fine-tuned BERT model that detects hate speech and offensive language in real-time messages. Developed as part of a capstone project, it powers a Gradio-based chat simulation where offensive content is automatically censored.

@@ -26,4 +33,4 @@ def classify(text):
     inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
     outputs = model(**inputs)
     pred = torch.argmax(outputs.logits, dim=1).item()
-    return "Hate/Offensive" if pred == 1 else "Not Offensive"
+    return "Hate/Offensive" if pred == 1 else "Not Offensive"
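The hunk above only shows the body of `classify()`. For reference, here is a minimal, self-contained sketch of how an `app.py` for this kind of Space could be wired end to end, assuming the fine-tuned BERT checkpoint loads through the standard `transformers` auto classes and is served with a plain `gradio` interface; the model path (`./model`), the label mapping (class 1 = offensive), and the `censor()` helper are illustrative assumptions, not code taken from this repository:

```python
# Hypothetical sketch of app.py for a Space like this one; paths and labels are assumptions.
import torch
import gradio as gr
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_DIR = "./model"  # assumed location of the fine-tuned BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def classify(text):
    # Tokenize, run a forward pass, and take the argmax class (mirrors the README excerpt).
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    pred = torch.argmax(outputs.logits, dim=1).item()
    return "Hate/Offensive" if pred == 1 else "Not Offensive"

def censor(message):
    # Swap flagged messages for a placeholder, as a stand-in for the chat-simulation censoring.
    return "*** message removed ***" if classify(message) == "Hate/Offensive" else message

# Minimal Gradio demo: type a message, get back either the original text or the censored version.
demo = gr.Interface(fn=censor, inputs="text", outputs="text", title="ToxiFilter")

if __name__ == "__main__":
    demo.launch()
```

In this sketch the censoring is simply a placeholder string substituted for flagged messages; the actual Space may present the conversation differently, for example as a `gr.Chatbot` history rather than a single text box.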