yashmarathe committed · Commit 86368a9 · verified · 1 Parent(s): 1b1f94c

Update README.md

Files changed (1): README.md (+8 −1)
README.md CHANGED
@@ -1,3 +1,10 @@
+---
+license: apache-2.0
+title: hate-speech-classifier
+sdk: gradio
+app_file: app.py
+emoji: 🚀
+---
 # ToxiFilter: AI-Based Hate Speech Classifier 🚫
 
 A fine-tuned BERT model that detects hate speech and offensive language in real-time messages. Developed as part of a capstone project, it powers a Gradio-based chat simulation in which offensive content is automatically censored.
@@ -26,4 +33,4 @@ def classify(text):
     inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
     outputs = model(**inputs)
     pred = torch.argmax(outputs.logits, dim=1).item()
-    return "Hate/Offensive" if pred == 1 else "Not Offensive"
+    return "Hate/Offensive" if pred == 1 else "Not Offensive"
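
The `classify` snippet in the diff is only a fragment. A self-contained sketch of the same logic, with the tokenizer and model passed in explicitly; in the real app these would be loaded via `transformers` from the fine-tuned checkpoint, whose name is not part of this commit, so loading is left as an assumption in the comments:

```python
import torch

# Sketch of the README's classify() helper. In the actual Space the
# tokenizer and model would come from transformers, e.g.
#   AutoTokenizer.from_pretrained(...) and
#   AutoModelForSequenceClassification.from_pretrained(...)
# pointed at the fine-tuned ToxiFilter checkpoint (not named in this commit).
def classify(text, tokenizer, model):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():  # inference only, no gradients needed
        outputs = model(**inputs)
    # Label index 1 is assumed to be the hate/offensive class, per the README.
    pred = torch.argmax(outputs.logits, dim=1).item()
    return "Hate/Offensive" if pred == 1 else "Not Offensive"
```

Wrapping the forward pass in `torch.no_grad()` avoids building the autograd graph, which matters when classifying every message of a live chat.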
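
The README also describes a chat simulation in which flagged messages are censored. A minimal sketch of that wrapper logic, using a hypothetical keyword-based stand-in for the classifier (both the stub and the masking style are assumptions, not taken from the commit):

```python
def censor(message, classify):
    """Return the message unchanged unless the classifier flags it."""
    if classify(message) == "Hate/Offensive":
        return "*" * len(message)  # mask flagged content, preserving length
    return message

# Stub classifier standing in for the fine-tuned BERT model (illustration only).
def stub_classify(text):
    return "Hate/Offensive" if "stupid" in text.lower() else "Not Offensive"
```

In the actual Space this check would sit inside the Gradio event callback, with `classify` backed by the BERT model rather than the stub.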