---
title: Emoji Offensive Agent
emoji: 🧠
colorFrom: pink
colorTo: yellow
sdk: streamlit
app_file: app.py
pinned: false
license: apache-2.0
short_description: Detects offensive emoji-based social media content.
---

# 🧠 Emoji Offensive Agent

This AI agent aims to detect hate speech or offensive content in social media texts, especially those that include **emojis**, **homophones**, or **subculture wordplay**.

It combines two pipelines:

1. **Emoji Translator** – A fine-tuned `Qwen1.5-7B-Chat` model that converts emoji expressions into plain Chinese text.
2. **Hate Speech Classifier** – Uses a large pre-trained classifier (e.g., `roberta-offensive-language-detection`) to determine whether the final text is offensive (label `1`) or not (label `0`).
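The two stages above compose into a simple translate-then-classify flow. The sketch below shows that composition with stub callables standing in for the real models (the function names and toy rules are illustrative assumptions, not the repo's actual `agent.py` API):

```python
# Sketch of the two-stage pipeline: emoji translation, then classification.
# `translate_emoji` and `classify` are injected so heavy models stay pluggable.

def detect_offensive(text, translate_emoji, classify):
    """Run emoji-to-text translation, then hate-speech classification.

    translate_emoji: callable mapping emoji-laden text to plain text
                     (in the real Space, a fine-tuned Qwen1.5-7B-Chat call).
    classify:        callable returning 0 (not offensive) or 1 (offensive).
    """
    plain = translate_emoji(text)
    label = classify(plain)
    return {"translated": plain, "label": label}

# Stub components stand in for the real models so the flow is visible:
stub_translate = lambda t: t.replace("🐷", "猪")   # emoji -> Chinese word
stub_classify = lambda t: 1 if "猪" in t else 0    # toy keyword rule

result = detect_offensive("你是 🐷", stub_translate, stub_classify)
print(result)  # {'translated': '你是 猪', 'label': 1}
```

In the actual Space, the stubs would be replaced by the Qwen generation call and the `roberta-offensive-language-detection` classifier.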

---

## 📦 Project Files

- `app.py`: The Streamlit app entry point.
- `agent.py`: The core logic for emoji translation + classification pipeline.
- `README.md`: You're reading it!
- `.gitattributes`: LFS file config (for large model files, if any).

---

## 🚀 Deployment

This Space uses **Streamlit** as its frontend framework. The backend pipeline automatically performs:
1. Emoji-to-text translation
2. Hate-speech classification on the translated text

You can deploy it on Hugging Face Spaces by making sure:

- The `app.py` file exists in the root directory.
- This README is properly parsed (with valid YAML at the top).
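A minimal `app.py` for a Space like this could look as follows. This is a hypothetical skeleton, not the repo's actual file: `run_pipeline` is a toy stand-in for the real translate-then-classify logic, and the Streamlit import is guarded so the sketch also runs where Streamlit is not installed:

```python
try:
    import streamlit as st
except ImportError:  # lets the sketch run without Streamlit installed
    st = None


def run_pipeline(text: str) -> tuple[str, int]:
    """Stand-in for agent.py's translate-then-classify pipeline.

    The real Space would call the fine-tuned Qwen translator and the
    offensive-language classifier here; a toy rule keeps the sketch runnable.
    """
    translated = text.replace("🐷", "猪")   # emoji -> Chinese word
    label = 1 if "猪" in translated else 0  # toy keyword classifier
    return translated, label


if st is not None:
    st.title("🧠 Emoji Offensive Agent")
    user_text = st.text_input("Enter social media text (emoji welcome):")
    if user_text:
        translated, label = run_pipeline(user_text)
        st.write(f"Intermediate translation: {translated}")
        st.write("Offensive 🔴 (label: 1)" if label == 1
                 else "Not offensive 🟢 (label: 0)")
```

Launch locally with `streamlit run app.py`; on Spaces, the `sdk: streamlit` and `app_file: app.py` frontmatter keys above handle startup.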

---

## 📍Example

Input:  
`你是 🐷` (literally "You are 🐷")

Intermediate translation:  
`你是猪鼻子` ("You are a pig nose")

Final classification:  
**Offensive** 🔴 (`label: 1`)

---

Made with ❤️ by JenniferHJF