---
title: Emoji Offensive Agent
emoji: 🧠
colorFrom: pink
colorTo: yellow
sdk: streamlit
app_file: app.py
pinned: false
license: apache-2.0
short_description: Detects offensive emoji-based social media content.
---
# 🧠 Emoji Offensive Agent

This AI agent detects hate speech and offensive content in social media text, especially posts that rely on **emojis**, **homophones**, or **subculture wordplay**.

It chains two stages:

1. **Emoji Translator** – a fine-tuned `Qwen1.5-7B-Chat` model that converts emoji expressions into Chinese text.
2. **Hate Speech Classifier** – a large pre-trained classifier (e.g., `roberta-offensive-language-detection`) that labels the translated text as offensive (`1`) or not (`0`).
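The two-stage flow can be sketched as below. This is a minimal illustration only: the `detect_offensive` helper and the stub functions are hypothetical stand-ins for the real Qwen translator and RoBERTa classifier, not code from `agent.py`.

```python
def detect_offensive(text, translate, classify):
    """Run emoji translation, then binary offensive-content classification.

    translate: callable mapping emoji-laden text -> plain text
    classify:  callable mapping plain text -> 0 (benign) or 1 (offensive)
    """
    translated = translate(text)
    label = classify(translated)
    return {"translated": translated, "label": label}

# Stub stand-ins so the sketch runs without downloading any models.
def stub_translate(text):
    # Toy rule: map the pig emoji to its Chinese word
    return text.replace("🐷", "猪")

def stub_classify(text):
    # Toy rule, not the real classifier
    return 1 if "猪" in text else 0

result = detect_offensive("你是 🐷", stub_translate, stub_classify)
print(result)  # {'translated': '你是 猪', 'label': 1}
```

In the real pipeline the two callables would wrap the fine-tuned `Qwen1.5-7B-Chat` model and the pre-trained classifier described above; keeping them as injected callables makes each stage swappable and testable in isolation.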
---

## 📦 Project Files

- `app.py`: The Streamlit app entry point.
- `agent.py`: The core logic for the emoji translation + classification pipeline.
- `README.md`: You're reading it!
- `.gitattributes`: Git LFS configuration (for large model files, if any).

---
## 🚀 Deployment

This Space uses **Streamlit** as its frontend framework. The backend pipeline automatically performs:

1. Emoji-to-text translation
2. Hate speech classification

To deploy it on Hugging Face Spaces, make sure that:

- `app.py` exists in the repository root.
- This README begins with valid YAML front matter so it is parsed correctly.

---
## 📍 Example

Input:

`你是 🐷` (literally "you are 🐷")

Intermediate translation:

`你是猪鼻子` ("you are a pig-nose", an insult)

Final classification:

**Offensive** 🔴 (`label: 1`)

---
Made with ❤️ by JenniferHJF | |