---
title: Toxic Eye
emoji: π
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.23.2
app_file: app.py
pinned: false
---
# Toxic Eye: Multi-Model Toxicity Evaluation Platform

## Overview
Toxic Eye evaluates text toxicity with multiple models at once. It combines generative language models, which reason about a text in context, with dedicated toxicity classifiers, which score it directly, so the two kinds of analysis can be compared side by side.
## Features

### 1. Text Generation Models

Our platform utilizes four large language models (a prompting sketch follows this list):
- Zephyr-7B: Specialized in understanding context and nuance
- Llama-2: Known for its robust performance in content analysis
- Mistral-7B: Offers precise and detailed text evaluation
- Claude-2: Provides comprehensive toxicity assessment
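
As a rough illustration, here is how one of these models could be prompted for a toxicity judgment with the `transformers` library. The model ID, prompt wording, and generation settings are assumptions for the sketch, not the Space's actual configuration.

```python
# Hedged sketch: asking a generative model for a toxicity judgment.
# The model ID and prompt format are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",  # stand-in for the Zephyr-7B entry above
)

def llm_toxicity_opinion(text: str) -> str:
    """Prompt the model to rate and explain the toxicity of `text`."""
    prompt = (
        "Rate the toxicity of the following text on a scale of 0-10 "
        f"and briefly explain your rating:\n\n{text}\n\nRating:"
    )
    result = generator(prompt, max_new_tokens=100, do_sample=False)
    return result[0]["generated_text"]
```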
### 2. Classification Models

We employ four specialized classification models (a minimal example follows this list):
- Toxic-BERT: Fine-tuned for toxic content detection
- RoBERTa-Toxic: Advanced toxic pattern recognition
- DistilBERT-Toxic: Efficient toxicity classification
- XLM-RoBERTa-Toxic: Multilingual toxicity detection
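
A single classifier can be run with the `transformers` pipeline API. The checkpoint `unitary/toxic-bert` is a public Toxic-BERT model used here as a stand-in; the Space's actual checkpoints may differ.

```python
# Minimal sketch of one toxicity classifier; the checkpoint is a stand-in.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="unitary/toxic-bert",  # public Toxic-BERT checkpoint
    top_k=None,                  # return a score for every label
)

def classify_toxicity(text: str) -> dict:
    """Map each toxicity label (toxic, insult, threat, ...) to its score."""
    scores = classifier(text)[0]
    return {item["label"]: item["score"] for item in scores}
```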
### 3. Community Integration

Access to community insights and discussions about similar content patterns and toxicity analysis.
## Technical Details

### Model Architecture

Each model in our platform is carefully selected to provide complementary analysis:
```python
def analyze_toxicity(text):
    """Run the text through every model and merge the results.

    The four helpers below are defined elsewhere in app.py.
    """
    llm_results = text_generation_models(text)           # judgments from the four LLMs
    classification_results = toxicity_classifiers(text)  # scores from the four classifiers
    community_insights = fetch_community_data(text)      # related community discussions
    return combined_analysis(llm_results, classification_results, community_insights)
```
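
The aggregation step is not shown above. One plausible shape for it, assuming the classifiers return a label-to-score mapping, is sketched below; the real `combined_analysis` in `app.py` may weight or present the signals differently.

```python
# Hypothetical aggregation; assumes `classification_results` maps labels to
# probabilities and `llm_results` holds the LLMs' free-text judgments.
def combined_analysis(llm_results, classification_results, community_insights):
    """Merge classifier scores, LLM opinions, and community context."""
    avg_score = sum(classification_results.values()) / len(classification_results)
    return {
        "toxicity_score": avg_score,      # mean classifier probability
        "model_opinions": llm_results,    # qualitative LLM assessments
        "community": community_insights,  # related discussions, if any
    }
```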
### Performance Considerations

- Real-time analysis of submitted text
- Parallel evaluation across models (sketched below)
- Optimized response generation
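
One simple way to get the parallel fan-out described above is a thread pool. This is a sketch under the assumption that each model is wrapped in a callable, not a description of the Space's actual scheduler.

```python
# Hedged sketch: evaluate the text with every model concurrently.
from concurrent.futures import ThreadPoolExecutor

def run_models_in_parallel(text: str, model_fns: list) -> list:
    """Submit `text` to each model function and collect the results in order."""
    with ThreadPoolExecutor(max_workers=len(model_fns)) as pool:
        futures = [pool.submit(fn, text) for fn in model_fns]
        return [future.result() for future in futures]
```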
## Usage Guidelines

1. Enter the text you want to analyze in the input box.
2. Review the results from each model.
3. Compare the different models' perspectives.
4. Check the community insights for context (a minimal app sketch follows).
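
To make the flow concrete, here is a minimal way the analysis could be wired into a Gradio UI in `app.py`. It reuses the hypothetical `analyze_toxicity` sketch above; the actual app will differ.

```python
# Minimal, assumed wiring of the analysis into a Gradio interface.
import gradio as gr

demo = gr.Interface(
    fn=analyze_toxicity,  # from the Model Architecture sketch above
    inputs=gr.Textbox(lines=4, label="Text to analyze"),
    outputs=gr.JSON(label="Multi-model toxicity report"),
    title="Toxic Eye",
)

if __name__ == "__main__":
    demo.launch()
```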
## Citation
If you use this platform in your research, please cite:
```bibtex
@software{toxic_eye,
  title     = {Toxic Eye: Multi-Model Toxicity Evaluation Platform},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/spaces/[your-username]/toxic-eye}
}
```