nyasukun committed on
Commit a5ffa85 · Parent(s): 10ba643

name change

Files changed (2):
  1. README.md +7 -7
  2. app.py +1 -1
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
-title: Toxic Total
-emoji: 💬
+title: Toxic Eye
+emoji: 👀
 colorFrom: yellow
 colorTo: purple
 sdk: gradio
@@ -9,10 +9,10 @@ app_file: app.py
 pinned: false
 ---
 
-# Toxic Total: Multi-Model Toxicity Evaluation Platform
+# Toxic Eye: Multi-Model Toxicity Evaluation Platform
 
 ## Overview
-Toxic Total is a comprehensive platform that evaluates text toxicity using multiple language models and classifiers. This platform provides a unique approach by combining both generative and classification models to analyze potentially toxic content.
+Toxic Eye is a comprehensive platform that evaluates text toxicity using multiple language models and classifiers. This platform provides a unique approach by combining both generative and classification models to analyze potentially toxic content.
 
 ## Features
 
@@ -68,10 +68,10 @@ def analyze_toxicity(text):
 
 If you use this platform in your research, please cite:
 ```bibtex
-@software{toxic_total,
-  title = {Toxic Total: Multi-Model Toxicity Evaluation Platform},
+@software{toxic_eye,
+  title = {Toxic Eye: Multi-Model Toxicity Evaluation Platform},
   year = {2024},
   publisher = {Hugging Face},
-  url = {https://huggingface.co/spaces/[your-username]/toxic-total}
+  url = {https://huggingface.co/spaces/[your-username]/toxic-eye}
 }
 ```
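For readers skimming the diff: the hunk header above references the README's `def analyze_toxicity(text):` usage example, whose body is not visible in this commit. A minimal standalone sketch under assumptions — `unitary/toxic-bert` is a stand-in classifier, and the real platform combines several models, none of which are shown here:

```python
from transformers import pipeline

# Stand-in classifier; the models Toxic Eye actually loads are not
# visible in this commit.
toxicity_classifier = pipeline(
    "text-classification", model="unitary/toxic-bert"
)

def analyze_toxicity(text: str) -> dict:
    """Score one text and return the top label with its confidence."""
    result = toxicity_classifier(text)[0]
    return {"label": result["label"], "score": round(result["score"], 4)}

if __name__ == "__main__":
    print(analyze_toxicity("Have a great day!"))
```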
app.py CHANGED

@@ -207,7 +207,7 @@ class UIComponents:
     def create_header(self):
         """Create the header section."""
         return gr.Markdown("""
-        # Toxic Total
+        # Toxic Eye
         This system evaluates the toxicity level of input text using multiple approaches.
         """)
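The renamed header is a plain `gr.Markdown` block. A minimal sketch of how it renders, with the surrounding `UIComponents` layout reduced to a bare `gr.Blocks` (an assumption — the rest of app.py is not part of this diff):

```python
import gradio as gr

def create_header():
    """Create the header section."""
    return gr.Markdown("""
    # Toxic Eye
    This system evaluates the toxicity level of input text using multiple approaches.
    """)

# Bare-bones container standing in for the full UIComponents layout.
with gr.Blocks() as demo:
    create_header()

# demo.launch()  # uncomment to serve the UI locally
```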