llepogam commited on
Commit
ce9d6b8
·
1 Parent(s): da2e635
Files changed (1) hide show
  1. app.py +6 -3
app.py CHANGED
@@ -127,13 +127,13 @@ st.title("🚫 Offensive Speech Detection")
127
  st.markdown("""
128
  This application helps identify potentially offensive content in text provided by a user.
129
 
130
- It uses a machine learning model to analyze text and determine if it contains offensive speech.
131
 
132
 
133
  **How it works:**
134
  1. Enter your text in the input box below
135
- 2. The model will analyze the content and provide a prediction
136
- 3. Results show both the classification and confidence level
137
  """)
138
 
139
 
@@ -160,6 +160,9 @@ with st.expander("❓ Frequently Asked Questions"):
160
  **Q: What is considered offensive speech?**
161
  - A: The model was trained on a dataset of tweets that were tagged as offensive or not. More information on the dataset can be found here: https://huggingface.co/datasets/christophsonntag/OLID
162
 
163
  **Q: How is the prediction done?**
164
  - A: The model predicts a value between 0 and 1. The closer it is to 1, the more offensive the text is predicted to be. When the prediction is higher than 0.5, the text is considered offensive.
165
 
 
127
  st.markdown("""
128
  This application helps identify potentially offensive content in text provided by a user.
129
 
130
+ It uses a trained neural network to analyze text and determine if it contains offensive speech.
131
 
132
 
133
  **How it works:**
134
  1. Enter your text in the input box below
135
+ 2. The model will analyze the content and provide a prediction
136
+ 3. Results show both the classification and the value predicted by the model
137
  """)
138
 
139
 
 
160
  **Q: What is considered offensive speech?**
161
  - A: The model was trained on a dataset of tweets that were tagged as offensive or not. More information on the dataset can be found here: https://huggingface.co/datasets/christophsonntag/OLID
162
 
163
+ **Q: What type of model is it?**
164
+ - A: It is a neural network with an initial preprocessing step, a text vectorization layer, an embedding layer, and GRU layers
165
+
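The architecture named in the answer above (vectorization → embedding → GRU → a single score) could be sketched roughly like this in Keras; the layer sizes, vocabulary, and function name here are assumptions for illustration, not the actual model in this commit:

```python
# Hypothetical sketch of a text -> vectorization -> embedding -> GRU classifier.
# All hyperparameters (max_tokens, seq_len, embed_dim, GRU units) are assumed.
import tensorflow as tf

def build_model(max_tokens=10_000, seq_len=50, embed_dim=64):
    vectorize = tf.keras.layers.TextVectorization(
        max_tokens=max_tokens, output_sequence_length=seq_len)
    # The vectorizer needs a vocabulary; normally adapted on the training corpus.
    vectorize.adapt(["example offensive text", "example neutral text"])
    return tf.keras.Sequential([
        tf.keras.Input(shape=(1,), dtype=tf.string),
        vectorize,                                       # raw text -> token ids
        tf.keras.layers.Embedding(max_tokens, embed_dim),
        tf.keras.layers.GRU(32),                         # GRU layer per the FAQ
        tf.keras.layers.Dense(1, activation="sigmoid"),  # score in [0, 1]
    ])
```

The final sigmoid is what constrains the output to the 0-to-1 range described in the FAQ.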
166
  **Q: How is the prediction done?**
167
  - A: The model predicts a value between 1 and 0. The closer it is to 1, the more offensive is the prediction. When the prediction is higher than 0.5, the text is considered as offensive
168
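The 0.5 decision rule from the prediction FAQ above can be sketched as a small helper (the function name and labels are illustrative, not taken from the app):

```python
def classify(score: float, threshold: float = 0.5) -> str:
    """Map the model's raw output in [0, 1] to a label.

    The 0.5 threshold follows the FAQ: scores above it count as offensive.
    """
    return "offensive" if score > threshold else "not offensive"

print(classify(0.87))  # prints "offensive"
print(classify(0.12))  # prints "not offensive"
```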