Update README.md
README.md
## Model Description
This model is a **fine-tuned version** of `RoBERTa-base-bne`, specifically trained to classify the toxicity level of **Spanish-language user comments on news articles**. It distinguishes between two categories:
- **Non-toxic**
- **Toxic**
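
For quick use, here is a minimal inference sketch with the `transformers` pipeline. The model id and the exact label strings below are placeholders, not values confirmed by this card; substitute this repository's actual Hub id.

```python
from transformers import pipeline

# Placeholder model id: replace with this repository's id on the Hugging Face Hub.
classifier = pipeline("text-classification", model="<this-model-repo-id>")

# Returns a single label per comment; the exact label strings
# (e.g. "Non-toxic"/"Toxic" or "LABEL_0"/"LABEL_1") depend on the model config.
print(classifier("Este comentario es un ejemplo de prueba."))
```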
---
## Training Data
The model was fine-tuned on the **[SocialTOX dataset](https://huggingface.co/datasets/gplsi/SocialTOX)**, a collection of Spanish-language comments annotated for varying levels of toxicity. These comments come from news platforms and represent real-world scenarios of online discourse. In this case, a binary classifier was developed by merging the classes *Slightly toxic* and *Toxic* into a single *Toxic* category.
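
As a rough sketch of that label merge, assuming the Hub version of the dataset exposes a string `label` column with the original three classes (the column and label names here are assumptions, not taken from the dataset card):

```python
from datasets import load_dataset

# Load SocialTOX from the Hugging Face Hub.
ds = load_dataset("gplsi/SocialTOX")

# Assumed schema: a "label" column with values "Non-toxic", "Slightly toxic", "Toxic".
# Fold "Slightly toxic" into "Toxic" to obtain the binary setup described above.
def to_binary(example):
    example["label"] = "Non-toxic" if example["label"] == "Non-toxic" else "Toxic"
    return example

binary_ds = ds.map(to_binary)
```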
---