🧠 Toxicity_model_RoBERTa-base-bne – Multiclass Spanish Toxicity Classifier (Fine-tuned)

📌 Model Description

This model is a fine-tuned version of RoBERTa-base-bne, trained specifically to classify the toxicity level of Spanish-language user comments on news articles. It distinguishes between three categories (see the usage sketch after the list):

  • Non-toxic
  • Slightly toxic
  • Toxic
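
Below is a minimal inference sketch using the Hugging Face Transformers pipeline API. The repository id gplsi/Toxicity_model is the one listed on this card; the exact label strings returned depend on the id2label mapping in the model's config.json, so treat the expected labels in the comments as assumptions.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hugging Face Hub.
classifier = pipeline("text-classification", model="gplsi/Toxicity_model")

comments = [
    "Gracias por el artículo, muy informativo.",  # expected: non-toxic
    "Cállate, nadie quiere leer tus tonterías.",  # expected: toxic or slightly toxic
]

# Truncate long comments to the 512-token limit used during fine-tuning.
predictions = classifier(comments, truncation=True, max_length=512)
for comment, prediction in zip(comments, predictions):
    print(f"{comment!r} -> {prediction['label']} ({prediction['score']:.3f})")
```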

📂 Training Data

The model was fine-tuned on the SocialTOX dataset, a collection of Spanish-language comments annotated for varying levels of toxicity. The comments were collected from news platforms and reflect real-world online discourse. From this data, a multiclass classifier over the three categories above was trained.


⚙️ Training Hyperparameters

The model was fine-tuned with the following hyperparameters (a Trainer configuration sketch follows the list):

  • epochs: 7
  • learning_rate: 1.51e-6
  • adam_epsilon: 2.80e-8
  • weight_decay: 3.88e-12
  • batch_size: 16
  • max_seq_length: 512
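
As a rough guide to how these values fit together, the sketch below wires them into a Hugging Face Trainer. It is an illustrative reconstruction, not the original training script: the base checkpoint id PlanTL-GOB-ES/roberta-base-bne, the label mapping, and the tiny in-memory dataset (standing in for the SocialTOX splits, which are not bundled here) are all assumptions.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "PlanTL-GOB-ES/roberta-base-bne"  # assumed Hub id for RoBERTa-base-bne
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

# Placeholder examples standing in for the SocialTOX splits.
raw = Dataset.from_dict({
    "text": ["Gracias por el artículo.", "Cállate, idiota."],
    "label": [0, 2],  # assumed mapping: 0 = non-toxic, 1 = slightly toxic, 2 = toxic
})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)

# Hyperparameters taken from the list above.
training_args = TrainingArguments(
    output_dir="toxicity_model",
    num_train_epochs=7,
    learning_rate=1.51e-6,
    adam_epsilon=2.80e-8,
    weight_decay=3.88e-12,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```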
📦 Model Details

  • Hub repository: gplsi/Toxicity_model
  • Model size: 125M parameters (F32 tensors, Safetensors format)
  • Training dataset: SocialTOX