RoBERTa-ca Model Card
RoBERTa-ca is a new foundational Catalan language model built on the RoBERTa architecture. It is initialized through vocabulary adaptation from mRoBERTa: all weights are taken from mRoBERTa, while the embedding matrix receives a specialized treatment that accounts for the differences between the two tokenizers. The model is then continually pretrained on a Catalan-only corpus of 95GB of high-quality data.
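The exact adaptation procedure is not spelled out here, but the following is a minimal sketch of what such an embedding-matrix transfer can look like, assuming access to the mRoBERTa checkpoint and both tokenizers. The mRoBERTa hub identifier and the sub-piece averaging heuristic for unseen tokens are assumptions for illustration; the procedure actually used for RoBERTa-ca may differ.

```python
# Illustrative sketch of transferring an embedding matrix between tokenizers.
# The "BSC-LT/mRoBERTa" identifier and the averaging heuristic are assumptions;
# the actual RoBERTa-ca adaptation may handle these cases differently.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

src_model = AutoModelForMaskedLM.from_pretrained("BSC-LT/mRoBERTa")  # source weights
src_tok = AutoTokenizer.from_pretrained("BSC-LT/mRoBERTa")           # 256K-token vocabulary
tgt_tok = AutoTokenizer.from_pretrained("BSC-LT/RoBERTa-ca")         # 50K-token Catalan vocabulary

src_emb = src_model.get_input_embeddings().weight.data               # (256K, hidden)
hidden = src_emb.size(1)
tgt_emb = torch.empty(len(tgt_tok), hidden)

src_vocab = src_tok.get_vocab()
mean_emb = src_emb.mean(dim=0)
for token, tgt_id in tgt_tok.get_vocab().items():
    if token in src_vocab:
        # Token shared by both vocabularies: copy its embedding directly.
        tgt_emb[tgt_id] = src_emb[src_vocab[token]]
    else:
        # New token: average the embeddings of its mRoBERTa sub-pieces,
        # falling back to the global mean embedding if the split is empty.
        piece_ids = src_tok.encode(token, add_special_tokens=False)
        tgt_emb[tgt_id] = src_emb[piece_ids].mean(dim=0) if piece_ids else mean_emb

# Swap in the adapted embedding matrix (and tied LM head) before continual pretraining.
src_model.resize_token_embeddings(len(tgt_tok))
src_model.get_input_embeddings().weight.data.copy_(tgt_emb)
```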
Technical Description
Technical details of the RoBERTa-ca model.
Description | Value |
---|---|
Model Parameters | 125M |
Tokenizer Type | SPM (SentencePiece) |
Vocabulary size | 50,304 |
Precision | bfloat16 |
Context length | 512 |
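These values can be cross-checked against the published checkpoint with the standard transformers API; the expected values in the comments simply mirror the table above.

```python
# Sanity-check the published checkpoint against the table above.
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("BSC-LT/RoBERTa-ca")
tokenizer = AutoTokenizer.from_pretrained("BSC-LT/RoBERTa-ca")

print(config.vocab_size)           # expected: 50,304
print(tokenizer.model_max_length)  # context length, expected: 512
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
```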
Training Hyperparameters
Hyperparameter | Value |
---|---|
Pretraining Objective | Masked Language Modeling |
Learning Rate | 3E-05 |
Learning Rate Scheduler | Cosine |
Warmup | 2425 |
Optimizer | AdamW |
Optimizer Hyperparameters | AdamW (β1=0.9, β2=0.98, ε=1e-06) |
Weight Decay | 1E-02 |
Global Batch Size | 1024 |
Dropout | 1E-01 |
Attention Dropout | 1E-01 |
Activation Function | GeLU |
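For reference, the sketch below shows how these values map onto the standard PyTorch and transformers APIs. The total number of training steps is not stated in this card, so the value passed to the cosine schedule is a placeholder.

```python
# Hypothetical reconstruction of the optimizer and LR schedule from the table above.
import torch
from transformers import AutoModelForMaskedLM, get_cosine_schedule_with_warmup

model = AutoModelForMaskedLM.from_pretrained("BSC-LT/RoBERTa-ca")

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-5,            # Learning Rate
    betas=(0.9, 0.98),  # AdamW β1, β2
    eps=1e-6,           # AdamW ε
    weight_decay=1e-2,  # Weight Decay
)

total_steps = 100_000   # placeholder: the total step count is not given in this card
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2425,           # Warmup
    num_training_steps=total_steps,  # cosine decay over the full run
)
```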
How to use
>>> from transformers import pipeline
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='BSC-LT/RoBERTa-ca')
>>> pprint(unmasker("M'encanta la<mask>de Barcelona.",top_k=3))
[{'score': 0.6109828948974609,
'sequence': "M'encanta la ciutat de Barcelona.",
'token': 1125,
'token_str': 'ciutat'},
{'score': 0.04469362273812294,
'sequence': "M'encanta la platja de Barcelona.",
'token': 5404,
'token_str': 'platja'},
{'score': 0.02249019406735897,
'sequence': "M'encanta la gent de Barcelona.",
'token': 1261,
'token_str': 'gent'}]
>>> pprint(unmasker("Adoro menjar un bon plat de<mask>al costat de la platja.",top_k=3))
[{'score': 0.12922883033752441,
'sequence': 'Adoro menjar un bon plat de peix al costat de la platja.',
'token': 5802,
'token_str': 'peix'},
{'score': 0.12800152599811554,
'sequence': 'Adoro menjar un bon plat de carn al costat de la platja.',
'token': 6432,
'token_str': 'carn'},
{'score': 0.06676974892616272,
'sequence': 'Adoro menjar un bon plat de marisc al costat de la platja.',
'token': 31717,
'token_str': 'marisc'}]
>>> pprint(unmasker("Intento anar a la platja de<mask>cada any, és fantástica.",top_k=3))
[{'score': 0.06159511208534241,
'sequence': 'Intento anar a la platja de Pals cada any, és fantástica.',
'token': 28365,
'token_str': 'Pals'},
{'score': 0.04985760524868965,
'sequence': 'Intento anar a la platja de Calella cada any, és fantástica.',
'token': 11472,
'token_str': 'Calella'},
{'score': 0.048444587737321854,
'sequence': 'Intento anar a la platja de Lloret cada any, és fantástica.',
'token': 11420,
'token_str': 'Lloret'}]
This is equivalent to the following PyTorch script:
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch
model = AutoModelForMaskedLM.from_pretrained("BSC-LT/RoBERTa-ca")
tokenizer = AutoTokenizer.from_pretrained("BSC-LT/RoBERTa-ca")
# The "<mask>" token is at index -3, since position -1 holds the EOS token "</s>" and position -2 the "." token.
outputs = model(**tokenizer("La capital d'Espanya és<mask>.", return_tensors="pt")).logits
predicted_token = tokenizer.decode(torch.argmax(outputs[0,-3,:]))
print(f"La predicció és \"{predicted_token}\"." ) # The prediction is "Madrid"
In most of the evaluations presented below, the model is fine-tuned for each use case, with a task-specific head on top of the encoder producing the logits for that task (a sketch of this setup follows the results table).
Evaluation: CLUB Benchmark
Model performance in Catalan is assessed with CLUB (Catalan Language Understanding Benchmark), which consists of six tasks: Named Entity Recognition (NER), Part-of-Speech Tagging (POS), Semantic Textual Similarity (STS), Text Classification (TC), Textual Entailment (TE), and Question Answering (QA).
The following base foundational models have been considered for the comparison:
Foundational Model | Number of Parameters | Vocab Size | Description |
---|---|---|---|
BERTa | 126M | 52K | BERTa is a Catalan-specific language model pretrained with Catalan-only data. |
BERTinho | 109M | 30K | BERTinho is a monolingual BERT model for the Galician language. |
mBERT | 178M | 120K | Multilingual BERT model pretrained on the 104 languages with the largest Wikipedias. |
mRoBERTa | 283M | 256K | RoBERTa base model pretrained with 35 European languages and a larger vocabulary size. |
roberta-base-bne | 125M | 50K | RoBERTa base model pretrained with 570GB of data from web crawlings performed by the National Library of Spain from 2009 to 2019. |
RoBERTa-ca | 125M | 50K | RoBERTa-ca is a Catalan-specific language model obtained by using vocabulary adaptation from mRoBERTa. |
xlm-roberta-base | 279M | 250K | Foundational RoBERTa model pretrained with CommonCrawl data containing 100 languages. |
xlm-roberta-large | 561M | 250K | Foundational RoBERTa model pretrained with CommonCrawl data containing 100 languages. |
Task | roberta-base-bne (125M) | BERTa (126M) | mBERT (178M) | xlm-roberta-base (279M) | xlm-roberta-large (561M) | RoBERTa-ca (125M) | mRoBERTa (283M) |
---|---|---|---|---|---|---|---|
NER (F1) | 87.59 | 89.47 | 85.89 | 87.50 | 89.47 | 89.70 | 88.33 |
POS (F1) | 98.64 | 98.89 | 98.78 | 98.91 | 99.03 | 99.00 | 98.98 |
STS (Pearson) | 74.27 | 81.39 | 77.05 | 75.11 | 83.49 | 82.99 | 79.52 |
TC (Acc.) | 73.86 | 73.16 | 72.00 | 73.05 | 74.10 | 72.81 | 72.41 |
TE (Acc.) | 72.27 | 80.11 | 75.86 | 78.27 | 86.63 | 82.14 | 82.38 |
ViquiQuAD (F1) | 82.56 | 86.74 | 87.42 | 86.81 | 90.35 | 87.31 | 87.86 |
XQuAD (F1) | 60.56 | 67.38 | 67.72 | 68.56 | 76.08 | 70.53 | 69.40 |
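As noted above, these results come from fine-tuning the model separately on each CLUB task. The sketch below illustrates per-task fine-tuning for a classification task with the transformers Trainer; the dataset identifier, column names, label count, and hyperparameters are illustrative assumptions, not the configuration used to obtain the numbers in the table.

```python
# Illustrative per-task fine-tuning sketch for a CLUB classification task.
# The dataset id, column names ("text", "label"), label count, and
# hyperparameters are assumptions, not the evaluation setup used above.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("BSC-LT/RoBERTa-ca")
model = AutoModelForSequenceClassification.from_pretrained(
    "BSC-LT/RoBERTa-ca", num_labels=10  # illustrative label count
)

dataset = load_dataset("projecte-aina/tecla")  # assumed TC dataset id

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="roberta-ca-tc",
        learning_rate=3e-5,
        per_device_train_batch_size=16,
        num_train_epochs=3,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```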
Additional information
Author
The Language Technologies Lab from Barcelona Supercomputing Center.
Contact
For further information, please send an email to [email protected].
Copyright
Copyright(c) 2025 by Language Technologies Lab, Barcelona Supercomputing Center.
Funding
This work has been promoted and financed by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337.
Acknowledgements
This project has benefited from the data contributions of numerous teams and institutions.
In Catalonia, many institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà.
At the national level, we are especially grateful to our ILENIA project partners, CENID, HiTZ and CiTIUS, for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, Fundación Elcano and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria.
At the international level, we thank the Welsh government, DFKI, Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration.
Their valuable efforts have been instrumental in the development of this work.
Disclaimer
Be aware that the model may contain biases or other unintended distortions. When third parties deploy systems or provide services based on this model, or use the model themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence.
The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.
License