# HiTZ/eu_Qwen3-8B-Base
This is a Basque (eu) language-specific base language model developed by the HiTZ Research Center: starting from Qwen3-8B-Base, it was further pretrained on curated Basque data.
This model is released as a base model, intended for further fine-tuning or adaptation (e.g., instruction tuning, domain adaptation).
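As a quick illustration of such adaptation, below is a minimal, hypothetical instruction-tuning sketch using TRL's `SFTTrainer`; the dataset and output directory are placeholders for illustration, not part of the authors' recipe.

```python
# Hypothetical instruction-tuning sketch on top of this base model with TRL.
# The dataset and output directory are illustrative placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="HiTZ/eu_Qwen3-8B-Base",
    train_dataset=dataset,
    args=SFTConfig(output_dir="eu-qwen3-8b-sft"),
)
trainer.train()
```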
## Training Data
To train language-specific base LLMs, we followed the methodology proposed by Etxaniz et al. (2024), originally developed for Basque, and extended it to other low-resource languages. To enable fair comparisons across languages, we limited the corpus size for each language to roughly the same number of tokens. We also included a small English subset to mitigate catastrophic forgetting.
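A minimal sketch of this kind of mixing with the `datasets` library is shown below; the dataset identifiers and the 92/8 ratio are illustrative assumptions, not the exact recipe used for this model.

```python
# Illustrative sketch: interleave a large Basque corpus with a small English
# subset to mitigate catastrophic forgetting. Dataset names and the mixing
# ratio are assumptions, not the exact training recipe.
from datasets import interleave_datasets, load_dataset

basque = load_dataset("HiTZ/latxa-corpus-v1.1", split="train", streaming=True)
english = load_dataset("HuggingFaceFW/fineweb", name="sample-10BT",
                       split="train", streaming=True)

# ~3.5B eu tokens vs ~0.3B en tokens is roughly a 92/8 mix.
mixed = interleave_datasets([basque, english],
                            probabilities=[0.92, 0.08], seed=42)
```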
### Corpus composition
| Language | Documents | Tokens (Qwen3) |
|---|---|---|
| Basque (eu) | 4.2M | ~3.5B |
| English (en) | 0.5M | ~0.3B |
Exact token counts vary slightly with the tokenizer used, but the overall corpus size remains comparable across languages.
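For reference, the per-language counts in the table can be reproduced by tokenizing each document with the Qwen3 tokenizer; a minimal sketch (the documents here are illustrative):

```python
# Minimal sketch: count Qwen3 tokens over a handful of example documents.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B-Base")

docs = ["Kaixo, mundua!", "Hau euskarazko adibide bat da."]  # illustrative
n_tokens = sum(len(ids) for ids in tok(docs)["input_ids"])
print(n_tokens)
```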
### Data sources
Basque data was obtained from the Latxa corpus, which consists primarily of large-scale web-crawled content, news articles, and encyclopedic text.
The English subset was sampled from the FineWeb corpus.
## Model Training
- Sequence length: 8,192 tokens
- Effective batch size: 256 sequences
- Tokens per optimization step: ~2M (256 × 8,192 ≈ 2.1M)
- Learning rate schedule: cosine decay with 10% warm-up (sketched below)
- Peak learning rate: 1e-5
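The schedule above can be expressed with the standard Hugging Face helper; in the sketch below the total step count is an assumption (~3.8B tokens at ~2M tokens per step), and the small module stands in for the 8B model.

```python
# Sketch of cosine decay with 10% warm-up and peak LR 1e-5.
# total_steps is an assumption (~3.8B tokens / ~2M tokens per step);
# the Linear layer is a stand-in for the actual 8B model.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(16, 16)  # stand-in module
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # peak LR

total_steps = 1_900
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # 10% warm-up
    num_training_steps=total_steps,
)
```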
Training was conducted on the CINECA Leonardo high-performance computing cluster using Fully Sharded Data Parallel (FSDP) across 32 nodes, each equipped with 4 NVIDIA A100 GPUs (64 GB).
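A minimal FSDP sketch along these lines, assuming a launch under `torchrun` with one process per GPU (the tiny module again stands in for the 8B model):

```python
# Minimal FSDP sketch; run under torchrun with one process per GPU, e.g.
#   torchrun --nnodes 32 --nproc_per_node 4 train.py   (launch command assumed)
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
model = FSDP(torch.nn.Linear(1024, 1024).cuda())  # shards params and grads
```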
## Getting Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HiTZ/eu_Qwen3-8B-Base"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation of a Basque prompt ("Kaixo!" = "Hello!")
inputs = tokenizer("Kaixo!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Acknowledgements
This work has been partially supported by the Basque Government (research group funding IT1570-22 and the IKER-GAITU project), the Spanish Ministry for Digital Transformation and the Civil Service, and the EU-funded NextGenerationEU Recovery, Transformation and Resilience Plan (ILENIA project, 2022/TL22/00215335; and the ALIA project).