UK News-Style Article Generator (guardian1-prov)
guardian1-prov is a LoRA fine-tuned version of Google's Gemma-2-2B-IT, trained to generate long-form news articles in the style, tone, and structure of major UK online newspapers in 2024. The model imitates expressive British journalistic writing, headline structure, contextual openings, quotations, political commentary, and the sentence rhythm typical of UK broadsheets.
This model is suitable for:
- News article generation
- Creative writing in a journalistic tone
- Synthetic newsroom content
- Data-to-text generation
- Style-transfer experiments
This model was trained using Hugging Face AutoTrain.
Model Details
- Base model: google/gemma-2-2b-it
- Parameter-efficient training: LoRA (int4 quantization)
- Training platform: AutoTrain Advanced
- Dataset size: 1,048 broadsheet-style articles (2024)
- Training epochs: 3
- Max sequence length: 2048 tokens
- Task: Causal Language Modeling (SFT)
- Limitation: the model does not generate factually correct news; it reproduces only the style. It must be supplied with accurate facts and used under human supervision (see the usage sketch below).
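
The LoRA adapter can be loaded on top of the base model with transformers and peft. The sketch below is a minimal, illustrative example: it assumes the adapter is published under the Hub id theoracle/guardian1-prov (inferred from the model name; adjust the path if yours differs) and that the facts to be reported are supplied in the prompt, as required by the limitation above.

```python
# Minimal inference sketch for the guardian1-prov adapter on top of Gemma-2-2B-IT.
# Assumption: the adapter repo id is "theoracle/guardian1-prov"; change it if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-2b-it"
adapter_id = "theoracle/guardian1-prov"  # assumed Hub path for the LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Supply the facts yourself: the model reproduces broadsheet style, not truth.
prompt = (
    "Write a news article in the style of a UK broadsheet about the following "
    "facts: a new rail line opened between Leeds and Manchester on Monday."
)
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Generated text should be fact-checked before publication; the adapter only shapes tone and structure.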
Training Dataset
The dataset used is:
theoracle/guardian_article_prov
It contains 1,048 news articles, each merged into a single text field: