πŸ“° UK News-Style Article Generator (guardian1-prov)

guardian1-prov is a LoRA fine-tune of Google’s Gemma-2-2B-IT, trained to generate long-form news articles in the style, tone, and structure of major UK online newspapers in 2024. The model imitates expressive British journalistic writing: headline structure, contextual openings, quotations, political commentary, and the sentence rhythm typical of UK broadsheets.

This model is suitable for:

  • πŸ“ News article generation
  • ✏️ Creative writing in a journalistic tone
  • πŸ“° Synthetic newsroom content
  • πŸ” Data-to-text generation
  • πŸ“š Style-transfer experiments

This model was trained using Hugging Face AutoTrain.


πŸ“¦ Model Details

  • Base model: google/gemma-2-2b-it
  • Parameter-efficient training: LoRA (int4 quantization)
  • Training platform: AutoTrain Advanced
  • Dataset size: 1,048 broadsheet-style articles (2024)
  • Training epochs: 3
  • Max sequence length: 2048 tokens
  • Task: Causal Language Modeling (SFT)
  • Limitation: the model does not generate factually correct news; it reproduces only the style. It must be supplied with accurate facts and used under human supervision.
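Because this is a LoRA adapter rather than a full model, it is loaded on top of the base Gemma-2-2B-IT weights. Below is a minimal inference sketch assuming the `transformers` and `peft` libraries and access to the gated Gemma weights on Hugging Face; the example instruction and the `build_prompt`/`generate_article` helper names are illustrative, not part of this repository.

```python
# Minimal inference sketch for the guardian1-prov LoRA adapter.
# Assumes `transformers` and `peft` are installed and the Gemma licence
# has been accepted on Hugging Face; wording of the prompt is illustrative.
RUN_DEMO = False  # set True to actually download the weights and generate


def build_prompt(instruction: str) -> str:
    """Wrap an instruction in Gemma-2's chat-turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{instruction}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate_article(instruction: str, max_new_tokens: int = 512) -> str:
    # Heavy imports kept inside the function so the prompt helper
    # stays usable without the model weights downloaded.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "google/gemma-2-2b-it"
    adapter_id = "theoracle/guardian1-prov"

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if RUN_DEMO:
    print(generate_article("Write a broadsheet-style article about UK rail fares."))
```

As the limitation above notes, any facts you want in the article should be supplied in the instruction itself; the adapter contributes style, not knowledge.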

🧠 Training Dataset

The dataset used is:

theoracle/guardian_article_prov

It contains 1,048 news articles, each merged into a single text field.
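A single-text-field record is the shape AutoTrain's SFT task expects. As an illustration only, one record might look like the following; the column name `text` and the article content here are assumptions, not copied from the dataset:

```python
# Illustrative shape of one training record in theoracle/guardian_article_prov.
# The column name "text" follows AutoTrain's default SFT format, and the
# headline/body content below is invented for the example.
record = {
    "text": (
        "Rail fares set to rise again in January\n\n"
        "Commuters across England face another above-inflation increase, "
        "ministers confirmed on Tuesday..."
    )
}

# Headline and body live in the same field, separated by a blank line.
headline, body = record["text"].split("\n\n", 1)
```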

