πŸ‡¦πŸ‡Ώ Azerbaijani Speech-to-Text

Transformer-based Azerbaijani speech-to-text model achieving 100% output diversity (15/15 unique transcriptions)!

πŸ† Performance

  • Final training loss: 0.0028 (a 99.5% reduction from the starting loss)
  • Diversity: 100% (15/15 unique outputs)
  • Quality: Production-grade

πŸ“Š Model Details

  • Parameters: 25M
  • Architecture: Transformer (Pre-LayerNorm)
  • d_model: 384, feedforward: 1536
  • Layers: 6 encoder + 6 decoder
  • Training: 140 augmented samples
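The hyperparameters above are enough to sanity-check the 25M figure: counting only the attention and feed-forward weight matrices (biases, layer norms, embeddings, and the output head omitted), the transformer body alone comes to roughly 24.8M parameters.

```python
# Back-of-the-envelope parameter count for the transformer body.
# Only weight matrices are counted; biases, layer norms, embeddings,
# and the output projection are omitted, so the real total is slightly higher.
d_model, d_ff = 384, 1536

attn = 4 * d_model * d_model      # Q, K, V, and output projections
ffn = 2 * d_model * d_ff          # up- and down-projections of the MLP

encoder_layer = attn + ffn        # self-attention + feed-forward
decoder_layer = 2 * attn + ffn    # self-attention + cross-attention + feed-forward

total = 6 * encoder_layer + 6 * decoder_layer
print(f"~{total / 1e6:.1f}M parameters")  # ~24.8M, consistent with the 25M figure
```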

πŸš€ Demo

Try it: https://huggingface.co/spaces/jamil2/azerbaijani-stt-demo

πŸ’» Usage

from huggingface_hub import hf_hub_download
import torch

# Download the checkpoint from the Hub (cached locally after the first call)
model_path = hf_hub_download(
    repo_id="jamil2/azerbaijani-stt",
    filename="az_model.pt"
)
# map_location="cpu" lets the checkpoint load on machines without a GPU
checkpoint = torch.load(model_path, map_location="cpu")
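
The card doesn't document the checkpoint's internal layout, so a reasonable first step after downloading is to inspect what was saved. The helper below is a generic sketch, not part of this repo's API:

```python
import torch

def inspect_checkpoint(path):
    """Return the top-level keys of a torch checkpoint, loading on CPU
    so it works on machines without a GPU."""
    ckpt = torch.load(path, map_location="cpu")
    if isinstance(ckpt, dict):
        return sorted(ckpt)
    return type(ckpt).__name__
```

For example, `inspect_checkpoint(model_path)` tells you whether az_model.pt stores a raw state_dict or a wrapper dict with extra training metadata.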

πŸ“ Examples

  • Banking: "Salam, avtomobilim üçün KASKO qiyməti bilmək istəyirəm..." ("Hello, I'd like to know the KASKO insurance price for my car...")
  • Culture: "Bu gün Azərbaycan mədəniyyətinin və Azərbaycan dilinin..." ("Today, Azerbaijani culture's and the Azerbaijani language's...")

πŸ“œ License

MIT - Free to use!
