OpenSML: A Family of Open Small Language Models (coming soon)

William Zebrowski

We introduce OpenSML, a series of Open SMaLl Language Models. These models are built entirely with Apple's MLX framework.

The pre-training dataset is a slice of the OpenWebText dataset totaling approximately 2.3 billion tokens.
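As a rough illustration of what "a slice of approximately 2.3 billion tokens" means in practice, the sketch below takes documents from a corpus until a token budget is met. This is a minimal illustration using a naive whitespace tokenizer and a toy in-memory corpus; the actual OpenSML slice would be measured with the model's real tokenizer over the full OpenWebText dataset.

```python
def count_tokens(documents):
    """Count whitespace-separated tokens across an iterable of documents.

    Illustrative only: a real pipeline would use the model's tokenizer.
    """
    return sum(len(doc.split()) for doc in documents)


def take_slice(documents, token_budget):
    """Yield documents until the cumulative token count reaches the budget."""
    total = 0
    for doc in documents:
        if total >= token_budget:
            break
        total += len(doc.split())
        yield doc


# Toy stand-in for a streamed corpus (hypothetical content).
corpus = [
    "OpenWebText is a web-scraped text corpus.",
    "Each document contributes tokens to the slice.",
    "Counting stops once the budget is reached.",
]

slice_docs = list(take_slice(corpus, token_budget=10))
print(count_tokens(slice_docs))
```

The same budget-based loop scales to a streamed dataset, where documents are consumed lazily rather than held in memory.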

Bias, Risks, and Limitations

OpenSML is shared to advance open research by granting access to cutting-edge language models. However, because it is trained on publicly sourced data and released without safety warranties, it may produce content that is inaccurate, harmful, biased, or otherwise objectionable. Users and developers should therefore conduct rigorous safety evaluations and put in place filtering or other safeguards that suit their specific use cases.

Citation

If you find our work useful, please cite:

@misc{zebrowski2025opensml,
  title={OpenSML: A Family of Small Language Models},
  author={William Zebrowski},
  year={2025},
  howpublished={\url{https://github.com/wzebrowski/opensml}}
}