---
title: README
emoji: π
colorFrom: red
colorTo: yellow
sdk: static
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/673ab3647afcea17eb4378fd/XKKbARx5zCfggwT6LbCfo.jpeg
---
<center>
<img src="https://huggingface.co/spaces/SmallDoge/README/resolve/main/org_icon.png" alt="Doge" width="1080" height="720">
</center>

# SmallDoge
Welcome to **SmallDoge**, where we pioneer the development of compact, high-performance small language models (SLMs). We focus on building ultra-fast SLMs with innovative dynamic algorithms. We are committed to transparency and collaboration: all of our training details and code are openly available in the [SmallDoge GitHub repository](https://github.com/SmallDoges/small-doge).

**Our Mission:** To democratize access to advanced AI by developing efficient, open-source small language models that empower a wide range of applications and research.

Join our community on [Discord](https://discord.gg/P2yYH95N)!
## Explore Our Projects

We offer a suite of resources and models:

- [**Small-Doges**](https://huggingface.co/collections/SmallDoge/doge-slm-679cc991f027c4a3abbded4a): A versatile series of SLMs, including pre-trained base models, supervised fine-tuned models, and models enhanced with reinforcement learning (see the loading sketch after this list).
- [**Doge-CheckPoints**](https://huggingface.co/collections/SmallDoge/doge-checkpoint-679ce95dae1d498e3ad35068): A collection of model checkpoints designed for seamless continued training on new datasets, ensuring smoother adaptation and minimizing training instability.
- [**Small-Datasets**](https://huggingface.co/collections/SmallDoge/small-datasets-67cec4630df59e5581afbea1): Curated, multi-stage, high-quality datasets specifically engineered to train small language models effectively, boosting their capabilities and helpfulness.
- [**Doge-Downstream-Applications**](https://huggingface.co/collections/SmallDoge/doge-downstream-applications-679ce627a0b7820e04ca22bd): A selection of SLMs optimized for various downstream tasks and real-world applications.
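
As a quick start, here is a minimal sketch of loading one of the pre-trained base models with 🤗 Transformers. The model id `SmallDoge/Doge-20M` and the `trust_remote_code=True` flag are illustrative assumptions; check the individual model cards in the collections above for the exact ids and usage.

```python
# Minimal sketch: load a SmallDoge base model and generate text.
# Assumptions: the model id "SmallDoge/Doge-20M" and trust_remote_code=True
# are illustrative; consult the model card for the exact requirements.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-20M"  # assumed id from the Small-Doges collection

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hey, how are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```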