Improve model card: Add paper, code, and project page links, and main title
#1, opened by nielsr (HF Staff)

README.md CHANGED
```diff
@@ -1,11 +1,16 @@
 ---
+base_model: Qwen/Qwen2.5-32B
 language:
 - en
+library_name: transformers
 license: apache-2.0
 pipeline_tag: text-generation
-library_name: transformers
-base_model: Qwen/Qwen2.5-32B
 ---
+
+# K2-Think: A Parameter-Efficient Reasoning System
+
+📚 [Paper](https://huggingface.co/papers/2509.07604) - 📝 [Code](https://github.com/MBZUAI-IFM/K2-Think-SFT) - 🏢 [Project Page](https://k2think.ai)
+
 <center><img src="banner.png" alt="k2-think-banner"/></center>
 
 <p align="center">
@@ -69,8 +74,8 @@ We deploy K2-THINK on Cerebras Wafer-Scale Engine (WSE) systems, leveraging the
 
 | Platform | Throughput (tokens/sec) | Example: 32k-token response (time) |
 | --------------------------------- | ----------------------: | ---------------------------------: |
-| **Cerebras WSE (our deployment)** |
-| Typical **H100/H200** GPU setup |
+| **Cerebras WSE (our deployment)** | **\~2,000** | **\~16 s** |
+| Typical **H100/H200** GPU setup | \~200 | \~160 s |
 
 ---
 
```
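Since the updated metadata declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal usage sketch is shown below. The repository id and generation settings are assumptions for illustration, not taken from the diff; substitute this model repo's actual id.

```python
# Minimal sketch: loading the model with the transformers text-generation pipeline,
# as implied by the card metadata (library_name: transformers, pipeline_tag: text-generation).
from transformers import pipeline

model_id = "LLM360/K2-Think"  # assumption: placeholder, replace with this repo's actual id

generator = pipeline(
    "text-generation",   # matches pipeline_tag in the card metadata
    model=model_id,
    torch_dtype="auto",   # let transformers pick a suitable dtype
    device_map="auto",    # spread the 32B-parameter model across available devices
)

prompt = "Prove that the sum of two even integers is even."
output = generator(prompt, max_new_tokens=512, do_sample=False)
print(output[0]["generated_text"])
```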
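The timing column in the throughput table follows directly from the quoted rates; the short sketch below reproduces that arithmetic, using the table's example 32k-token response length and the approximate per-platform throughputs.

```python
# Sketch of the arithmetic behind the "32k-token response (time)" column:
# time ≈ response_length / decode_throughput.
response_tokens = 32_000  # example response length from the table
for platform, tokens_per_sec in [("Cerebras WSE", 2_000), ("H100/H200 GPU setup", 200)]:
    seconds = response_tokens / tokens_per_sec
    print(f"{platform}: ~{seconds:.0f} s")  # ~16 s and ~160 s, matching the table
```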