<p align="center">
<a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> |
<a href="https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf" target="_blank">Technical Report</a>
</p>
<p align="center">
👋 Join us on <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
</p>

## What's New
- [2025.06.06] **MiniCPM4** series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report [here](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf). 🔥🔥🔥

## MiniCPM4 Series
MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, achieving this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

- [BitCPM4-0.5B](https://huggingface.co/openbmb/BitCPM4-0.5B): Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. (**<-- you are here**)
- [BitCPM4-1B](https://huggingface.co/openbmb/BitCPM4-1B): Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width.
- [MiniCPM4-Survey](https://huggingface.co/openbmb/MiniCPM4-Survey): Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers.
- [MiniCPM4-MCP](https://huggingface.co/openbmb/MiniCPM4-MCP): Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements.

## Introduction
BitCPM4 models are ternary quantized models derived from the MiniCPM series through quantization-aware training (QAT), achieving significant improvements in both training efficiency and model parameter efficiency.
- Improvements to the training method
  - Searching hyperparameters with a wind-tunnel experiment on a small model.
  - Using a two-stage training method: training at high precision first and then applying QAT, making the best of the trained high-precision models and significantly reducing the computational resources required for the QAT phase.
- High parameter efficiency
  - Achieving performance comparable to that of full-precision models with a similar parameter count at a bit width of only 1.58 bits, demonstrating high parameter efficiency; a short illustrative sketch follows this list.
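For intuition, the sketch below shows one common way to fake-quantize a weight tensor to ternary values, using a BitNet-b1.58-style absmean scaling rule; the function name and the scaling choice are illustrative assumptions, not BitCPM4's actual QAT procedure. Since each weight takes one of three values, it carries log2(3) ≈ 1.58 bits, roughly a 90% reduction relative to 16-bit weights, which is where the figures above come from.

```python
import torch

def ternary_fake_quantize(w: torch.Tensor) -> torch.Tensor:
    """Illustrative ternary quantization (hypothetical helper, not BitCPM4's code).

    Each weight is mapped to one of {-1, 0, +1} times a per-tensor scale,
    then stored back in floating point ("fake" quantization), so standard
    frameworks can load and run the checkpoint unchanged.
    """
    scale = w.abs().mean()                                   # per-tensor absmean scale
    q = torch.clamp(torch.round(w / (scale + 1e-8)), -1, 1)  # ternary codes {-1, 0, +1}
    return q * scale                                         # fake-quantized weights

w = torch.randn(4, 4)
print(ternary_fake_quantize(w))  # every entry is -scale, 0, or +scale
```

This fake-quantized storage is also what lets the Usage section below run the model directly in `transformers`.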
## Usage
### Inference with Transformers
BitCPM4's parameters are stored in a fake-quantized format, which supports direct inference within the Hugging Face `transformers` framework.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

path = "openbmb/BitCPM4-0.5B"
```
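Building on the imports and checkpoint path above, a complete, self-contained example along standard `transformers` lines might look like the sketch below. The dtype, `trust_remote_code=True`, the prompt, and the generation settings are our assumptions rather than the card's original code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

path = "openbmb/BitCPM4-0.5B"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the fake-quantized checkpoint like any other transformers model.
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).to(device)

# Build a chat-style prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain ternary quantization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```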
BitCPM4's performance is comparable with other full-precision models of the same model size.

- When using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.

## LICENSE
- This repository and MiniCPM models are released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.

## Citation
- Please cite our [paper](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf) if you find our work valuable.

```bibtex
@article{minicpm4,
  title={{MiniCPM4}: Ultra-Efficient LLMs on End Devices},
  author={MiniCPM Team},
  year={2025}
}
```