Add model card for DCM
This PR adds a model card for the Dual-Expert Consistency Model (DCM) as described in [DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation](https://huggingface.co/papers/2506.03123).
It includes:
- A brief model description
- A link to the paper
- A link to the Github repository
- The relevant `pipeline_tag` and `library_name`
README.md
ADDED
@@ -0,0 +1,11 @@
+---
+license: mit
+library_name: diffusers
+pipeline_tag: text-to-video
+---
+
+# DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation
+
+This repository hosts the Dual-Expert Consistency Model (DCM) as presented in the paper [DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation](https://huggingface.co/papers/2506.03123). DCM addresses the challenge of applying Consistency Models to video diffusion, which often leads to temporal inconsistency and loss of detail. By using a dual-expert approach, DCM achieves state-of-the-art visual quality with significantly reduced sampling steps.
+
+For more information, please refer to the project's [Github repository](https://github.com/Vchitect/DCM).
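Since the card declares `library_name: diffusers` and `pipeline_tag: text-to-video`, a minimal sampling sketch could look like the following. The repository ID, inference-step count, and output handling are illustrative assumptions, not settings documented by the paper or the DCM repository:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Placeholder repository ID; substitute the actual Hub repo hosting the DCM checkpoint.
repo_id = "Vchitect/DCM"

# DiffusionPipeline resolves the concrete pipeline class from the repo's model_index.json.
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe.to("cuda")

prompt = "A panda playing guitar on a mountaintop at sunset"

# DCM targets few-step sampling; 4 steps is an illustrative choice, not a documented setting.
result = pipe(prompt=prompt, num_inference_steps=4)
frames = result.frames[0]

# Write the generated frames to an mp4 file.
export_to_video(frames, "dcm_sample.mp4", fps=16)
```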