BestWishYsh committed 6f90ee8 (verified) · 1 Parent(s): 241b93f

Update README.md

Files changed (1): README.md (+36 -1)
README.md CHANGED
@@ -13,4 +13,39 @@ thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/63468720dd6d90d82ccf3450/N9kKR052363-MYkJkmD2V.png
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ <div align="center">
+ <img src="https://github.com/PKU-YuanGroup/OpenS2V-Nexus/blob/main/__assets__/OpenS2V-Nexus_logo.png?raw=true" width="300px">
+ </div>
+ <h2 align="center"> <a href="https://pku-yuangroup.github.io/OpenS2V-Nexus/">OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation</a></h2>
+
+ <h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>
+
+ ## ✨ Summary
+ **OpenS2V-Eval** introduces 180 prompts spanning seven major categories of S2V, incorporating both real and synthetic test data. Furthermore, to accurately align S2V benchmarks with human preferences, we propose three automatic metrics, **NexusScore**, **NaturalScore**, and **GmeScore**, which separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on this, we conduct a comprehensive evaluation of 14 representative S2V models, highlighting their strengths and weaknesses across different content categories.
+
+ ## 📣 Evaluate Your Own Models
+ To evaluate your own model with OpenS2V-Eval as described in the [OpenS2V-Nexus paper](https://huggingface.co/papers/), please refer to [here](https://github.com/PKU-YuanGroup/OpenS2V-Nexus/tree/main/eval); a minimal sketch for fetching the benchmark files is shown below.
+
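+ A minimal sketch, not the official eval entry point (that lives in the `eval` folder linked above): the benchmark files can first be pulled locally with `huggingface_hub` before running your model over the prompts.
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Fetch the OpenS2V-Eval benchmark files from the Hugging Face Hub.
+ # Inspect local_dir before wiring it into your pipeline, since the
+ # exact file layout is defined by the dataset repo.
+ local_dir = snapshot_download(
+     repo_id="BestWishYsh/OpenS2V-Eval",
+     repo_type="dataset",
+ )
+ print("Benchmark downloaded to:", local_dir)
+ ```
+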
+ ## ⚙️ Get Videos Generated by Different S2V Models
+ For more details, please refer to [here](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval/tree/main/Results); a download sketch is shown below.
+
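+ A hedged sketch, assuming the generated videos sit under the `Results/` folder linked above: `snapshot_download` with `allow_patterns` fetches just that subtree instead of the whole dataset repo.
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download only the pre-generated videos under Results/, skipping the
+ # rest of the dataset repo to save bandwidth. "Results/*" is an
+ # fnmatch-style pattern, so it matches everything nested under Results/.
+ local_dir = snapshot_download(
+     repo_id="BestWishYsh/OpenS2V-Eval",
+     repo_type="dataset",
+     allow_patterns=["Results/*"],
+ )
+ print("Results downloaded to:", local_dir)
+ ```
+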
+ ## 💡 Description
+ - **Repository:** [Code](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), [Page](https://pku-yuangroup.github.io/OpenS2V-Nexus/), [Dataset](https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M), [Benchmark](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval)
+ - **Paper:** [https://huggingface.co/papers](https://huggingface.co/papers)
+ - **Point of Contact:** [Shenghai Yuan]([email protected])
+
+ ## ✏️ Citation
+ If you find our paper and code useful in your research, please consider giving us a star and a citation.
+
+ ```BibTeX
+ @article{yuan2025opens2v,
+   title={OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation},
+   author={Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Ma, Chongyang and Luo, Jiebo and Yuan, Li},
+   journal={arXiv preprint arXiv},
+   year={2025}
+ }
+ ```