---
pretty_name: SongFormBench
tags:
- MSA
- Benchmark
license: cc-by-4.0
language:
- en
- zh
---

# SongFormBench

[English | [中文](README_ZH.md)]

**A High-Quality Benchmark for Music Structure Analysis**

<div align="center">

[arXiv](https://arxiv.org/abs/2510.02797)
[Code](https://github.com/ASLP-lab/SongFormer)
[Demo](https://huggingface.co/spaces/ASLP-lab/SongFormer)
[Model](https://huggingface.co/ASLP-lab/SongFormer)
[SongFormDB](https://huggingface.co/datasets/ASLP-lab/SongFormDB)
[SongFormBench](https://huggingface.co/datasets/ASLP-lab/SongFormBench)
[Discord](https://discord.gg/p5uBryC4Zs)
[ASLP@NPU](http://www.npu-aslp.org/)

</div>

<div align="center">
<h3>
Chunbo Hao<sup>1*</sup>, Ruibin Yuan<sup>2,5*</sup>, Jixun Yao<sup>1</sup>, Qixin Deng<sup>3,5</sup>,<br>Xinyi Bai<sup>4,5</sup>, Wei Xue<sup>2</sup>, Lei Xie<sup>1†</sup>
</h3>

<p>
<sup>*</sup>Equal contribution <sup>†</sup>Corresponding author
</p>

<p>
<sup>1</sup>Audio, Speech and Language Processing Group (ASLP@NPU),<br>Northwestern Polytechnical University<br>
<sup>2</sup>Hong Kong University of Science and Technology<br>
<sup>3</sup>Northwestern University<br>
<sup>4</sup>Cornell University<br>
<sup>5</sup>Multimodal Art Projection (M-A-P)
</p>
</div>

---

## What is SongFormBench?

SongFormBench is a **carefully curated, expert-annotated benchmark** that provides a unified, reproducible standard for evaluating music structure analysis (MSA) models.

### Dataset Composition

- **SongFormBench-HarmonixSet (BHX)**: 200 songs from HarmonixSet
- **SongFormBench-CN (BC)**: 100 Chinese popular songs

**Total: 300 high-quality annotated songs**

---

## Key Highlights

### Unified Evaluation Standard
- Establishes a **standardized benchmark** for fair comparison across MSA models
- Eliminates inconsistencies in evaluation protocols
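
To make the protocol concrete: MSA boundary detection is commonly scored with a hit-rate F-measure, where an estimated boundary counts as a hit if it falls within a tolerance window (commonly ±0.5 s) of a reference boundary. The sketch below is a simplified illustration using greedy matching; evaluation tools such as `mir_eval` use maximum bipartite matching instead.

```python
def boundary_f1(ref, est, window=0.5):
    """Hit-rate F-measure for segment boundaries (simplified sketch).

    An estimated boundary is a hit if it lies within +/-window seconds
    of a not-yet-matched reference boundary (greedy one-to-one matching).
    """
    ref, est = sorted(ref), sorted(est)
    matched, hits = set(), 0
    for e in est:
        for i, r in enumerate(ref):
            if i not in matched and abs(e - r) <= window:
                matched.add(i)
                hits += 1
                break
    if not ref or not est:
        return 0.0
    precision = hits / len(est)
    recall = hits / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A prediction that misses one reference boundary and adds a spurious one:
print(boundary_f1([0.0, 15.2, 42.7, 60.0], [0.1, 15.5, 80.0]))  # ~0.571
```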

### Simple Label System
- Builds on the widely used 7-class label scheme described in [arXiv:2205.14700](https://arxiv.org/abs/2205.14700)
- Additionally preserves **pre-chorus** segments for finer granularity
- Mapping pre-chorus → verse recovers the standard 7 classes for compatibility
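
The pre-chorus → verse conversion is a one-entry relabeling. A minimal sketch (the label strings here are illustrative, not necessarily the dataset's exact vocabulary):

```python
# Fold "pre-chorus" into "verse" to recover the standard 7-class scheme.
TO_SEVEN_CLASS = {"pre-chorus": "verse"}

def to_seven_class(labels):
    """Map a sequence of section labels to the 7-class scheme."""
    return [TO_SEVEN_CLASS.get(label, label) for label in labels]

print(to_seven_class(["intro", "verse", "pre-chorus", "chorus", "outro"]))
# ['intro', 'verse', 'verse', 'chorus', 'outro']
```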

### Expert-Verified Quality
- Multi-source validation
- Manual corrections by expert annotators

### Multilingual Coverage
- **First Chinese MSA dataset** (100 songs)
- Bridges the gap in Chinese music structure analysis
- Enables cross-lingual MSA research

---

## Getting Started

### Quick Load
```python
from datasets import load_dataset

# Load the complete benchmark
dataset = load_dataset("ASLP-lab/SongFormBench")
```
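
The exact schema is documented on the dataset card. As an illustration of typical MSA annotation handling (the function and field layout below are hypothetical, not the dataset's API), annotations given as boundary onsets plus per-segment labels can be turned into `(start, end, label)` intervals:

```python
def to_segments(boundaries, labels, duration):
    """Convert boundary onsets [t0, t1, ...] and per-segment labels into
    (start, end, label) tuples; the last segment ends at `duration`."""
    ends = list(boundaries[1:]) + [duration]
    return [(start, end, label)
            for start, end, label in zip(boundaries, ends, labels)]

segments = to_segments([0.0, 12.5, 40.0], ["intro", "verse", "chorus"], 55.0)
print(segments)
# [(0.0, 12.5, 'intro'), (12.5, 40.0, 'verse'), (40.0, 55.0, 'chorus')]
```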

---

## Resources & Links

- Paper: [arXiv:2510.02797](https://arxiv.org/abs/2510.02797)
- Code: [GitHub Repository](https://github.com/ASLP-lab/SongFormer)
- Model: [SongFormer](https://huggingface.co/ASLP-lab/SongFormer)
- Dataset: [SongFormDB](https://huggingface.co/datasets/ASLP-lab/SongFormDB)

---

## Citation

```bibtex
@misc{hao2025songformer,
  title         = {SongFormer: Scaling Music Structure Analysis with Heterogeneous Supervision},
  author        = {Chunbo Hao and Ruibin Yuan and Jixun Yao and Qixin Deng and Xinyi Bai and Wei Xue and Lei Xie},
  year          = {2025},
  eprint        = {2510.02797},
  archivePrefix = {arXiv},
  primaryClass  = {eess.AS},
  url           = {https://arxiv.org/abs/2510.02797}
}
```

---

## Mel Spectrogram Details

<details>
<summary>Click to expand/collapse</summary>

For environment configuration, refer to the official BigVGAN implementation. If an audio source becomes unavailable, you can reconstruct the audio as follows.

### SongFormBench-HarmonixSet
Uses the official HarmonixSet mel spectrograms. To reproduce:

```bash
# Clone the BigVGAN repository
git clone https://github.com/NVIDIA/BigVGAN.git

# Navigate to this repository's utils directory
cd utils/HarmonixSet

# Update BIGVGAN_REPO_DIR in inference_e2e.sh, then run the inference script
bash inference_e2e.sh
```

### SongFormBench-CN
Reproduce using [**bigvgan_v2_44khz_128band_256x**](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_256x).

First download bigvgan_v2_44khz_128band_256x, then add its project directory to your `PYTHONPATH`; after that, run the inference code referenced below:
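
Assuming the checkpoint repository is cloned to `~/bigvgan_v2_44khz_128band_256x` (an illustrative path, adjust to your setup), the `PYTHONPATH` step looks like:

```shell
# Illustrative path; point this at your actual clone of the checkpoint repo
export PYTHONPATH="$HOME/bigvgan_v2_44khz_128band_256x:$PYTHONPATH"
```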

```python
# See the implementation in:
# utils/CN/infer.py
```
</details>

---

## Contact

For questions, issues, or collaboration opportunities, please visit our [GitHub repository](https://github.com/ASLP-lab/SongFormer) or open an issue.