---
license: mit
language:
- en
---
# CineMA - A Foundation Model for Cine Cardiac Magnetic Resonance Images 🎥🫀

This repository contains the weights for CineMA, a foundation model for cine cardiac magnetic resonance (CMR) imaging based on the masked autoencoder (MAE) architecture. The model was pre-trained on over 74,000 pairs of short-axis and long-axis cine CMR images from the UK Biobank.
CineMA was evaluated across a diverse range of clinically relevant downstream tasks, including:
- Ventricle and myocardium segmentation
- Cardiovascular disease (CVD) detection and classification
- Patient sex classification
- CMR machine vendor classification
- Ejection fraction (EF) regression
- Patient body mass index (BMI) regression
- Patient age regression
- Mid-ventricular and apical landmark localization
These tasks were studied across multiple datasets.
Compared to convolutional neural network baselines such as UNet and ResNet, CineMA demonstrated superior or comparable performance, especially in sample efficiency and generalization to out-of-distribution data not seen during pretraining or fine-tuning.
By releasing the model weights and the code for pretraining, fine-tuning, and inference, this release aims to lower the barrier to entry for cardiac imaging research, foster reproducibility, and encourage broader adoption across institutions.
➡️ Manuscript: TBD
➡️ Code: mathpluscode/CineMA
## Fine-tuned CineMA Models
The filenames of fine-tuned model weights follow the convention `finetuned/<task>/<data>_<view>_<seed>.safetensors`, where the `<seed>` values 0, 1, and 2 correspond to different training seeds.
Check the "Inference Example" column to see example inference scripts using these trained models.
## Pre-trained CineMA Model
The pre-trained CineMA model backbone is available at `pretrained/cinema.safetensors`, with its configuration at `pretrained/cinema.yaml`.
The following scripts demonstrate how to fine-tune this backbone on a preprocessed version of the ACDC dataset:
- Ventricle and myocardium segmentation
- Cardiovascular disease classification
- Ejection fraction regression
## Citation

## Contact
For questions or collaborations, please contact Yunguan Fu ([email protected]).