---
title: 'DDMR: Deep Deformation Map Registration of CT/MRIs'
colorFrom: indigo
colorTo: indigo
sdk: docker
app_port: 7860
emoji: 🧠
pinned: false
license: mit
app_file: demo/app.py
---
<div align="center">
<img src="https://user-images.githubusercontent.com/30429725/204778476-4d24c659-9287-48b8-b616-92016ffcf4f6.svg" alt="drawing" width="600">
</div>
<div align="center">
<h1 align="center">DDMR: Deep Deformation Map Registration</h1>
<h3 align="center">Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation</h3>
[License](https://github.com/DAVFoundation/captain-n3m0/blob/master/LICENSE)
[Build Actions](https://github.com/jpdefrutos/DDMR/actions/workflows/deploy.yml)
[DOI](https://doi.org/10.1371/journal.pone.0282110)
<a target="_blank" href="https://huggingface.co/spaces/andreped/DDMR"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-yellow.svg"></a>
**DDMR** was developed by SINTEF Health Research. The corresponding manuscript describing the framework has been published in [PLOS ONE](https://journals.plos.org/plosone/) and is openly available [here](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0282110).
</div>
## 💻 Getting started
1. Set up a virtual environment:
```
virtualenv -p python3 venv --clear
source venv/bin/activate
```
2. Install requirements:
```
pip install git+https://github.com/jpdefrutos/DDMR
```
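After installation, you can verify that the `ddmr` CLI is available on your PATH:
```
ddmr --help
```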
## 🤖 How to use
Use the following CLI command to register a pair of images (a concrete example follows the option list):
```
ddmr --fixed path/to/fixed_image.nii.gz --moving path/to/moving_image.nii.gz --outputdir path/to/output/dir -a <anatomy> --model <model> --gpu <gpu-number> --original-resolution
```
where:
* anatomy: the type of anatomy to register: B (brain) or L (liver)
* model: the model to use:
  + BL-N (baseline with NCC)
  + BL-NS (baseline with NCC and SSIM)
  + SG-ND (segmentation-guided with NCC and DSC)
  + SG-NSD (segmentation-guided with NCC, SSIM, and DSC)
  + UW-NSD (uncertainty-weighted with NCC, SSIM, and DSC)
  + UW-NSDH (uncertainty-weighted with NCC, SSIM, DSC, and HD)
* gpu: the number of the GPU to run the model on, if you have multiple GPUs and only want to use one
* original-resolution: (flag) upsample the registered image to the fixed-image resolution (disabled if the flag is not present)
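For example, registering a pair of brain MRIs with the uncertainty-weighted UW-NSD model on GPU 0 might look like this (the file paths below are placeholders):
```
ddmr --fixed data/fixed_T1.nii.gz --moving data/moving_T1.nii.gz --outputdir results/ -a B --model UW-NSD --gpu 0 --original-resolution
```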
Use `ddmr --help` to see additional options, such as using precomputed segmentations to crop the images to a desired ROI, or enabling debugging.
## 🤗 Demo <a target="_blank" href="https://huggingface.co/spaces/andreped/DDMR"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-yellow.svg"></a>
A live demo for easily testing the best-performing pretrained models was developed with Gradio and is deployed on `Hugging Face`.
To access the live demo, click on the `Hugging Face` badge above. Below is a snapshot of the current state of the demo app.
<img width="1800" alt="Screenshot 2023-10-22 at 14 42 49" src="https://github.com/jpdefrutos/DDMR/assets/29090665/ceb8797d-1a06-4929-994c-0838e1261e32">
<details>
<summary>Development</summary>
To develop the Gradio app locally, you can use either Python or Docker.
#### Python
You can run the app locally with:
```
python demo/app.py --cwd ./ --share 0
```
Then open `http://127.0.0.1:7860` in your favourite internet browser to view the demo.
#### Docker
Alternatively, you can use docker:
```
docker build -t ddmr .
docker run -it -p 7860:7860 ddmr
```
Then open `http://127.0.0.1:7860` in your favourite internet browser to view the demo.
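If you have a GPU and the NVIDIA Container Toolkit installed, you may be able to pass it through to the container (assuming the image was built with GPU support):
```
docker run -it --gpus all -p 7860:7860 ddmr
```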
</details>
## 🏋️‍♂️ Training
Use the "MultiTrain" scripts to launch training, providing the necessary parameters. The scripts in the COMET folder accept a `.ini` configuration file (see `COMET/train_config_files/` for example configurations).
For instance:
```
python TrainingScripts/Train_3d.py
```
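The exact sections and keys of these `.ini` files are defined by the examples in `COMET/train_config_files/`. As a minimal sketch (the file name, sections, and keys below are placeholders, not taken from the repository), such a file can be inspected with Python's standard `configparser`:
```
import configparser

# Hypothetical illustration: the real section/key names are defined by the
# example files in COMET/train_config_files/.
config = configparser.ConfigParser()
config.read("COMET/train_config_files/example.ini")

for section in config.sections():
    print(f"[{section}]")
    for key, value in config[section].items():
        print(f"  {key} = {value}")
```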
## 🔍 Evaluate
Use the "Evaluate_network" scripts to test the trained models. In the Brain folder, use `Evaluate_network__test_fixed.py` instead.
For instance:
```
python EvaluationScripts/evaluation.py
```
## ✨ How to cite
Please consider citing our paper if you find this work useful:
<pre>
@article{perezdefrutos2022ddmr,
title = {Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation},
author = {Pérez de Frutos, Javier AND Pedersen, André AND Pelanis, Egidijus AND Bouget, David AND Survarachakan, Shanmugapriya AND Langø, Thomas AND Elle, Ole-Jakob AND Lindseth, Frank},
journal = {PLOS ONE},
publisher = {Public Library of Science},
year = {2023},
month = {02},
volume = {18},
doi = {10.1371/journal.pone.0282110},
url = {https://doi.org/10.1371/journal.pone.0282110},
pages = {1-14},
number = {2}
}
</pre>
## ⭐ Acknowledgements
This project is based on the [VoxelMorph](https://github.com/voxelmorph/voxelmorph) library and its related publication:
<pre>
@article{balakrishnan2019voxelmorph,
title={VoxelMorph: A Learning Framework for Deformable Medical Image Registration},
author={Balakrishnan, Guha and Zhao, Amy and Sabuncu, Mert R. and Guttag, John and Dalca, Adrian V.},
journal={IEEE Transactions on Medical Imaging},
year={2019},
volume={38},
number={8},
pages={1788-1800},
doi={10.1109/TMI.2019.2897538}
}
</pre>