---
license: mit
tags:
- depth-estimation
- onnx
- computer-vision
- visiondepth3d
- mit-license
---
# Distill-Any-Depth-Large (ONNX) – For VisionDepth3D
> **Model Origin:** This model is based on [Distill-Any-Depth by Westlake-AGI-Lab](https://github.com/Westlake-AGI-Lab/Distill-Any-Depth), originally developed by Westlake-AGI-Lab.
> I did not train this model — I have converted it to ONNX format for fast, GPU-accelerated inference within tools such as VisionDepth3D.
## 🧠 About This Model
This is a direct conversion of the **Distill-Any-Depth** PyTorch model to **ONNX**, enabling real-time depth estimation from single RGB images.
### ✔️ Key Features:
- ONNX format (exported from PyTorch)
- Compatible with ONNX Runtime and TensorRT
- Excellent for 2D to 3D depth workflows
- Works seamlessly with **VisionDepth3D**
## 📌 Intended Use
- Real-time or batch depth map generation
- 2D to 3D conversion pipelines (e.g., SBS 3D video)
- Runs on Windows and Linux (CUDA supported)
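For use outside VisionDepth3D, inference can be run directly with ONNX Runtime. The sketch below is a minimal example, not the official pipeline: the input resolution (518×518) and ImageNet mean/std normalization are assumptions based on common depth-model preprocessing, so check the original Distill-Any-Depth repository for the exact values.

```python
# Minimal ONNX Runtime inference sketch (assumed preprocessing, see note above).
import numpy as np

# Assumed ImageNet normalization constants.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """HWC uint8 RGB image -> 1x3xHxW float32 tensor."""
    x = rgb.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return x.transpose(2, 0, 1)[None, ...]  # HWC -> NCHW, add batch dim

def run_depth(model_path: str, rgb: np.ndarray) -> np.ndarray:
    """Run the ONNX model and return a 2-D relative depth map."""
    import onnxruntime as ort  # pip install onnxruntime-gpu (or onnxruntime)
    sess = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = sess.get_inputs()[0].name
    depth = sess.run(None, {input_name: preprocess(rgb)})[0]
    return np.squeeze(depth)
```

With TensorRT installed, `TensorrtExecutionProvider` can be prepended to the provider list for additional speed.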
## 📜 License and Attribution
### Citation
```
@article{he2025distill,
  title   = {Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator},
  author  = {Xiankang He and Dongyan Guo and Hongji Li and Ruibo Li and Ying Cui and Chi Zhang},
  year    = {2025},
  journal = {arXiv preprint arXiv:2502.19204}
}
```
- **Source Model:** [Distill-Any-Depth by Westlake-AGI-Lab](https://github.com/Westlake-AGI-Lab/Distill-Any-Depth)
- **License:** MIT
- **Modifications:** Only format conversion (no retraining or weight changes)
> If you use this model, please credit the original authors: Westlake-AGI-Lab.
## 💻 How to Use In VisionDepth3D
Place the folder containing the ONNX model into the `Weights` folder of your VisionDepth3D installation:
```
VisionDepth3D/
  Weights/
    Distill Any Depth Large/
      model.onnx
```