---
license: mit
tags:
- depth-estimation
- onnx
- computer-vision
- visiondepth3d
- mit-license
---
# Distill-Any-Depth-Small (ONNX) – For VisionDepth3D

**Model Origin:** This model is based on Distill-Any-Depth, originally developed by Westlake-AGI-Lab.
I did not train this model — I converted it to ONNX format for fast, GPU-accelerated inference in tools such as VisionDepth3D.
## 🧠 About This Model
This is a direct conversion of the Distill-Any-Depth PyTorch model to ONNX, intended for lightweight, real-time depth estimation from single RGB images.
### ✔️ Key Features
- ONNX format (exported from PyTorch)
- Compatible with ONNX Runtime and TensorRT
- Excellent for 2D to 3D depth workflows
- Works seamlessly with VisionDepth3D
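The features above can be exercised with a short ONNX Runtime sketch. Note this is a minimal example, not VisionDepth3D's own pipeline: the 518×518 input size and ImageNet normalization are assumptions based on the original Distill-Any-Depth preprocessing — verify them against the input signature of the model you downloaded.

```python
import numpy as np

# Assumed preprocessing: scale to [0, 1], ImageNet mean/std, NCHW layout.
# Check these against your exported model before relying on them.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 uint8 image, already resized to the model's input size
    (assumed 518x518 here). Returns a 1x3xHxW float32 tensor."""
    x = rgb.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return x.transpose(2, 0, 1)[None]  # HWC -> NCHW, add batch dim

def run_depth(model_path: str, rgb: np.ndarray) -> np.ndarray:
    """Run one frame through the ONNX model and return the raw prediction."""
    import onnxruntime as ort  # pip install onnxruntime-gpu for CUDA

    session = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: preprocess(rgb)})[0]
```

ONNX Runtime falls back to the CPU provider automatically if CUDA is unavailable, so the same call works on both GPU and CPU machines.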
## 📌 Intended Use
- Real-time or batch depth map generation
- 2D to 3D conversion pipelines (e.g., SBS 3D video)
- Runs on Windows and Linux (CUDA supported)
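For the 2D-to-3D use case above, the model's raw output is a relative (not metric) depth prediction, so a common first step is normalizing each frame to an 8-bit depth map before feeding it to an SBS pipeline. A minimal numpy sketch, assuming per-frame min-max normalization (temporal smoothing may work better for video):

```python
import numpy as np

def to_depth8(pred: np.ndarray) -> np.ndarray:
    """Normalize a raw relative-depth prediction to uint8 [0, 255].

    pred: 2D float array (squeeze any batch/channel dims first).
    Per-frame min-max scaling is a simple, common choice.
    """
    d = pred.astype(np.float32)
    d -= d.min()
    rng = d.max()
    if rng > 0:          # guard against a constant (flat) prediction
        d /= rng
    return (d * 255.0).astype(np.uint8)
```

The resulting 8-bit map can be saved as a grayscale image or passed directly to a depth-based 3D renderer.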
## 📜 License and Attribution
### Citation

```bibtex
@article{he2025distill,
  title   = {Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator},
  author  = {Xiankang He and Dongyan Guo and Hongji Li and Ruibo Li and Ying Cui and Chi Zhang},
  year    = {2025},
  journal = {arXiv preprint arXiv:2502.19204}
}
```
- Source Model: Distill-Any-Depth by Westlake-AGI-Lab
- License: MIT
- Modifications: Only format conversion (no retraining or weight changes)
If you use this model, please credit the original authors: Westlake-AGI-Lab.
## 💻 How to Use in VisionDepth3D
Place the folder containing the ONNX model into the `weights` folder of VisionDepth3D:

```
VisionDepth3D/
└── weights/
    └── Distill Any Depth Small/
        └── model.onnx
```