import streamlit as st
from streamlit_extras.switch_page_button import switch_page

st.title("Depth Anything V2")
st.success("""[Original tweet](https://twitter.com/mervenoyann/status/1803063120354492658) (June 18, 2024)""", icon="ℹ️")
st.markdown(""" """)
| st.markdown(""" | |
| I love Depth Anything V2 😍 | |
| It’s <a href='Depth_Anything' target='_self'>Depth Anything</a>, but scaled with both larger teacher model and a gigantic dataset! Let’s unpack 🤓🧶! | |
| """, unsafe_allow_html=True) | |
| st.markdown(""" """) | |
| st.image("pages/Depth_Anything_v2/image_1.jpg", use_column_width=True) | |
| st.markdown(""" """) | |
| st.markdown(""" | |
| The authors have analyzed Marigold, a diffusion based model against Depth Anything and found out what’s up with using synthetic images vs real images for MDE: | |
| 🔖 Real data has a lot of label noise, inaccurate depth maps (caused by depth sensors missing transparent objects etc) | |
| 🔖 Synthetic data have more precise and detailed depth labels and they are truly ground-truth, but there’s a distribution shift between real and synthetic images, and they have restricted scene coverage | |
| """) | |
| st.markdown(""" """) | |
| st.image("pages/Depth_Anything_v2/image_2.jpg", use_column_width=True) | |
| st.markdown(""" """) | |
| st.markdown(""" | |
| The authors train different image encoders only on synthetic images and find out unless the encoder is very large the model can’t generalize well (but large models generalize inherently anyway) 🧐 | |
| But they still fail encountering real images that have wide distribution in labels 🥲 | |
| """) | |
| st.markdown(""" """) | |
| st.image("pages/Depth_Anything_v2/image_3.jpg", use_column_width=True) | |
| st.markdown(""" """) | |
| st.markdown(""" | |
| Depth Anything v2 framework is to... | |
| 🦖 Train a teacher model based on DINOv2-G based on 595K synthetic images | |
| 🏷️ Label 62M real images using teacher model | |
| 🦕 Train a student model using the real images labelled by teacher | |
| Result: 10x faster and more accurate than Marigold! | |
| """) | |
| st.markdown(""" """) | |
| st.image("pages/Depth_Anything_v2/image_4.jpg", use_column_width=True) | |
| st.markdown(""" """) | |
| st.markdown(""" | |
| The authors also construct a new benchmark called DA-2K that is less noisy, highly detailed and more diverse! | |
| I have created a [collection](https://t.co/3fAB9b2sxi) that has the models, the dataset, the demo and CoreML converted model 😚 | |
| """) | |
| st.markdown(""" """) | |
| st.info(""" | |
| Ressources: | |
| [Depth Anything V2](https://arxiv.org/abs/2406.09414) | |
| by Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao (2024) | |
| [GitHub](https://github.com/DepthAnything/Depth-Anything-V2) | |
| [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/depth_anything_v2)""", icon="📚") | |
| st.markdown(""" """) | |
| st.markdown(""" """) | |
| st.markdown(""" """) | |
| col1, col2, col3 = st.columns(3) | |
| with col1: | |
| if st.button('Previous paper', use_container_width=True): | |
| switch_page("DenseConnector") | |
| with col2: | |
| if st.button('Home', use_container_width=True): | |
| switch_page("Home") | |
| with col3: | |
| if st.button('Next paper', use_container_width=True): | |
| switch_page("Florence-2") |