NikV09 committed
Commit f1b59a9 · verified · Parent(s): b9f0c7d

Update README.md

Files changed (1): README.md +32 -4
README.md CHANGED
@@ -2,9 +2,37 @@
 tags:
 - model_hub_mixin
 - pytorch_model_hub_mixin
+- computer-vision
+- 3d-reconstruction
+- multi-view-stereo
+- depth-estimation
+- camera-pose
+- covisibility
+- mapanything
+license: apache-2.0
+language:
+- en
+pipeline_tag: image-to-3d
 ---
+## Overview
 
-This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
-- Code: [More Information Needed]
-- Paper: [More Information Needed]
-- Docs: [More Information Needed]
+MapAnything is a simple, end-to-end trained transformer model that directly regresses the factored metric 3D geometry of a scene from various input modalities. A single feed-forward model supports over 12 different 3D reconstruction tasks, including multi-image SfM, multi-view stereo, monocular metric depth estimation, registration, depth completion, and more.
+
+This is the Apache 2.0 variant of the model.
+
+## Quick Start
+
+Please refer to our [GitHub repo](https://github.com/facebookresearch/map-anything).
+
+## Citation
+
+If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:
+
+```bibtex
+@inproceedings{keetha2025mapanything,
+  title={{MapAnything}: Universal Feed-Forward Metric {3D} Reconstruction},
+  author={Nikhil Keetha and Norman Müller and Johannes Schönberger and Lorenzo Porzi and Yuchen Zhang and Tobias Fischer and Arno Knapitsch and Duncan Zauss and Ethan Weber and Nelson Antunes and Jonathon Luiten and Manuel Lopez-Antequera and Samuel Rota Bulò and Christian Richardt and Deva Ramanan and Sebastian Scherer and Peter Kontschieder},
+  booktitle={arXiv},
+  year={2025}
+}
+```