---
license: mit
---
# Fastmap evaluation suite


You only need the databases to run fastmap; download the images only if you want to produce a colored point cloud.
Download the subset of the data you want to your local directory:
```bash
huggingface-cli download whc/fastmap_sfm --repo-type dataset --local-dir ./ --include 'databases/tnt_*' 'ground_truths/tnt_*'
```
Or use the Python interface:
```python
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id="whc/fastmap_sfm", repo_type='dataset',
    local_dir="./",
    allow_patterns=["ground_truths/*",],
    max_workers=8
)
```
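
Note that the two snippets above select different subsets: the CLI example pulls the Tanks and Temples databases and ground truths, while the Python example pulls all ground truths. Adjust `--include` / `allow_patterns` to fetch the databases, ground truths, or images you need.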

## Images 
* MipNeRF360, Tanks and Temples, NeRF-OSR, DroneDeploy, ZipNeRF, and Mill-19 store all images for a scene in a single directory. All of these images are used for running SfM.
* For Eyeful Tower, the images of a scene are spread across multiple subdirectories. We extract the image paths from the provided `cameras.json`, which contains the GT poses for each image; the subdirectory structure is preserved (see the sketch after this list).
* Urbanscene3D also stores images in different subdirectories. We extract the image paths from the refined GT provided [here](https://github.com/cmusatyalab/mega-nerf?tab=readme-ov-file#urbanscene-3d) by [MegaNeRF](https://github.com/cmusatyalab/mega-nerf).
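
A minimal sketch of the path extraction for Eyeful Tower is below; the JSON key names (`frames`, `fname`) are placeholders for whatever `cameras.json` actually uses, not verified against the dataset.
```python
import json

# Illustrative sketch: collect per-image paths from a scene's cameras.json
# while preserving the subdirectory layout. The key names ("frames", "fname")
# are hypothetical; the real schema of cameras.json may differ.
with open("cameras.json") as f:
    meta = json.load(f)

# Paths are assumed to be relative to the scene root, e.g. "camera_00/000123.jpg",
# so keeping them as-is preserves the subdirectory structure.
image_paths = sorted(entry["fname"] for entry in meta["frames"])
```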

## Databases
Unless specified below, all database `.db` files are produced with:
```bash
colmap feature_extractor --ImageReader.single_camera 1 --ImageReader.camera_model SIMPLE_RADIAL --image_path {imgdir} --database_path {db_fname}
colmap exhaustive_matcher --database_path {db_fname}
```

* For NeRF-OSR (i.e. scenes with prefix `nosr_`), `dploy_house4`, Urbanscene3D (`urbn_`), and Eyeful Tower (`eft_`), we use `--ImageReader.single_camera 0`.
* For ETH3D MVS (`eth3d_dslr_`), we populate the databases with the author-provided ground-truth camera intrinsics, to keep the eval protocol consistent with GLOMAP; see [issue 96](https://github.com/colmap/glomap/issues/96) and the sketch below.
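
As a minimal sketch of what "populating the database" could look like, assuming the standard COLMAP SQLite schema (a `cameras` table with a float64 `params` blob) and a PINHOLE model; the camera id, model choice, and helper name are illustrative, not our actual script:
```python
import sqlite3
import numpy as np

PINHOLE = 1  # COLMAP's integer id for the PINHOLE model (params: fx, fy, cx, cy)

def set_gt_intrinsics(db_fname, camera_id, width, height, fx, fy, cx, cy):
    """Overwrite one camera row in a COLMAP database with known intrinsics."""
    params = np.array([fx, fy, cx, cy], dtype=np.float64)
    con = sqlite3.connect(db_fname)
    con.execute(
        "UPDATE cameras SET model=?, width=?, height=?, params=?, "
        "prior_focal_length=1 WHERE camera_id=?",
        (PINHOLE, width, height, params.tobytes(), camera_id),
    )
    con.commit()
    con.close()
```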


## Ground truths

We use two ground-truth formats. The first is COLMAP's `.bin` output.
Second, for datasets whose ground truths come in a non-standard format, we verify them and convert each to a white-box JSON file containing a list of `{fname: str, c2w: list[float]}` records.
The `c2w` matrix is in the OpenCV convention; a loading sketch is given below.
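
As an illustration, such a file might be consumed as follows. The reshape assumes `c2w` is a flattened, row-major camera-to-world matrix, and the filename is illustrative; both are assumptions on the reader's part.
```python
import json
import numpy as np

# Hedged sketch: load one white-box ground-truth file. We assume c2w is a
# flattened, row-major camera-to-world matrix (16 floats for 4x4, 12 for 3x4);
# the filename below is illustrative.
with open("ground_truths/tnt_courthouse.json") as f:
    records = json.load(f)

poses = {}
for rec in records:
    c2w = np.asarray(rec["c2w"], dtype=np.float64)
    poses[rec["fname"]] = c2w.reshape(4, 4) if c2w.size == 16 else c2w.reshape(3, 4)
```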

Our GT conversion script `convert_gt.py` is included in this repo as a reference.

## GLOMAP/COLMAP container
We provide a Singularity container, `glomap_250121.sif`, in this repo, and a Docker container [here](https://hub.docker.com/r/haochenw/glomap/tags).
The Dockerfile is reproduced in the Docker Hub overview.