Multi-View 3D Point Tracking Datasets
This repository hosts the training and evaluation datasets associated with the paper Multi-View 3D Point Tracking.
Project Page: https://ethz-vlg.github.io/mvtracker/
Code / GitHub Repository: https://github.com/ethz-vlg/mvtracker
Abstract
We introduce the first data-driven multi-view 3D point tracker, designed to track arbitrary points in dynamic scenes using multiple camera views. Unlike existing monocular trackers, which struggle with depth ambiguities and occlusion, or prior multi-camera methods that require over 20 cameras and tedious per-sequence optimization, our feed-forward model directly predicts 3D correspondences using a practical number of cameras (e.g., four), enabling robust and accurate online tracking. Given known camera poses and either sensor-based or estimated multi-view depth, our tracker fuses multi-view features into a unified point cloud and applies k-nearest-neighbors correlation alongside a transformer-based update to reliably estimate long-range 3D correspondences, even under occlusion. We train on 5K synthetic multi-view Kubric sequences and evaluate on two real-world benchmarks: Panoptic Studio and DexYCB, achieving median trajectory errors of 3.1 cm and 2.0 cm, respectively. Our method generalizes well to diverse camera setups of 1-8 views with varying vantage points and video lengths of 24-150 frames. By releasing our tracker alongside training and evaluation datasets, we aim to set a new standard for multi-view 3D tracking research and provide a practical tool for real-world applications.
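To make the pipeline sketched in the abstract more concrete, here is a minimal, self-contained illustration (not the MVTracker implementation itself) of the two ingredients it mentions: unprojecting per-view depth maps with known camera parameters into a fused world-space point cloud, and querying the k nearest neighbors of a 3D point in that cloud. The camera convention (3x3 pinhole intrinsics K, 4x4 world-to-camera matrix w2c) and all toy values are assumptions made for this example.
import torch

def unproject_depth(depth, K, w2c):
    # Lift a depth map (H, W) to world-space points (H*W, 3), assuming a
    # pinhole camera with 3x3 intrinsics K and a 4x4 world-to-camera matrix w2c.
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()
    rays = (torch.linalg.inv(K) @ pix.T).T          # pixel -> camera-frame rays
    cam = rays * depth.reshape(-1, 1)               # scale rays by depth
    c2w = torch.linalg.inv(w2c)
    return (c2w[:3, :3] @ cam.T).T + c2w[:3, 3]     # camera -> world frame

# Toy cameras: shared intrinsics, identity poses (real views would each differ).
K = torch.tensor([[50.0, 0.0, 32.0], [0.0, 50.0, 32.0], [0.0, 0.0, 1.0]])
depths = [torch.rand(64, 64) + 0.5 for _ in range(4)]
cloud = torch.cat([unproject_depth(d, K, torch.eye(4)) for d in depths], dim=0)

# kNN lookup around a 3D query point, a toy stand-in for the correlation step.
query = torch.tensor([[0.1, 0.2, 1.0]])
knn_idx = torch.cdist(query, cloud).topk(k=16, largest=False).indices
print(cloud.shape, knn_idx.shape)   # torch.Size([16384, 3]) torch.Size([1, 16])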
Dataset Details
To benchmark multi-view 3D point tracking, we provide preprocessed versions of three datasets:
- MV-Kubric: a synthetic training dataset adapted from single-view Kubric into a multi-view setting.
- Panoptic Studio: evaluation benchmark with real-world activities such as basketball, juggling, and toy play (10 sequences).
- DexYCB: evaluation benchmark with real-world hand–object interactions (10 sequences).
You can download and extract them as follows (~72 GB total after extraction):
# MV-Kubric (simulated + DUSt3R depths)
wget https://huggingface.co/datasets/ethz-vlg/mv3dpt-datasets/resolve/main/kubric-multiview--test.tar.gz -P datasets/
wget https://huggingface.co/datasets/ethz-vlg/mv3dpt-datasets/resolve/main/kubric-multiview--test--dust3r-depth.tar.gz -P datasets/
tar -xvzf datasets/kubric-multiview--test.tar.gz -C datasets/
tar -xvzf datasets/kubric-multiview--test--dust3r-depth.tar.gz -C datasets/
rm datasets/kubric-multiview*.tar.gz
# Panoptic Studio (optimization-based depth from Dynamic3DGS)
wget https://huggingface.co/datasets/ethz-vlg/mv3dpt-datasets/resolve/main/panoptic-multiview.tar.gz -P datasets/
tar -xvzf datasets/panoptic-multiview.tar.gz -C datasets/
rm datasets/panoptic-multiview.tar.gz
# DexYCB (Kinect + DUSt3R depths)
wget https://huggingface.co/datasets/ethz-vlg/mv3dpt-datasets/resolve/main/dex-ycb-multiview.tar.gz -P datasets/
wget https://huggingface.co/datasets/ethz-vlg/mv3dpt-datasets/resolve/main/dex-ycb-multiview--dust3r-depth.tar.gz -P datasets/
tar -xvzf datasets/dex-ycb-multiview.tar.gz -C datasets/
tar -xvzf datasets/dex-ycb-multiview--dust3r-depth.tar.gz -C datasets/
rm datasets/dex-ycb-multiview*.tar.gz
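As a quick sanity check after extraction, the following sketch (assuming the archives were extracted into datasets/ as above) lists each extracted folder with its file count and size:
from pathlib import Path

root = Path("datasets")
for sub in sorted(p for p in root.iterdir() if p.is_dir()):
    files = [f for f in sub.rglob("*") if f.is_file()]
    total_gb = sum(f.stat().st_size for f in files) / 1e9
    print(f"{sub.name}: {len(files)} files, {total_gb:.1f} GB")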
For licensing and usage terms, please refer to the original datasets from which these preprocessed versions are derived.
Sample Usage
This dataset repository contains the data for the MVTracker model. With minimal dependencies in place (as described in the GitHub repository), you can try MVTracker directly via PyTorch Hub:
import torch
import numpy as np
from huggingface_hub import hf_hub_download
device = "cuda" if torch.cuda.is_available() else "cpu"
mvtracker = torch.hub.load("ethz-vlg/mvtracker", "mvtracker", pretrained=True, device=device)
# Example input from demo sample (downloaded automatically)
sample = np.load(hf_hub_download("ethz-vlg/mvtracker", "data_sample.npz"))
rgbs = torch.from_numpy(sample["rgbs"]).float()
depths = torch.from_numpy(sample["depths"]).float()
intrs = torch.from_numpy(sample["intrs"]).float()
extrs = torch.from_numpy(sample["extrs"]).float()
query_points = torch.from_numpy(sample["query_points"]).float()
with torch.no_grad():
    results = mvtracker(
        rgbs=rgbs[None].to(device) / 255.0,
        depths=depths[None].to(device),
        intrs=intrs[None].to(device),
        extrs=extrs[None].to(device),
        query_points_3d=query_points[None].to(device),
    )
pred_tracks = results["traj_e"].cpu() # [T,N,3]
pred_vis = results["vis_e"].cpu() # [T,N]
print(pred_tracks.shape, pred_vis.shape)
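To inspect the output, one option is to project the predicted 3D tracks back into an input view with a standard pinhole model. The helper below is an illustrative sketch, not part of the repository's API; it assumes 3x3 intrinsics and a 3x4 world-to-camera matrix [R | t], and the toy usage stands in for the demo sample's actual intrs/extrs, whose exact shapes and conventions may differ.
import torch

def project_to_view(points_3d, K, w2c):
    # Project world-space points (N, 3) to pixel coordinates (N, 2), assuming
    # 3x3 pinhole intrinsics K and a 3x4 world-to-camera matrix w2c = [R | t].
    cam = points_3d @ w2c[:, :3].T + w2c[:, 3]        # world -> camera frame
    uvw = cam @ K.T                                   # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)   # perspective divide

# Toy usage with random points in front of an identity-pose camera.
K = torch.tensor([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
w2c = torch.eye(4)[:3]
points = torch.rand(100, 3) + torch.tensor([0.0, 0.0, 2.0])   # z in [2, 3]
uv = project_to_view(points, K, w2c)
print(uv.shape)   # torch.Size([100, 2])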
Citation
If you find our repository useful, please consider giving it a star ⭐ and citing our work:
@inproceedings{rajic2025mvtracker,
  title     = {Multi-View 3D Point Tracking},
  author    = {Raji{\v{c}}, Frano and Xu, Haofei and Mihajlovic, Marko and Li, Siyuan and Demir, Irem and G{\"u}ndo{\u{g}}du, Emircan and Ke, Lei and Prokudin, Sergey and Pollefeys, Marc and Tang, Siyu},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}