Weight Space Representation Learning on Diverse NeRF Architectures (ICLR 2026)
This repository contains the datasets for the paper Weight Space Representation Learning on Diverse NeRF Architectures. The paper proposes a framework that can process NeRFs with diverse architectures (MLPs, tri-planes, and hash tables) by training a graph metanetwork to obtain an architecture-agnostic latent space.
NeRF weights
Main dataset structure:
.
└── nerf
└── shapenet
├── hash
│ └── class_id
│ └── nerf_id
│ ├── train
│ │ └── *.png # object views used to train the NeRF
│ ├── grid.pth # nerfacc-like occupancy grid parameters
│ ├── nerf_weights.pth # nerfacc-like NeRF parameters
│ └── transforms_train.json # camera poses
├── mlp
│ └── class_id
│ └── nerf_id
│ ├── train
│ │ └── *.png
│ ├── grid.pth
│ ├── nerf_weights.pth
│ └── transforms_train.json
├── triplane
│ └── class_id
│ └── nerf_id
│ ├── train
│ │ └── *.png
│ ├── grid.pth
│ ├── nerf_weights.pth
│ └── transforms_train.json
├── test.txt # test split
├── train.txt # training split
└── val.txt # validation split
Unseen architectures (nerf/shapenet/hash_unseen, nerf/shapenet/mlp_unseen, and nerf/shapenet/triplane_unseen) and Objaverse NeRFs (nerf/objaverse) have analogous directory structures.
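Assuming standard PyTorch serialization, one NeRF sample can be loaded along these lines (`load_nerf_sample` is an illustrative helper, not part of the dataset; the exact keys inside the checkpoints are not documented here):

```python
import json
import os

import torch

def load_nerf_sample(nerf_dir):
    """Load one NeRF from a directory such as
    nerf/shapenet/mlp/<class_id>/<nerf_id>.

    A minimal sketch: the checkpoints are nerfacc-like state dicts per
    the structure above, but their internal keys are an assumption.
    """
    weights = torch.load(os.path.join(nerf_dir, "nerf_weights.pth"), map_location="cpu")
    grid = torch.load(os.path.join(nerf_dir, "grid.pth"), map_location="cpu")
    with open(os.path.join(nerf_dir, "transforms_train.json")) as f:
        poses = json.load(f)  # camera poses for the object views in train/
    return weights, grid, poses
```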
NeRF graphs
Main dataset structure:
.
└── graph
└── shapenet
├── hash
│ ├── test
│ │ └── *.pt # torch_geometric-like graph data
│ ├── train
│ │ └── *.pt
│ └── val
│ └── *.pt
├── mlp
│ ├── test
│ │ └── *.pt
│ ├── train
│ │ └── *.pt
│ └── val
│ └── *.pt
└── triplane
├── test
│ └── *.pt
├── train
│ └── *.pt
└── val
└── *.pt
Unseen architectures (graph/shapenet/hash_unseen, graph/shapenet/mlp_unseen, and graph/shapenet/triplane_unseen) and Objaverse NeRFs (graph/objaverse) have analogous directory structures.
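Assuming the `*.pt` files deserialize with plain `torch.load`, a split can be iterated like this (`iter_graph_split` is a hypothetical helper; depending on your torch version, loading torch_geometric objects may require `weights_only=False`):

```python
import glob
import os

import torch

def iter_graph_split(split_dir):
    """Yield (filename, graph) pairs from a split directory such as
    graph/shapenet/mlp/train.

    A minimal sketch: the files hold torch_geometric-like graph data
    per the structure above; they are loaded here as generic torch
    objects so torch_geometric itself is not required.
    """
    for path in sorted(glob.glob(os.path.join(split_dir, "*.pt"))):
        yield os.path.basename(path), torch.load(path, map_location="cpu")
```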
NeRF embeddings
Main dataset structure:
.
└── emb
└── model
└── shapenet
├── hash
│ ├── test
│ │ └── *.h5
│ ├── train
│ │ └── *.h5
│ └── val
│ └── *.h5
├── mlp
│ ├── test
│ │ └── *.h5
│ ├── train
│ │ └── *.h5
│ └── val
│ └── *.h5
└── triplane
                ├── test
│ └── *.h5
├── train
│ └── *.h5
└── val
└── *.h5
where model is one of the following:
l_con, l_rec, l_rec_con
Unseen architectures (emb/model/shapenet/hash_unseen, emb/model/shapenet/mlp_unseen, and emb/model/shapenet/triplane_unseen) and Objaverse NeRFs (emb/model/objaverse) have analogous directory structures.
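Assuming the HDF5 files are readable with h5py (the dataset names inside each file are an assumption, not documented above), an embedding file can be inspected like so:

```python
import h5py

def read_embedding_file(h5_path):
    """Return every dataset stored in one *.h5 embedding file as a dict
    mapping the HDF5 dataset name to a NumPy array. A minimal sketch.
    """
    arrays = {}

    def _collect(name, obj):
        # visititems walks groups and datasets; keep only the datasets
        if isinstance(obj, h5py.Dataset):
            arrays[name] = obj[()]

    with h5py.File(h5_path, "r") as f:
        f.visititems(_collect)
    return arrays
```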
Language data
The language directory contains embeddings (i.e., those found in emb/l_rec_con/shapenet) paired with textual annotations from the ShapeNeRF-Text dataset. This directory structure matches the layout expected by the official LLaNA code, which can therefore be run without any additional preprocessing.
Cite us
If you find our work useful, please cite us:
@inproceedings{ballerini2026weight,
  title     = {Weight Space Representation Learning on Diverse {NeRF} Architectures},
  author    = {Ballerini, Francesco and Zama Ramirez, Pierluigi and Di Stefano, Luigi and Salti, Samuele},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  year      = {2026}
}