
Weight Space Representation Learning on Diverse NeRF Architectures (ICLR 2026)



This repository contains the datasets for the paper Weight Space Representation Learning on Diverse NeRF Architectures. The paper proposes a framework that is capable of processing NeRFs with diverse architectures (MLPs, tri-planes, and hash tables) by training a graph metanetwork to obtain an architecture-agnostic latent space.

NeRF weights

Main dataset structure:

.
└── nerf
    └── shapenet
        ├── hash
        │   └── class_id
        │       └── nerf_id
        │           ├── train
        │           │   └── *.png              # object views used to train the NeRF
        │           ├── grid.pth               # nerfacc-like occupancy grid parameters
        │           ├── nerf_weights.pth       # nerfacc-like NeRF parameters
        │           └── transforms_train.json  # camera poses
        ├── mlp
        │   └── class_id
        │       └── nerf_id
        │           ├── train
        │           │   └── *.png                 
        │           ├── grid.pth                
        │           ├── nerf_weights.pth        
        │           └── transforms_train.json
        ├── triplane
        │   └── class_id
        │       └── nerf_id
        │           ├── train
        │           │   └── *.png                 
        │           ├── grid.pth                
        │           ├── nerf_weights.pth        
        │           └── transforms_train.json
        ├── test.txt                           # test split
        ├── train.txt                          # training split
        └── val.txt                            # validation split

Unseen architectures (nerf/shapenet/hash_unseen, nerf/shapenet/mlp_unseen, and nerf/shapenet/triplane_unseen) and Objaverse NeRFs (nerf/objaverse) have analogous directory structures.
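The `.pth` checkpoints are standard PyTorch files and the camera poses are plain JSON. The following is a minimal round-trip sketch of reading these file types; the parameter names and JSON fields below are placeholders for illustration, not the actual keys stored in `nerf_weights.pth`, `grid.pth`, or `transforms_train.json`.

```python
import json
import os
import tempfile

import torch

tmp = tempfile.mkdtemp()

# nerf_weights.pth / grid.pth: loadable with torch.load.
# "mlp.0.weight" is an illustrative key, not the dataset's real schema.
weights_path = os.path.join(tmp, "nerf_weights.pth")
torch.save({"mlp.0.weight": torch.zeros(64, 3)}, weights_path)
weights = torch.load(weights_path, map_location="cpu")

# transforms_train.json: camera poses in NeRF-synthetic-style JSON.
poses_path = os.path.join(tmp, "transforms_train.json")
with open(poses_path, "w") as f:
    json.dump({"frames": []}, f)  # real files list one pose per view
with open(poses_path) as f:
    poses = json.load(f)

print(list(weights), list(poses))
```

Passing `map_location="cpu"` lets the checkpoints be inspected on machines without a GPU.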

NeRF graphs

Main dataset structure:

.
└── graph
    └── shapenet
        ├── hash
        │   ├── test
        │   │   └── *.pt  # torch_geometric-like graph data
        │   ├── train
        │   │   └── *.pt
        │   └── val
        │       └── *.pt
        ├── mlp
        │   ├── test
        │   │   └── *.pt
        │   ├── train
        │   │   └── *.pt
        │   └── val
        │       └── *.pt
        └── triplane
            ├── test
            │   └── *.pt
            ├── train
            │   └── *.pt
            └── val
                └── *.pt

Unseen architectures (graph/shapenet/hash_unseen, graph/shapenet/mlp_unseen, and graph/shapenet/triplane_unseen) and Objaverse NeRFs (graph/objaverse) have analogous directory structures.
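Each `*.pt` file holds torch_geometric-like graph data, which can be read back with `torch.load`. The sketch below mimics that layout with a plain dict of tensors; the field names (`x`, `edge_index`) follow torch_geometric conventions but are assumptions about the files' contents, not a documented schema.

```python
import os
import tempfile

import torch

# Illustrative graph: node features plus a COO-format edge index,
# the shape torch_geometric's Data objects use internally.
graph = {
    "x": torch.randn(5, 16),                  # per-node features
    "edge_index": torch.tensor([[0, 1, 2],    # source nodes
                                [1, 2, 3]]),  # target nodes
}

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "example.pt")
torch.save(graph, path)

g = torch.load(path, map_location="cpu")
print(g["x"].shape, g["edge_index"].shape)
```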

NeRF embeddings

Main dataset structure:

.
└── emb
    └── model
        └── shapenet
            ├── hash
            │   ├── test
            │   │   └── *.h5
            │   ├── train
            │   │   └── *.h5
            │   └── val
            │       └── *.h5
            ├── mlp
            │   ├── test
            │   │   └── *.h5
            │   ├── train
            │   │   └── *.h5
            │   └── val
            │       └── *.h5
            └── triplane
                ├── test

                │   └── *.h5
                ├── train
                │   └── *.h5
                └── val
                    └── *.h5

where models are:

  • l_con, aka $\mathcal{L}_\text{C}$
  • l_rec, aka $\mathcal{L}_\text{R}$
  • l_rec_con, aka $\mathcal{L}_\text{R+C}$

Unseen architectures (emb/model/shapenet/hash_unseen, emb/model/shapenet/mlp_unseen, and emb/model/shapenet/triplane_unseen) and Objaverse NeRFs (emb/model/objaverse) have analogous directory structures.
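The `*.h5` embedding files can be read with `h5py`. The round-trip sketch below shows the access pattern; the dataset key `"embedding"` and the latent dimensionality are assumptions for illustration, not the files' documented layout.

```python
import os
import tempfile

import h5py
import numpy as np

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "example.h5")

# Write an illustrative per-NeRF latent vector under an assumed key.
with h5py.File(path, "w") as f:
    f.create_dataset("embedding", data=np.zeros(1024, dtype=np.float32))

# Read it back; [:] materializes the HDF5 dataset as a NumPy array.
with h5py.File(path, "r") as f:
    emb = f["embedding"][:]

print(emb.shape, emb.dtype)
```

To discover the actual keys in a real file, `list(f.keys())` on an open `h5py.File` prints its top-level datasets.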

Language data

The language directory contains $\mathcal{L}_\text{R+C}$ embeddings (i.e. those found in emb/l_rec_con/shapenet) paired with textual annotations from the ShapeNeRF-Text dataset. This directory structure allows running the official LLaNA code without any additional preprocessing.

Cite us

If you find our work useful, please cite us:

@inproceedings{ballerini2026weight,
  title = {Weight Space Representation Learning on Diverse {NeRF} Architectures},
  author = {Ballerini, Francesco and Zama Ramirez, Pierluigi and Di Stefano, Luigi and Salti, Samuele},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  year = {2026}
}