---
license: mit
---

# Dataset Card for DenSpineEM

## Volumetric Files

The dataset comprises dendrites from 3 brain samples: `seg_den` (also known as M50), `mouse` (M10), and `human` (H10).

Each sample (denoted `{species}` in the filenames below) has 3 volumetric `.h5` files:

- `{species}_raw.h5`: instance segmentation of entire dendrites in the volume (labelled 1-50 or 1-10), where trunks and spines share the same label
- `{species}_spine.h5`: "binary" segmentation, where trunks are labelled 0 and spines carry their raw dendrite label
- `{species}_seg.h5`: spine instance segmentation (labelled 51-... or 11-...), where every spine in the volume is labelled uniquely
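The relationship between the three volumes can be illustrated on a toy array (synthetic data, not the real volumes): `spine` is zero on trunk voxels and equals the `raw` dendrite label on spine voxels, while `seg` gives each spine its own id. Treating trunk voxels as 0 in `seg` is an assumption here.

```python
import numpy as np

# Toy 1-D "volume" standing in for the real 3-D arrays (synthetic, for illustration).
raw = np.array([1, 1, 1, 1, 2, 2, 2])      # whole-dendrite labels (trunk + spines)
seg = np.array([0, 51, 51, 0, 0, 52, 0])   # unique spine ids; 0 assumed on trunks
spine = np.where(seg > 0, raw, 0)          # spines keep their raw dendrite label

# Sanity checks implied by the file descriptions above.
assert np.array_equal(spine, np.array([0, 1, 1, 0, 0, 2, 0]))
assert np.all((spine == 0) | (spine == raw))
```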

## Point Cloud Files

In addition, we provide preprocessed point clouds, sampled along each dendrite's centerline skeleton, for ease of use when evaluating point-cloud-based methods.

```python
import numpy as np

data = np.load(f"{species}_1000000_10000/{idx}.npz", allow_pickle=True)
trunk_id, pc, trunk_pc, label = data["trunk_id"], data["pc"], data["trunk_pc"], data["label"]
```
- `trunk_id` is an integer corresponding to the dendrite's raw label
- `pc` is a `[1000000, 3]` isotropic point cloud
- `trunk_pc` is an ordered `[skeleton_length, 3]` array representing the centerline of the trunk of `pc`
- `label` is a `[1000000]` array whose values correspond to the seg labels of each point in the point cloud
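For example, `label` can be used to pull out each spine's points individually (a minimal sketch on synthetic arrays; the toy label range is a stand-in for the real seg ids such as 51 and up):

```python
import numpy as np

rng = np.random.default_rng(0)
pc = rng.normal(size=(1000, 3))           # stand-in for the [1000000, 3] point cloud
label = rng.integers(51, 56, size=1000)   # stand-in for the per-point seg labels

# Group points by spine id.
spines = {int(l): pc[label == l] for l in np.unique(label)}

# Every point is assigned to exactly one spine.
assert sum(len(p) for p in spines.values()) == len(pc)
```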

We provide a comprehensive example of how to instantiate a PyTorch dataloader on our dataset in `dataloader.py` (optionally using the FFD transform with `frenet=True`).
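The contents of `dataloader.py` are not reproduced here, but a map-style dataset over the `.npz` files can be sketched as follows. The class below is hypothetical (field names taken from above); since PyTorch map-style datasets only require `__len__` and `__getitem__`, it plugs directly into `torch.utils.data.DataLoader`.

```python
import glob
import os
import numpy as np


class DenSpinePointClouds:
    """Hypothetical map-style dataset over the preprocessed .npz point clouds."""

    def __init__(self, root):
        # e.g. root = "seg_den_1000000_10000"
        self.files = sorted(glob.glob(os.path.join(root, "*.npz")))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        data = np.load(self.files[idx], allow_pickle=True)
        return data["trunk_id"], data["pc"], data["trunk_pc"], data["label"]
```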

## Training splits for seg_den

The folds used for training/evaluating on the seg_den dataset, defined over raw labels, are as follows:

```python
seg_den_folds = [
    [3, 5, 11, 12, 23, 28, 29, 32, 39, 42],
    [8, 15, 19, 27, 30, 34, 35, 36, 46, 49],
    [9, 14, 16, 17, 21, 26, 31, 33, 43, 44],
    [2, 6, 7, 13, 18, 24, 25, 38, 41, 50],
    [1, 4, 10, 20, 22, 37, 40, 45, 47, 48],
]
```
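These folds can be turned into cross-validation splits in the usual way, holding one fold out for evaluation and training on the rest (a small sketch; the fold lists are copied verbatim from above, the `split` helper is ours):

```python
seg_den_folds = [
    [3, 5, 11, 12, 23, 28, 29, 32, 39, 42],
    [8, 15, 19, 27, 30, 34, 35, 36, 46, 49],
    [9, 14, 16, 17, 21, 26, 31, 33, 43, 44],
    [2, 6, 7, 13, 18, 24, 25, 38, 41, 50],
    [1, 4, 10, 20, 22, 37, 40, 45, 47, 48],
]


def split(folds, val_fold):
    """Return (train_labels, val_labels) for one cross-validation round."""
    val = folds[val_fold]
    train = [x for i, fold in enumerate(folds) if i != val_fold for x in fold]
    return train, val


train, val = split(seg_den_folds, val_fold=0)
assert sorted(train + val) == list(range(1, 51))  # the folds partition labels 1-50
```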