# IGVC Segmentation Dataset
A dataset for training semantic image segmentation models for the Intelligent Ground Vehicle Competition (IGVC).
## Composition
Each instance consists of a reference image from the robot's point of view together with the corresponding obstacle (e.g. construction drums, buckets) and lane segmentation masks (`obstacle_mask` and `lane_mask`).
### Train
256 frames rendered in 4 different lighting environments using Blender (4 × 256 = 1024 images).
### Test
10 frames captured from the SCR 2023 IGVC run (manually segmented) + 13 frames rendered in 4 lighting environments (10 + 4 × 13 = 62 images).
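For a first look at the raw columns, the dataset can be pulled directly from the Hub with the `datasets` library. This is a minimal sketch: the repository path and column names are taken from the adapter code in the Usage section, and the image columns are assumed to decode to PIL images as is standard for `datasets` image features.

```python
from datasets import load_dataset

# Pull both splits from the Hugging Face Hub.
ds = load_dataset("Nico0302/IGVC-Segmentation")
print(ds)  # expected columns: image, obstacle_mask, lane_mask

sample = ds["train"][0]
sample["image"].show()          # camera frame from the robot's point of view
sample["obstacle_mask"].show()  # obstacle segmentation mask
sample["lane_mask"].show()      # lane segmentation mask
```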
## Usage
For usage with PyTorch, it is recommended to wrap the dataset in a `Dataset` adapter class and generate a training/validation split:
```python
from torch.utils.data import Dataset
from datasets import load_dataset, Dataset as HFDataset
import numpy as np


class Split:
    TRAIN = "train"
    VALID = "valid"
    TEST = "test"


class SegmentationDataset(Dataset):
    def __init__(self, path="Nico0302/IGVC-Segmentation", split=Split.TRAIN, transform=None,
                 mask_name="obstacle_mask", valid_size=0.125):
        self.path = path
        self.split = split
        self.transform = transform
        self.mask_name = mask_name  # "obstacle_mask" or "lane_mask"
        self.valid_size = valid_size
        self.data = self._read_split()

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]
        sample = dict(image=np.array(item["image"]), mask=np.array(item[self.mask_name]))
        if self.transform is not None:
            sample = self.transform(**sample)
        return {
            "image": np.transpose(sample["image"], (2, 0, 1)),  # HWC to CHW (3, H, W)
            "mask": np.expand_dims(sample["mask"].astype(np.float32) / 255.0, 0),  # HW to CHW (1, H, W)
        }

    def _read_split(self):
        dataset = load_dataset(self.path, split="test" if self.split == Split.TEST else "train")
        assert isinstance(dataset, HFDataset), "Dataset must be a Hugging Face Dataset"
        if self.split == Split.TEST:
            return dataset
        # Carve a fixed validation subset out of the train split.
        splits = dataset.train_test_split(test_size=self.valid_size, seed=42)
        if self.split == Split.VALID:
            return splits["test"]
        return splits["train"]
```
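The `transform` argument accepts any callable that takes `image=` and `mask=` keyword arguments and returns a dict with the same keys; Albumentations pipelines match this interface. The augmentation choices below are only illustrative and not prescribed by the dataset:

```python
import albumentations as A

# Illustrative augmentation pipeline; transform(image=..., mask=...) returns a dict
# with the same keys, which is what the adapter's __getitem__ expects.
train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

train_dataset = SegmentationDataset(split=Split.TRAIN, transform=train_transform)
```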
Using this adapter, the datasets can simply be passed to a `DataLoader`:
```python
from torch.utils.data import DataLoader

train_dataset = SegmentationDataset(split=Split.TRAIN)
valid_dataset = SegmentationDataset(split=Split.VALID)
test_dataset = SegmentationDataset(split=Split.TEST)

train_dataloader = DataLoader(train_dataset)
valid_dataloader = DataLoader(valid_dataset)
test_dataloader = DataLoader(test_dataset)
```
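The adapter's `__getitem__` already returns channel-first arrays, so batches come out ready for a segmentation model. A quick shape check, assuming all frames share one resolution so the default collate can stack them:

```python
# Sanity check of the tensor layout produced by the adapter.
batch = next(iter(train_dataloader))
print(batch["image"].shape)  # (batch, 3, H, W), uint8 images (add normalization in the transform if needed)
print(batch["mask"].shape)   # (batch, 1, H, W), float32 masks in [0, 1]
```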
## Acknowledgements
Thank you to Sooner Competitive Robotics for allowing me to use frames from their IGVC 2023 run video as part of the test set.
## Citation
If you use this dataset, please cite:
```bibtex
@misc{gres2025IGVC,
  author    = { Nicolas Gres },
  title     = { IGVC Segmentation Dataset },
  year      = 2025,
  url       = { https://huggingface.co/datasets/Nico0302/IGVC-Segmentation },
  publisher = { Hugging Face }
}
```