---
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- image-classification
- object-detection
- image-to-text
tags:
- computer-vision
- photography
- annotations
- EXIF
- scene-understanding
- multimodal
dataset_info:
  features:
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: image_title
    dtype: string
  - name: image_description
    dtype: string
  - name: scene_description
    dtype: string
  - name: all_labels
    sequence: string
  - name: segmented_objects
    sequence: string
  - name: segmentation_masks
    sequence:
      sequence: float64
  - name: exif_make
    dtype: string
  - name: exif_model
    dtype: string
  - name: exif_f_number
    dtype: string
  - name: exif_exposure_time
    dtype: string
  - name: exif_exposure_mode
    dtype: string
  - name: exif_exposure_program
    dtype: string
  - name: exif_metering_mode
    dtype: string
  - name: exif_lens
    dtype: string
  - name: exif_focal_length
    dtype: string
  - name: exif_iso
    dtype: string
  - name: exif_date_original
    dtype: string
  - name: exif_software
    dtype: string
  - name: exif_orientation
    dtype: string
  splits:
  - name: train
    num_bytes: 3715850996.79
    num_examples: 7010
  - name: validation
    num_bytes: 408185964.0
    num_examples: 762
  download_size: 4134168610
  dataset_size: 4124036960.79
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

# DataSeeds.AI Sample Dataset (DSD)

![DSD Example](./GSD-example.jpeg)

## Dataset Summary

The DataSeeds.AI Sample Dataset (DSD) is a high-fidelity, human-curated, computer-vision-ready dataset comprising 7,772 peer-ranked, fully annotated photographic images, more than 350,000 words of descriptive text, and comprehensive metadata. While the DSD is released under an open source license, a sister dataset of over 10,000 fully annotated and segmented images is available for immediate commercial licensing, and the broader GuruShots ecosystem contains over 100 million images in its catalog.

Each image includes multi-tier human annotations and semantic segmentation masks. Generously contributed to the community by the GuruShots photography platform, where users engage in themed competitions, the DSD uniquely captures aesthetic preference signals and high-quality technical metadata (EXIF) across an expansive diversity of photographic styles, camera types, and subject matter. The dataset is optimized for fine-tuning and evaluating multimodal vision-language models, especially in scene description and stylistic comprehension tasks.

* **Technical Report** - [Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery](https://huggingface.co/papers/2506.05673)
* **Github Repo** - Access the complete weights and code used to evaluate the DSD: [https://github.com/DataSeeds-ai/DSD-finetune-blip-llava](https://github.com/DataSeeds-ai/DSD-finetune-blip-llava)

This dataset is ready for commercial/non-commercial use.

## Dataset Structure

* **Size**: 7,772 images (7,010 train, 762 validation)
* **Format**: Apache Parquet files for metadata, with images in JPG format
* **Total Size**: ~4.1GB
* **Languages**: English (annotations)
* **Annotation Quality**: All annotations were verified through a multi-tier human-in-the-loop process

### Data Fields

| Column Name | Description | Data Type |
|-------------|-------------|-----------|
| `image_id` | Unique identifier for the image | string |
| `image` | Image file, PIL type | image |
| `image_title` | Human-written title summarizing the content or subject | string |
| `image_description` | Human-written narrative describing what is visibly present | string |
| `scene_description` | Technical and compositional details about image capture | string |
| `all_labels` | All object categories identified in the image | list of strings |
| `segmented_objects` | Objects/elements that have segmentation masks | list of strings |
| `segmentation_masks` | Segmentation polygons as coordinate points [x,y,...] | list of lists of floats |
| `exif_make` | Camera manufacturer | string |
| `exif_model` | Camera model | string |
| `exif_f_number` | Aperture value (lower = wider aperture) | string |
| `exif_exposure_time` | Sensor exposure time (e.g., 1/500 sec) | string |
| `exif_exposure_mode` | Camera exposure setting (Auto/Manual/etc.) | string |
| `exif_exposure_program` | Exposure program mode | string |
| `exif_metering_mode` | Light metering mode | string |
| `exif_lens` | Lens information and specifications | string |
| `exif_focal_length` | Lens focal length (millimeters) | string |
| `exif_iso` | Camera sensor sensitivity to light | string |
| `exif_date_original` | Original timestamp when image was taken | string |
| `exif_software` | Post-processing software used | string |
| `exif_orientation` | Image layout (horizontal/vertical) | string |

## How to Use

### Basic Loading

```python
from datasets import load_dataset

# Load the training split of the dataset
dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")

# Access the first sample
sample = dataset[0]

# Extract the different features from the sample
image = sample["image"]  # The PIL Image object
title = sample["image_title"]
description = sample["image_description"]
segments = sample["segmented_objects"]
masks = sample["segmentation_masks"] # Polygon coordinates, one list per segmented object

print(f"Title: {title}")
print(f"Description: {description}")
print(f"Segmented objects: {segments}")
```
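The `segmentation_masks` column stores each mask as a flat list of polygon coordinates `[x, y, ...]` rather than a rasterized image. A minimal sketch of rendering one such polygon into a binary mask with Pillow (the triangle coordinates below are made up for illustration):

```python
from PIL import Image, ImageDraw

def polygon_to_mask(polygon, width, height):
    """Rasterize a flat [x1, y1, x2, y2, ...] polygon into a binary PIL mask."""
    mask = Image.new("L", (width, height), 0)
    # ImageDraw.polygon accepts a flat coordinate sequence directly
    ImageDraw.Draw(mask).polygon(polygon, outline=1, fill=1)
    return mask

# Hypothetical triangle polygon on a 100x100 canvas
mask = polygon_to_mask([10.0, 10.0, 90.0, 10.0, 50.0, 80.0], 100, 100)
print(mask.size)  # (100, 100)
```

In practice you would call `polygon_to_mask(poly, *sample["image"].size)` for each entry in `sample["segmentation_masks"]`.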

### PyTorch DataLoader

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# Load dataset
dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")

# Images vary in size and each image has a different number of masks, so
# batch these columns as Python lists instead of letting the default
# collate function try (and fail) to stack them into tensors
def collate_fn(batch):
    return {
        "image": [item["image"] for item in batch],
        "image_title": [item["image_title"] for item in batch],
        "segmentation_masks": [item["segmentation_masks"] for item in batch],
    }

# Create DataLoader
dataloader = DataLoader(dataset, batch_size=16, shuffle=True, collate_fn=collate_fn)
```

### TensorFlow

```python
import numpy as np
import tensorflow as tf
from datasets import load_dataset

TARGET_IMG_SIZE = (224, 224)
BATCH_SIZE = 16
dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")

def hf_dataset_generator():
    for example in dataset:
        # Convert the PIL image to an RGB uint8 array so it matches
        # the (None, None, 3) output_signature below
        image = np.array(example['image'].convert('RGB'))
        yield image, example['image_title']

def preprocess(image, title):
    # Resize the image to a fixed size
    image = tf.image.resize(image, TARGET_IMG_SIZE)
    image = tf.cast(image, tf.uint8)
    return image, title

# The output_signature defines the data types and shapes
tf_dataset = tf.data.Dataset.from_generator(
    hf_dataset_generator,
    output_signature=(
        tf.TensorSpec(shape=(None, None, 3), dtype=tf.uint8),
        tf.TensorSpec(shape=(), dtype=tf.string),
    )
)

# Apply the preprocessing, shuffle, and batch
tf_dataset = (
    tf_dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(buffer_size=100)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)

print("Dataset is ready.")
for images, titles in tf_dataset.take(1):
    print("Image batch shape:", images.shape)
    print("A title from the batch:", titles.numpy()[0].decode('utf-8'))
```

## Dataset Characterization

**Data Collection Method**: Manual curation from GuruShots photography platform

**Labeling Method**: Human annotators with multi-tier verification process

## Benchmark Results

To validate the impact of data quality, we fine-tuned two state-of-the-art vision-language models—**LLaVA-NEXT** and **BLIP2**—on the DSD scene description task. We observed consistent and measurable improvements over base models:

### LLaVA-NEXT Results

| Model | BLEU-4 | ROUGE-L | BERTScore F1 | CLIPScore |
|-------|--------|---------|--------------|-----------|
| Base | 0.0199 | 0.2089 | 0.2751 | 0.3247 |
| Fine-tuned | 0.0246 | 0.2140 | 0.2789 | 0.3260 |
| **Relative Improvement** | **+24.09%** | **+2.44%** | **+1.40%** | **+0.41%** |

### BLIP2 Results

| Model | BLEU-4 | ROUGE-L | BERTScore F1 | CLIPScore |
|-------|--------|---------|--------------|-----------|
| Base | 0.001 | 0.126 | 0.0545 | 0.2854 |
| Fine-tuned | 0.047 | 0.242 | -0.0537 | 0.2583 |
| **Relative Improvement** | **+4600%** | **+92.06%** | **-198.53%** | **-9.49%** |

These improvements demonstrate the dataset's value in improving scene understanding and textual grounding of visual features, especially in fine-grained photographic tasks.

## Use Cases

The DSD is perfect for fine-tuning multimodal models for:

* **Image captioning** - Rich human-written descriptions
* **Scene description** - Technical photography analysis
* **Semantic segmentation** - Pixel-level object understanding
* **Aesthetic evaluation** - Style classification based on peer rankings
* **EXIF-aware analysis** - Technical metadata integration
* **Multimodal training** - Vision-language model development

## Commercial Dataset Access & On-Demand Licensing

While the DSD is being released under an open source license, it represents only a small fraction of the broader commercial capabilities of the GuruShots ecosystem.

DataSeeds.AI operates a live, ongoing photography catalog that has amassed over 100 million images, sourced from both amateur and professional photographers participating in thousands of themed challenges across diverse geographic and stylistic contexts. Unlike most public datasets, this corpus is:

* Fully licensed for downstream use in AI training
* Backed by structured consent frameworks and traceable rights, with active opt-in from creators
* Rich in EXIF metadata, including camera model, lens type, and occasionally location data
* Curated through a built-in human preference signal based on competitive ranking, yielding rare insight into subjective aesthetic quality

### On-Demand Dataset Creation

Uniquely, DataSeeds.AI can source new image datasets to spec via a just-in-time, first-party data acquisition engine. Clients (e.g., AI labs, model developers, media companies) can request:

* Specific content themes (e.g., "urban decay at dusk," "elderly people with dogs in snowy environments")
* Defined technical attributes (camera type, exposure time, geographic constraints)
* Ethical/region-specific filtering (e.g., GDPR-compliant imagery, no identifiable faces, kosher food imagery)
* Matching segmentation masks, EXIF metadata, and tiered annotations

Within days, the DataSeeds.AI platform can launch curated challenges to its global network of contributors and deliver targeted datasets with commercial-grade licensing terms.

### Sales Inquiries

To inquire about licensing or customized dataset sourcing, contact:
**[[email protected]](mailto:[email protected])**

## License & Citation

**License**: Apache 2.0

**For commercial licenses, annotation, or access to the full 100M+ image catalog with on-demand annotations**: [[email protected]](mailto:[email protected])

### Citation

If you find the data useful, please cite:

```bibtex
@article{abdoli2025peerranked,
    title={Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from GuruShots' Annotated Imagery}, 
    author={Sajjad Abdoli and Freeman Lewin and Gediminas Vasiliauskas and Fabian Schonholz},
    journal={arXiv preprint arXiv:2506.05673},
    year={2025},
}
```