---
language:
  - en
license: apache-2.0
size_categories:
  - 1K<n<10K
task_categories:
  - image-classification
  - object-detection
  - image-to-text
tags:
  - computer-vision
  - photography
  - annotations
  - EXIF
  - scene-understanding
  - multimodal
dataset_info:
  features:
    - name: image_id
      dtype: string
    - name: image
      dtype: image
    - name: image_title
      dtype: string
    - name: image_description
      dtype: string
    - name: scene_description
      dtype: string
    - name: all_labels
      sequence: string
    - name: segmented_objects
      sequence: string
    - name: segmentation_masks
      sequence:
        sequence: float64
    - name: exif_make
      dtype: string
    - name: exif_model
      dtype: string
    - name: exif_f_number
      dtype: string
    - name: exif_exposure_time
      dtype: string
    - name: exif_exposure_mode
      dtype: string
    - name: exif_exposure_program
      dtype: string
    - name: exif_metering_mode
      dtype: string
    - name: exif_lens
      dtype: string
    - name: exif_focal_length
      dtype: string
    - name: exif_iso
      dtype: string
    - name: exif_date_original
      dtype: string
    - name: exif_software
      dtype: string
    - name: exif_orientation
      dtype: string
  splits:
    - name: train
      num_bytes: 3715850996.79
      num_examples: 7010
    - name: validation
      num_bytes: 408185964
      num_examples: 762
  download_size: 4134168610
  dataset_size: 4124036960.79
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---

DataSeeds.AI Sample Dataset (DSD)

DSD Example

Dataset Summary

The DataSeeds.AI Sample Dataset (DSD) is a high-fidelity, human-curated, computer-vision-ready dataset comprising 7,772 peer-ranked, fully annotated photographic images, more than 350,000 words of descriptive text, and comprehensive metadata. While the DSD is released under an open-source license, a sister dataset of over 10,000 fully annotated and segmented images is available for immediate commercial licensing, and the broader GuruShots ecosystem contains over 100 million images in its catalog.

Each image includes multi-tier human annotations and semantic segmentation masks. Generously contributed to the community by the GuruShots photography platform, where users engage in themed competitions, the DSD uniquely captures aesthetic preference signals and high-quality technical metadata (EXIF) across an expansive diversity of photographic styles, camera types, and subject matter. The dataset is optimized for fine-tuning and evaluating multimodal vision-language models, especially in scene description and stylistic comprehension tasks.

This dataset is ready for commercial/non-commercial use.

Dataset Structure

  • Size: 7,772 images (7,010 train, 762 validation)
  • Format: Apache Parquet files for metadata, with images in JPG format
  • Total Size: ~4.1GB
  • Languages: English (annotations)
  • Annotation Quality: All annotations were verified through a multi-tier human-in-the-loop process
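
A quick way to confirm the splits and sizes locally (a minimal sketch; the full download is roughly 4 GB):

from datasets import load_dataset

# Loading without a split argument returns a DatasetDict with both splits
dsd = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD")
for split_name, split in dsd.items():
    print(f"{split_name}: {len(split)} examples")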

Data Fields

| Column Name | Description | Data Type |
|---|---|---|
| image_id | Unique identifier for the image | string |
| image | Image file, PIL type | image |
| image_title | Human-written title summarizing the content or subject | string |
| image_description | Human-written narrative describing what is visibly present | string |
| scene_description | Technical and compositional details about image capture | string |
| all_labels | All object categories identified in the image | list of strings |
| segmented_objects | Objects/elements that have segmentation masks | list of strings |
| segmentation_masks | Segmentation polygons as coordinate points [x, y, ...] | list of lists of floats |
| exif_make | Camera manufacturer | string |
| exif_model | Camera model | string |
| exif_f_number | Aperture value (lower = wider aperture) | string |
| exif_exposure_time | Sensor exposure time (e.g., 1/500 sec) | string |
| exif_exposure_mode | Camera exposure setting (Auto/Manual/etc.) | string |
| exif_exposure_program | Exposure program mode | string |
| exif_metering_mode | Light metering mode | string |
| exif_lens | Lens information and specifications | string |
| exif_focal_length | Lens focal length (millimeters) | string |
| exif_iso | Camera sensor sensitivity to light | string |
| exif_date_original | Original timestamp when the image was taken | string |
| exif_software | Post-processing software used | string |
| exif_orientation | Image layout (horizontal/vertical) | string |
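
The segmentation_masks column stores each object's polygon as a flat list of alternating x and y coordinates. The sketch below rasterizes one polygon into a binary mask with Pillow; it assumes the flat [x1, y1, x2, y2, ...] layout described above with absolute pixel coordinates (if the values turn out to be normalized, scale them by the image size first).

from datasets import load_dataset
from PIL import Image, ImageDraw

dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")
sample = dataset[0]

image = sample["image"]
polygons = sample["segmentation_masks"]  # one flat [x1, y1, x2, y2, ...] list per object

# Rasterize the first polygon into a binary (0/255) mask the size of the image
mask = Image.new("L", image.size, 0)
if polygons:
    points = list(zip(polygons[0][0::2], polygons[0][1::2]))  # pair up (x, y) coordinates
    ImageDraw.Draw(mask).polygon(points, outline=255, fill=255)

print(f"Objects with masks: {sample['segmented_objects']}")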

How to Use

Basic Loading

from datasets import load_dataset

# Load the training split of the dataset
dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")

# Access the first sample
sample = dataset[0]

# Extract the different features from the sample
image = sample["image"]  # The PIL Image object
title = sample["image_title"]
description = sample["image_description"]
segments = sample["segmented_objects"]
masks = sample["segmentation_masks"]  # Polygon coordinates as lists of floats [x, y, ...]

print(f"Title: {title}")
print(f"Description: {description}")
print(f"Segmented objects: {segments}")

PyTorch DataLoader

from datasets import load_dataset
from torch.utils.data import DataLoader
import torch

# Load dataset
dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")

# Convert to PyTorch format (PIL images become uint8 tensors; titles stay strings)
dataset.set_format(type="torch", columns=["image", "image_title", "segmentation_masks"])

def collate_fn(batch):
    # Images differ in size and the polygon lists are ragged, so the default
    # collation would fail; regroup each batch into a dict of lists instead
    return {key: [example[key] for example in batch] for key in batch[0]}

# Create DataLoader with the custom collation
dataloader = DataLoader(dataset, batch_size=16, shuffle=True, collate_fn=collate_fn)
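
A quick sanity check of the loader, pulling a single batch (each field comes back as a Python list of length batch_size because of the custom collate function above):

# Pull one batch and inspect it
batch = next(iter(dataloader))
print(f"Images in batch: {len(batch['image'])}")
print(f"First title: {batch['image_title'][0]}")
print(f"Polygon sets for first image: {len(batch['segmentation_masks'][0])}")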

TensorFlow

import numpy as np
import tensorflow as tf
from datasets import load_dataset

TARGET_IMG_SIZE = (224, 224)
BATCH_SIZE = 16
dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")

def hf_dataset_generator():
    for example in dataset:
        # Convert the PIL image to an RGB numpy array so TensorFlow can ingest it
        yield np.array(example["image"].convert("RGB")), example["image_title"]

def preprocess(image, title):
    # Resize the image to a fixed size
    image = tf.image.resize(image, TARGET_IMG_SIZE)
    image = tf.cast(image, tf.uint8)
    return image, title

# The output_signature defines the data types and shapes
tf_dataset = tf.data.Dataset.from_generator(
    hf_dataset_generator,
    output_signature=(
        tf.TensorSpec(shape=(None, None, 3), dtype=tf.uint8),
        tf.TensorSpec(shape=(), dtype=tf.string),
    )
)

# Apply the preprocessing, shuffle, and batch
tf_dataset = (
    tf_dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(buffer_size=100)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)

print("Dataset is ready.")
for images, titles in tf_dataset.take(1):
    print("Image batch shape:", images.shape)
    print("A title from the batch:", titles.numpy()[0].decode('utf-8'))

Dataset Characterization

Data Collection Method: Manual curation from GuruShots photography platform

Labeling Method: Human annotators with multi-tier verification process

Benchmark Results

To validate the impact of data quality, we fine-tuned two state-of-the-art vision-language models, LLaVA-NEXT and BLIP2, on the DSD scene-description task and observed measurable improvements over the base models on most metrics:

LLaVA-NEXT Results

| Model | BLEU-4 | ROUGE-L | BERTScore F1 | CLIPScore |
|---|---|---|---|---|
| Base | 0.0199 | 0.2089 | 0.2751 | 0.3247 |
| Fine-tuned | 0.0246 | 0.2140 | 0.2789 | 0.3260 |
| Relative Improvement | +24.09% | +2.44% | +1.40% | +0.41% |

BLIP2 Results

| Model | BLEU-4 | ROUGE-L | BERTScore F1 | CLIPScore |
|---|---|---|---|---|
| Base | 0.001 | 0.126 | 0.0545 | 0.2854 |
| Fine-tuned | 0.047 | 0.242 | -0.0537 | 0.2583 |
| Relative Improvement | +4600% | +92.06% | -198.53% | -9.49% |

These results demonstrate the dataset's value for improving scene understanding and the textual grounding of visual features, especially in fine-grained photographic tasks.
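
The evaluation pipeline itself is described in the accompanying paper; as a rough illustration only, the n-gram metrics above (BLEU-4 and ROUGE-L) can be computed with Hugging Face's evaluate library. The snippet below is a minimal sketch that assumes you already have model-generated descriptions paired with reference scene_description texts (the two example strings are invented placeholders).

import evaluate

# Hypothetical model outputs paired with ground-truth scene descriptions
predictions = ["a long-exposure shot of a waterfall at dusk"]
references = ["a long-exposure photograph of a waterfall taken at dusk"]

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# BLEU expects a list of reference lists; max_order=4 gives BLEU-4
bleu_score = bleu.compute(predictions=predictions, references=[[r] for r in references], max_order=4)
rouge_score = rouge.compute(predictions=predictions, references=references)

print(f"BLEU-4: {bleu_score['bleu']:.4f}")
print(f"ROUGE-L: {rouge_score['rougeL']:.4f}")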

Use Cases

The DSD is perfect for fine-tuning multimodal models for:

  • Image captioning - Rich human-written descriptions
  • Scene description - Technical photography analysis
  • Semantic segmentation - Pixel-level object understanding
  • Aesthetic evaluation - Style classification based on peer rankings
  • EXIF-aware analysis - Technical metadata integration
  • Multimodal training - Vision-language model development
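
For the captioning and scene-description use cases above, each record already pairs an image with several tiers of human-written text. Below is a minimal sketch that builds (image, caption) pairs by concatenating the title and description; which field you use as the target (image_description, scene_description, or a combination) is a modeling choice, not something the dataset prescribes.

from datasets import load_dataset

dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")

def build_caption(example):
    # Combine the human-written title and description into a single caption target
    example["caption"] = f"{example['image_title']}. {example['image_description']}"
    return example

caption_dataset = dataset.map(build_caption)
print(caption_dataset[0]["caption"])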

Commercial Dataset Access & On-Demand Licensing

While the DSD is being released under an open source license, it represents only a small fraction of the broader commercial capabilities of the GuruShots ecosystem.

DataSeeds.AI operates a live, ongoing photography catalog that has amassed over 100 million images, sourced from both amateur and professional photographers participating in thousands of themed challenges across diverse geographic and stylistic contexts. Unlike most public datasets, this corpus is:

  • Fully licensed for downstream use in AI training
  • Backed by structured consent frameworks and traceable rights, with active opt-in from creators
  • Rich in EXIF metadata, including camera model, lens type, and occasionally location data
  • Curated through a built-in human preference signal based on competitive ranking, yielding rare insight into subjective aesthetic quality

On-Demand Dataset Creation

Uniquely, DataSeeds.AI has the ability to source new image datasets to spec via a just-in-time, first-party data acquisition engine. Clients (e.g. AI labs, model developers, media companies) can request:

  • Specific content themes (e.g., "urban decay at dusk," "elderly people with dogs in snowy environments")
  • Defined technical attributes (camera type, exposure time, geographic constraints)
  • Ethical/region-specific filtering (e.g., GDPR-compliant imagery, no identifiable faces, kosher food imagery)
  • Matching segmentation masks, EXIF metadata, and tiered annotations

Within days, the DataSeeds.AI platform can launch curated challenges to its global network of contributors and deliver targeted datasets with commercial-grade licensing terms.

Sales Inquiries

To inquire about licensing or customized dataset sourcing, contact: [email protected]

License & Citation

License: Apache 2.0

For commercial licenses, annotation, or access to the full 100M+ image catalog with on-demand annotations: [email protected]

Citation

If you find the data useful, please cite:

@article{abdoli2025peerranked,
    title={Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from GuruShots' Annotated Imagery}, 
    author={Sajjad Abdoli and Freeman Lewin and Gediminas Vasiliauskas and Fabian Schonholz},
    journal={arXiv preprint arXiv:2506.05673},
    year={2025},
}