Dataset preview: two columns — image (width 512 px to 5k px) and label (class label, 19 classes); the preview rows shown all belong to class 03_back.
🍌 MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation 🍌
CVPR 2026 (Main)
This repository provides the datasets for
“MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation” by Yuta Oshima, Daiki Miyake, Kohsei Matsutani, Yusuke Iwasawa, Masahiro Suzuki, Yutaka Matsuo and Hiroki Furuta
Paper Link
https://arxiv.org/abs/2511.22989
GitHub Repository
For usage of this benchmark, please see the GitHub repository:
https://github.com/matsuolab/multibanana/
Dataset Summary
MultiBanana-Bench comprises 32 tasks designed to evaluate how well image generation models can faithfully incorporate information from multiple reference images. We report evaluation scores using Qwen3-VL-8B-Instruct, a fixed, open-weight judge model. We hope this benchmark, along with its evaluation framework using an open-source VLM as a judge, will serve as a foundation for future research in multi-reference text-to-image generation.
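A minimal loading sketch for this dataset with the Hugging Face `datasets` library. The Hub repo ID below is an assumption (the card does not state it), and the column names and class count are taken from the dataset viewer above; adjust them if they differ from the actual repository.

```python
# Sketch only: REPO_ID is a hypothetical Hub path, not confirmed by this card.
REPO_ID = "matsuolab/multibanana"      # assumption -- replace with the actual Hub repo ID
EXPECTED_COLUMNS = ["image", "label"]  # from the viewer: image (512 px - 5k px wide) + class label
NUM_CLASSES = 19                       # from the viewer: 19-class label


def load_multibanana(split: str = "train"):
    """Stream the benchmark so iteration can start without a full download.

    Requires `pip install datasets`; the import is kept inside the function
    so the constants above can be used without the dependency installed.
    """
    from datasets import load_dataset
    return load_dataset(REPO_ID, split=split, streaming=True)
```

Dropping `streaming=True` caches the full dataset locally instead, which may be preferable for repeated evaluation runs.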
Acknowledgement
This dataset partially incorporates a subset of images from the LAION-5B dataset. We acknowledge and thank the LAION team for making such a valuable large-scale dataset openly available to the research community.
Opt-out and Removal
This dataset contains images from the LAION-5B dataset, which are available on the public internet. If your image represents you or your copyrighted work and you wish to have it removed from this dataset, please contact us at {yuta.oshima, daiki.miyake}[at]g.ecc.u-tokyo.ac.jp with the relevant image. We will remove the entry from the dataset immediately.
Safety and Ethics
Because this dataset is partly derived from LAION's web-crawled data, it may still contain inappropriate or harmful content despite filtering. We applied automatic filtering with Gemini 2.5 Flash together with human annotation; please exercise caution when viewing the images.
Citation
@misc{oshima2025multibanana,
title={MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation},
author={Yuta Oshima and Daiki Miyake and Kohsei Matsutani and Yusuke Iwasawa and Masahiro Suzuki and Yutaka Matsuo and Hiroki Furuta},
year={2025},
eprint={2511.22989},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2511.22989},
}