---
license: apache-2.0
configs:
  - config_name: corpus
    data_files:
      - split: train
        path: corpus/train-*
  - config_name: question_answers
    data_files:
      - split: train
        path: question_answers/train-*
      - split: test
        path: question_answers/test-*
dataset_info:
  - config_name: corpus
    features:
      - name: doc_id
        dtype: string
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: document
        dtype: string
      - name: md_document
        dtype: string
    splits:
      - name: train
        num_bytes: 10625185
        num_examples: 1144
    download_size: 3327056
    dataset_size: 10625185
  - config_name: question_answers
    features:
      - name: question_id
        dtype: string
      - name: question
        dtype: string
      - name: correct_answer
        dtype: string
      - name: correct_answer_document_ids
        dtype: string
      - name: ground_truths_contexts
        dtype: string
    splits:
      - name: train
        num_bytes: 60268
        num_examples: 45
      - name: test
        num_bytes: 33340
        num_examples: 30
    download_size: 58074
    dataset_size: 93608
---

watsonxDocsQA Dataset

Overview

watsonxDocsQA is an open-source dataset and benchmark contributed by IBM. It is derived from enterprise product documentation and is designed specifically for end-to-end Retrieval-Augmented Generation (RAG) evaluation. The dataset consists of two components:

  • Documents: A corpus of 1,144 text and markdown files generated by crawling the enterprise product documentation (main page; crawled March 2024).
  • Benchmark: A set of 75 question-answer (QA) pairs with gold document labels and answers. The QA pairs are crafted as follows:
    • 25 questions: Human-generated by two subject matter experts.
    • 50 questions: Synthetically generated using the tiiuae/falcon-180b model, then manually filtered and reviewed for quality. The methodology is detailed in Yehudai et al. 2024.
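Both configurations can be loaded with the Hugging Face datasets library. The sketch below is illustrative only: the repository id is an assumption and should be replaced with the dataset's actual path on the Hub.

```python
# Minimal loading sketch. The repository id is an assumption; use the path shown
# on the dataset's Hub page.
from datasets import load_dataset

REPO_ID = "ibm/watsonxDocsQA"  # hypothetical path

corpus = load_dataset(REPO_ID, "corpus", split="train")              # 1,144 documents
qa_train = load_dataset(REPO_ID, "question_answers", split="train")  # 45 QA pairs
qa_test = load_dataset(REPO_ID, "question_answers", split="test")    # 30 QA pairs

print(corpus[0]["title"])
print(qa_test[0]["question"])
```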

Data Description

Corpus Dataset

The corpus dataset contains the following fields:

| Field | Description |
| --- | --- |
| doc_id | Unique identifier for the document |
| title | Document title as it appears on the HTML page |
| document | Textual representation of the content |
| md_document | Markdown representation of the content |
| url | Origin URL of the document |
Question-Answers Dataset

The QA dataset includes these fields:

| Field | Description |
| --- | --- |
| question_id | Unique identifier for the question |
| question | Text of the question |
| correct_answer | Ground-truth answer |
| correct_answer_document_ids | List of ground-truth document IDs |
| ground_truths_contexts | List of grounding texts on which the answer is based |
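Because the benchmark targets end-to-end RAG evaluation, a typical first step is resolving each question's gold documents in the corpus. The sketch below is assumption-laden: the repository id is hypothetical, and since the schema stores correct_answer_document_ids as a string, the parsing handles both a single id and a serialized list as a guess.

```python
# Resolving the gold documents referenced by a QA pair, e.g. to score a retriever.
# The repository id and the exact serialization of `correct_answer_document_ids`
# are assumptions.
import ast
from datasets import load_dataset

REPO_ID = "ibm/watsonxDocsQA"  # hypothetical path
corpus = load_dataset(REPO_ID, "corpus", split="train")
qa_test = load_dataset(REPO_ID, "question_answers", split="test")

corpus_by_id = {row["doc_id"]: row for row in corpus}

def gold_doc_ids(raw: str) -> list[str]:
    """Best-effort parse: handles a single id or a list-like string (assumption)."""
    raw = raw.strip()
    if raw.startswith("["):
        return [str(x) for x in ast.literal_eval(raw)]
    return [raw]

example = qa_test[0]
print(example["question"])
for doc_id in gold_doc_ids(example["correct_answer_document_ids"]):
    gold = corpus_by_id.get(doc_id)
    if gold:
        print(f"  gold doc {doc_id}: {gold['title']}")
```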

Samples

Below is an example from the question_answers dataset:

  • question_id: watsonx_q_2
  • question: What foundation models have been built by IBM?
  • correct_answer:
    "Foundation models built by IBM include:
    • granite-13b-chat-v2
    • granite-13b-chat-v1
    • granite-13b-instruct-v1"
  • correct_answer_document_ids: B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C
  • ground_truths_contexts: Foundation models built by IBM \n\nIn IBM watsonx.ai, ...

Citation

If you use this dataset, please consider citing our preprint:

@misc{orbach2025analysishyperparameteroptimizationmethods,
      title={An Analysis of Hyper-Parameter Optimization Methods for Retrieval Augmented Generation}, 
      author={Matan Orbach and Ohad Eytan and Benjamin Sznajder and Ariel Gera and Odellia Boni and Yoav Kantor and Gal Bloch and Omri Levy and Hadas Abraham and Nitzan Barzilay and Eyal Shnarch and Michael E. Factor and Shila Ofek-Koifman and Paula Ta-Shma and Assaf Toledo},
      year={2025},
      eprint={2505.03452},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03452}, 
}

Contact

For questions or feedback, please: