---
task_categories:
  - text-retrieval
task_ids:
  - document-retrieval
config_names:
  - corpus
tags:
  - text-retrieval
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: float64
  - config_name: corpus
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
  - config_name: queries
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
configs:
  - config_name: default
    data_files:
      - split: test
        path: relevance.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---

# HumanEval

The HumanEval dataset released by OpenAI contains 164 programming problems, each with a hand-written function signature, docstring, body, and several unit tests. The problems were handcrafted by engineers and researchers at OpenAI.

## Usage

```python
import datasets

# Download the three configurations: queries, corpus documents, and relevance labels
queries = datasets.load_dataset("embedding-benchmark/HumanEval", "queries")
documents = datasets.load_dataset("embedding-benchmark/HumanEval", "corpus")
pair_labels = datasets.load_dataset("embedding-benchmark/HumanEval", "default")
```
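
Each call returns a `DatasetDict` keyed by the split names declared in the YAML front matter above (`queries`, `corpus`, and `test`). Below is a minimal sketch of inspecting the records and assembling a query-to-relevant-documents mapping; the field names come from the front matter, and the `qrels` helper is illustrative rather than part of the dataset.

```python
# Query and corpus records each expose "id" and "text" fields.
print(queries["queries"][0])
print(documents["corpus"][0])

# The default config holds relevance judgments: (query-id, corpus-id, score).
print(pair_labels["test"][0])

# Build a simple mapping from each query id to the ids of its relevant documents.
qrels = {}
for row in pair_labels["test"]:
    qrels.setdefault(row["query-id"], set()).add(row["corpus-id"])
```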