---
task_categories:
- text-retrieval
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
- config_name: corpus
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
- config_name: queries
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: relevance.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
APPS is a benchmark for code generation containing 10,000 problems. It can be used to evaluate the ability of language models to generate code from natural language specifications. To create the dataset, the authors manually curated problems from open-access sites where programmers share problems with one another, including Codewars, AtCoder, Kattis, and Codeforces.
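The `dataset_info` metadata above defines three configurations: `default` (relevance judgments), `corpus`, and `queries`. As a quick orientation, the rows have the following shapes; the field names come from the feature definitions, while the values below are made up for illustration:

```python
# Illustrative row shapes for each configuration (values are not real dataset content).
relevance_row = {"query-id": "q0", "corpus-id": "d123", "score": 1.0}  # relevance.jsonl
corpus_row = {"id": "d123", "text": "..."}                             # corpus.jsonl
query_row = {"id": "q0", "text": "..."}                                # queries.jsonl
```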
## Usage
```python
import datasets

# Download the three configurations of the dataset
queries = datasets.load_dataset("embedding-benchmark/APPS", "queries")
documents = datasets.load_dataset("embedding-benchmark/APPS", "corpus")
pair_labels = datasets.load_dataset("embedding-benchmark/APPS", "default")
```
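For an end-to-end check, the sketch below embeds the queries and corpus with a dense encoder and scores retrieval against the relevance judgments. The `sentence-transformers` dependency, the `all-MiniLM-L6-v2` checkpoint, and the recall@10-style metric are assumptions chosen for illustration, not part of this dataset card:

```python
# Minimal retrieval-evaluation sketch (assumes sentence-transformers is installed;
# the model checkpoint and the metric are illustrative choices, not prescribed here).
import numpy as np
import datasets
from sentence_transformers import SentenceTransformer

queries = datasets.load_dataset("embedding-benchmark/APPS", "queries")["queries"]
corpus = datasets.load_dataset("embedding-benchmark/APPS", "corpus")["corpus"]
qrels = datasets.load_dataset("embedding-benchmark/APPS", "default")["test"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example encoder
q_emb = model.encode(queries["text"], normalize_embeddings=True)
d_emb = model.encode(corpus["text"], normalize_embeddings=True)

# Collect the relevant corpus ids for each query
# (assumes every judged pair in relevance.jsonl is a positive).
relevant = {}
for row in qrels:
    relevant.setdefault(row["query-id"], set()).add(row["corpus-id"])

# Cosine similarity via dot product of normalized embeddings,
# then take the 10 highest-scoring documents per query.
scores = q_emb @ d_emb.T
top10 = np.argsort(-scores, axis=1)[:, :10]

doc_ids = corpus["id"]
hits = 0
for qid, ranked in zip(queries["id"], top10):
    retrieved = {doc_ids[i] for i in ranked}
    hits += bool(relevant.get(qid, set()) & retrieved)
print(f"recall@10 (any relevant doc in top 10): {hits / len(queries):.3f}")
```

Note that this encodes the full corpus in one pass, which is fine for a dataset of this size but would need batching or an ANN index for larger collections.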