dataset_info:
  - config_name: edit
    features:
      - name: input
        dtype: string
      - name: target
        dtype: string
      - name: problem_id
        dtype: string
    splits:
      - name: train
        num_bytes: 56166875
        num_examples: 48386
      - name: val
        num_bytes: 3336062
        num_examples: 3338
      - name: test
        num_bytes: 857857
        num_examples: 794
    download_size: 365069
    dataset_size: 60360794
  - config_name: generate
    features:
      - name: problem_id
        dtype: string
      - name: problem_description
        dtype: string
    splits:
      - name: train
        num_bytes: 1793963
        num_examples: 1262
      - name: val
        num_bytes: 96855
        num_examples: 69
      - name: test
        num_bytes: 60776
        num_examples: 49
    download_size: 37588
    dataset_size: 1951594
  - config_name: generate_eval
    features:
      - name: problem_id
        dtype: string
      - name: runtimes
        sequence: float64
      - name: memories
        sequence: float64
      - name: num_sol
        dtype: int64
    splits:
      - name: test
        num_bytes: 770704
        num_examples: 48
    download_size: 147211
    dataset_size: 770704
configs:
  - config_name: edit
    data_files:
      - split: train
        path: edit/train-*
      - split: val
        path: edit/val-*
      - split: test
        path: edit/test-*
  - config_name: generate
    data_files:
      - split: train
        path: generate/train-*
      - split: val
        path: generate/val-*
      - split: test
        path: generate/test-*
  - config_name: generate_eval
    data_files:
      - split: test
        path: generate_eval/test-*

ECCO

Dataset from the paper "ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?"


The dataset consists of two subsets, edit and generate, each with three splits (train, val, and test).

Code repository: https://github.com/CodeEff/ECCO

Loading the dataset / benchmark

from datasets import load_dataset

dataset = load_dataset('CodeEff/ECCO', 'edit')      # For the history-based editing setting
dataset = load_dataset('CodeEff/ECCO', 'generate')  # For the NL-instructed generation setting

These subsets are used to generate code with each model across the two paradigms. We use the test split for evaluation and results, and the train and val splits for fine-tuning and few-shot prompting.
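For the edit subset, each record exposes the input, target, and problem_id fields listed in the dataset_info above. A minimal sketch of turning one record into an editing prompt for few-shot or fine-tuning use; the prompt wording is illustrative, not the paper's template:

```python
def build_edit_prompt(record: dict) -> str:
    """Format one `edit` record into a simple optimization prompt.

    `record` has the fields from dataset_info: `input` (slower program),
    `target` (faster program), `problem_id`. The instruction text below
    is a placeholder, not the template used in the ECCO paper.
    """
    return (
        "Optimize the following program for efficiency while preserving "
        "its behavior:\n\n"
        f"{record['input']}\n\n"
        "Optimized version:\n"
    )

# Example with a dummy record shaped like an `edit` row:
example = {
    "problem_id": "p00000",
    "input": "print(sum(range(10**6)))",
    "target": "print((10**6 - 1) * 10**6 // 2)",
}
prompt = build_edit_prompt(example)
```

For fine-tuning, the target field would serve as the completion paired with this prompt.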

Download the test cases

mkdir data && cd data
wget https://huggingface.co/datasets/CodeEff/ECCO/resolve/main/test_cases.zip
unzip test_cases.zip
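Once unzipped, the test cases can be used to check functional correctness by running a candidate program on each test input and comparing its stdout against the expected output. The internal layout of test_cases.zip is not documented here, so how you locate input/output pairs is up to you; the subprocess helper below is a self-contained sketch of the run-and-capture step:

```python
import subprocess
import sys

def run_candidate(code: str, stdin_text: str, timeout: float = 10.0) -> str:
    """Run a candidate Python solution with the given stdin; return its stdout.

    The returned text can then be compared against a test case's expected
    output file. Raises subprocess.TimeoutExpired on overly slow programs.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout

# Example: a trivial solution that doubles its input.
out = run_candidate("print(int(input()) * 2)", "21\n")  # -> "42\n"
```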

Evaluation dataset

The dataset also includes a third subset, generate_eval, which contains the runtime and memory of a spectrum of user solutions for each problem in the test split.
This is used for the percentile evaluation of the NL-instructed generation paradigm.
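A percentile score here asks: against the distribution of reference runtimes for a problem, what fraction of user solutions does the generated program match or beat? A minimal sketch, assuming lower runtime is better and using the runtimes sequence from a generate_eval record; the exact percentile convention in the paper may differ:

```python
def runtime_percentile(candidate: float, runtimes: list[float]) -> float:
    """Percentage of reference solutions that the candidate runs at least
    as fast as. Lower runtime is better; the same logic applies to the
    `memories` field for memory percentiles."""
    if not runtimes:
        raise ValueError("no reference runtimes")
    beaten = sum(1 for r in runtimes if candidate <= r)
    return 100.0 * beaten / len(runtimes)

# Example: a 1.5 s candidate against five reference solutions.
pct = runtime_percentile(1.5, [0.9, 1.2, 1.5, 2.0, 3.1])  # -> 60.0
```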

Data Sources

The dataset is sourced from IBM CodeNet, which consists primarily of competitive programming solutions. It is further filtered for efficiency and correctness as described in our paper.

Citation

@article{waghjale2024ecco,
  title={ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?},
  author={Waghjale, Siddhant and Veerendranath, Vishruth and Wang, Zora Zhiruo and Fried, Daniel},
  journal={arXiv preprint arXiv:2407.14044},
  year={2024}
}