---
dataset_info:
- config_name: edit
  features:
  - name: input
    dtype: string
  - name: target
    dtype: string
  - name: problem_id
    dtype: string
  splits:
  - name: train
    num_bytes: 56166875
    num_examples: 48386
  - name: val
    num_bytes: 3336062
    num_examples: 3338
  - name: test
    num_bytes: 857857
    num_examples: 794
  download_size: 365069
  dataset_size: 60360794
- config_name: generate
  features:
  - name: problem_id
    dtype: string
  - name: problem_description
    dtype: string
  splits:
  - name: train
    num_bytes: 1793963
    num_examples: 1262
  - name: val
    num_bytes: 96855
    num_examples: 69
  - name: test
    num_bytes: 60776
    num_examples: 49
  download_size: 37588
  dataset_size: 1951594
- config_name: generate_eval
  features:
  - name: problem_id
    dtype: string
  - name: runtimes
    sequence: float64
  - name: memories
    sequence: float64
  - name: num_sol
    dtype: int64
  splits:
  - name: test
    num_bytes: 770704
    num_examples: 48
  download_size: 147211
  dataset_size: 770704
configs:
- config_name: edit
  data_files:
  - split: train
    path: edit/train-*
  - split: val
    path: edit/val-*
  - split: test
    path: edit/test-*
- config_name: generate
  data_files:
  - split: train
    path: generate/train-*
  - split: val
    path: generate/val-*
  - split: test
    path: generate/test-*
- config_name: generate_eval
  data_files:
  - split: test
    path: generate_eval/test-*
---
# ECCO

Dataset from the paper "ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?"

![teaser](https://github.com/user-attachments/assets/44659b06-3676-4deb-affb-2ec5f02787f6)

The dataset consists of two subsets, `edit` and `generate`, each with three splits (`train`, `val`, and `test`).

Code repository: [https://github.com/CodeEff/ECCO](https://github.com/CodeEff/ECCO)

### Loading the dataset / benchmark 
```python
from datasets import load_dataset

dataset = load_dataset('CodeEff/ECCO', 'edit')      # For the history-based editing setting
dataset = load_dataset('CodeEff/ECCO', 'generate')  # For the NL-instructed generation setting
```
These subsets are used to generate code with each model across the two paradigms. We use the `test` split for evaluation and results, and the `train` and `val` splits for fine-tuning and few-shot prompting.
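
As a quick sanity check, the sketch below loads the `edit` subset and assembles a simple few-shot prompt from the `train` split. Field names (`input`, `target`, `problem_id`) follow the `dataset_info` above; the prompt wording and the reading of `input`/`target` as slower/faster programs are illustrative.

```python
from datasets import load_dataset

# Each 'edit' example pairs a program ('input') with a more efficient version ('target').
edit = load_dataset('CodeEff/ECCO', 'edit')
query = edit['test'][0]
print(query['problem_id'])

# Illustrative few-shot prompt built from two train examples (prompt format is an assumption).
prompt = ""
for shot in edit['train'].select(range(2)):
    prompt += f"### Slow code:\n{shot['input']}\n### Optimized code:\n{shot['target']}\n\n"
prompt += f"### Slow code:\n{query['input']}\n### Optimized code:\n"
```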

### Download the test cases 
```sh
mkdir data && cd data
wget https://huggingface.co/datasets/CodeEff/ECCO/resolve/main/test_cases.zip
unzip test_cases.zip
```
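
Equivalently, the archive can be fetched and extracted from Python; this is just a sketch using `huggingface_hub` that mirrors the shell commands above.

```python
import zipfile
from huggingface_hub import hf_hub_download

# Download test_cases.zip from the dataset repo and extract it into data/.
zip_path = hf_hub_download(repo_id='CodeEff/ECCO', filename='test_cases.zip', repo_type='dataset')
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall('data')
```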

### Evaluation dataset
The dataset also includes a third subset, `generate_eval`, which contains the runtime and memory measurements of a spectrum of user solutions for each problem in the `test` split.
This is used for the percentile evaluation of the **NL-instructed generation** paradigm.
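
A minimal sketch of how these reference measurements could be consulted for percentile-style scoring; the helper below is illustrative and not the paper's exact metric.

```python
import numpy as np
from datasets import load_dataset

# Reference runtimes/memories of user solutions, keyed by problem_id.
ref = load_dataset('CodeEff/ECCO', 'generate_eval')['test']
by_problem = {row['problem_id']: row for row in ref}

def runtime_percentile(problem_id, measured_runtime):
    """Fraction of reference user solutions the measured runtime is at least as fast as (illustrative)."""
    runtimes = np.array(by_problem[problem_id]['runtimes'])
    return float((measured_runtime <= runtimes).mean())
```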

### Data Sources 
The dataset is sourced from [IBM CodeNet](https://github.com/IBM/Project_CodeNet), which consists primarily of competitive programming solutions.
It is further filtered for efficiency and correctness, as described in our paper.


### Citation 
```bib
@article{waghjale2024ecco,
  title={ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?},
  author={Waghjale, Siddhant and Veerendranath, Vishruth and Wang, Zora Zhiruo and Fried, Daniel},
  journal={arXiv preprint arXiv:2407.14044},
  year={2024}
}
```