Dataset: OpenAI HumanEval

Modalities: Text
Formats: Parquet
Languages: English
Size: < 1K
ArXiv: 2107.03374
Libraries: Datasets, pandas
License: MIT

Commit 84e5352 · 1 parent: bd7ebee
parquet-converter committed: Update parquet files
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,202 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - en
- license:
- - mit
- multilinguality:
- - monolingual
- pretty_name: OpenAI HumanEval
- size_categories:
- - n<1K
- source_datasets:
- - original
- task_categories:
- - text2text-generation
- task_ids: []
- tags:
- - code-generation
- paperswithcode_id: humaneval
- dataset_info:
-   features:
-   - name: task_id
-     dtype: string
-   - name: prompt
-     dtype: string
-   - name: canonical_solution
-     dtype: string
-   - name: test
-     dtype: string
-   - name: entry_point
-     dtype: string
-   config_name: openai_humaneval
-   splits:
-   - name: test
-     num_bytes: 194414
-     num_examples: 164
-   download_size: 44877
-   dataset_size: 194414
- ---
-
- # Dataset Card for OpenAI HumanEval
-
- ## Table of Contents
- - [OpenAI HumanEval](#openai-humaneval)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Creation](#dataset-creation)
-     - [Curation Rationale](#curation-rationale)
-     - [Source Data](#source-data)
-       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-       - [Who are the source language producers?](#who-are-the-source-language-producers)
-     - [Annotations](#annotations)
-       - [Annotation process](#annotation-process)
-       - [Who are the annotators?](#who-are-the-annotators)
-     - [Personal and Sensitive Information](#personal-and-sensitive-information)
-   - [Considerations for Using the Data](#considerations-for-using-the-data)
-     - [Social Impact of Dataset](#social-impact-of-dataset)
-     - [Discussion of Biases](#discussion-of-biases)
-     - [Other Known Limitations](#other-known-limitations)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-     - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Repository:** [GitHub Repository](https://github.com/openai/human-eval)
- - **Paper:** [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
-
- ### Dataset Summary
-
- The HumanEval dataset released by OpenAI includes 164 programming problems with a function signature, docstring, body, and several unit tests. The problems were handwritten to ensure they would not be included in the training sets of code generation models.
-
- ### Supported Tasks and Leaderboards
-
- ### Languages
- The programming problems are written in Python and contain English natural text in comments and docstrings.
-
- ## Dataset Structure
-
- ```python
- from datasets import load_dataset
- load_dataset("openai_humaneval")
-
- DatasetDict({
-     test: Dataset({
-         features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
-         num_rows: 164
-     })
- })
- ```
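
A minimal access sketch (assuming the standard `datasets` API; not part of the original card): the test split indexes like a list of dictionaries of strings.

```python
from datasets import load_dataset

# Load the only split the dataset has.
ds = load_dataset("openai_humaneval", split="test")

first = ds[0]
print(first["task_id"])  # task identifier (the card's illustrative instance below uses "test/0")
print(first["prompt"])   # function signature + docstring handed to the model
```
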
-
- ### Data Instances
-
- An example of a dataset instance:
-
- ```
- {
-     "task_id": "test/0",
-     "prompt": "def return1():\n",
-     "canonical_solution": "    return 1",
-     "test": "def check(candidate):\n    assert candidate() == 1",
-     "entry_point": "return1"
- }
- ```
-
- ### Data Fields
-
- - `task_id`: identifier for the data sample
- - `prompt`: input for the model, containing the function header and docstring
- - `canonical_solution`: reference solution for the problem posed in the `prompt`
- - `test`: function that checks generated code for correctness
- - `entry_point`: name of the function the tests call
-
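
These fields fit together in a fixed way during evaluation: a candidate program is `prompt` plus a model completion, and the `check` function defined in `test` is called on the function named by `entry_point`. A minimal sketch, using the card's illustrative instance above and its `canonical_solution` as the completion:

```python
# One HumanEval check, sketched on the instance shown above.
# Only run untrusted completions in a sandbox (see the usage note further down).
sample = {
    "task_id": "test/0",
    "prompt": "def return1():\n",
    "canonical_solution": "    return 1",
    "test": "def check(candidate):\n    assert candidate() == 1",
    "entry_point": "return1",
}

program = sample["prompt"] + sample["canonical_solution"] + "\n" + sample["test"]
namespace = {}
exec(program, namespace)                              # defines return1 and check
namespace["check"](namespace[sample["entry_point"]])  # raises AssertionError on failure
print("passed")
```
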
- ### Data Splits
-
- The dataset consists only of a test split with 164 samples.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- Since code generation models are often trained on dumps of GitHub, a dataset not included in those dumps was necessary to evaluate such models properly. However, since this dataset has been published on GitHub, it is likely to be included in future dumps.
-
- ### Source Data
-
- The dataset was handcrafted by engineers and researchers at OpenAI.
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- [More Information Needed]
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- None.
-
- ## Considerations for Using the Data
- Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
-
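
One way to follow that advice, sketched under the assumption that each candidate program is a plain Python script: run it in a subprocess with a timeout rather than `exec`ing it in the host interpreter. This is illustrative only, not a complete sandbox; it restricts neither filesystem nor network access, and real harnesses typically add further isolation and resource limits.

```python
import subprocess
import sys

def run_candidate(program: str, timeout_s: float = 5.0) -> bool:
    """Run an untrusted program in a child interpreter; True if it exits cleanly in time."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],  # fresh interpreter, isolated from this process
            capture_output=True,              # don't let its output pollute ours
            timeout=timeout_s,                # kill runaway or looping candidates
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
```
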
168
- ### Social Impact of Dataset
169
- With this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models.
170
-
171
- ### Discussion of Biases
172
-
173
- [More Information Needed]
174
-
175
- ### Other Known Limitations
176
-
177
- [More Information Needed]
178
-
179
- ## Additional Information
180
-
181
- ### Dataset Curators
182
- OpenAI
183
-
184
- ### Licensing Information
185
-
186
- MIT License
187
-
188
- ### Citation Information
189
- ```
190
- @misc{chen2021evaluating,
191
- title={Evaluating Large Language Models Trained on Code},
192
- author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
193
- year={2021},
194
- eprint={2107.03374},
195
- archivePrefix={arXiv},
196
- primaryClass={cs.LG}
197
- }
198
- ```
199
-
200
- ### Contributions
201
-
202
- Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
dataset_infos.json DELETED
@@ -1,65 +0,0 @@
- {
-     "openai_humaneval": {
-         "description": "The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unit tests to verify the viability of a proposed solution.\n",
-         "citation": "@misc{chen2021evaluating,\n title={Evaluating Large Language Models Trained on Code},\n author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},\n year={2021},\n eprint={2107.03374},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}",
-         "homepage": "https://github.com/openai/human-eval",
-         "license": "MIT",
-         "features": {
-             "task_id": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "prompt": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "canonical_solution": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "test": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "entry_point": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             }
-         },
-         "post_processed": null,
-         "supervised_keys": null,
-         "task_templates": null,
-         "builder_name": "openai_humaneval",
-         "config_name": "openai_humaneval",
-         "version": {
-             "version_str": "1.0.0",
-             "description": null,
-             "major": 1,
-             "minor": 0,
-             "patch": 0
-         },
-         "splits": {
-             "test": {
-                 "name": "test",
-                 "num_bytes": 194414,
-                 "num_examples": 164,
-                 "dataset_name": "openai_humaneval"
-             }
-         },
-         "download_checksums": {
-             "https://raw.githubusercontent.com/openai/human-eval/master/data/HumanEval.jsonl.gz": {
-                 "num_bytes": 44877,
-                 "checksum": "b796127e635a67f93fb35c04f4cb03cf06f38c8072ee7cee8833d7bee06979ef"
-             }
-         },
-         "download_size": 44877,
-         "post_processing_size": null,
-         "dataset_size": 194414,
-         "size_in_bytes": 239291
-     }
- }

openai_humaneval.py DELETED
@@ -1,78 +0,0 @@
- import json
-
- import datasets
-
-
- _DESCRIPTION = """\
- The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unit tests to verify the viability of a proposed solution.
- """
- _URL = "https://raw.githubusercontent.com/openai/human-eval/master/data/HumanEval.jsonl.gz"
-
- _CITATION = """\
- @misc{chen2021evaluating,
-   title={Evaluating Large Language Models Trained on Code},
-   author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
-   year={2021},
-   eprint={2107.03374},
-   archivePrefix={arXiv},
-   primaryClass={cs.LG}
- }"""
-
- _HOMEPAGE = "https://github.com/openai/human-eval"
-
- _LICENSE = "MIT"
-
-
- class OpenaiHumaneval(datasets.GeneratorBasedBuilder):
-     """HumanEval: A benchmark for code generation."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="openai_humaneval",
-             version=datasets.Version("1.0.0"),
-             description=_DESCRIPTION,
-         )
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "task_id": datasets.Value("string"),
-                 "prompt": datasets.Value("string"),
-                 "canonical_solution": datasets.Value("string"),
-                 "test": datasets.Value("string"),
-                 "entry_point": datasets.Value("string"),
-             }
-         )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         data_dir = dl_manager.download_and_extract(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepath": data_dir,
-                 },
-             )
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         with open(filepath, encoding="utf-8") as file:
-             data = [json.loads(line) for line in file]
-         id_ = 0
-         for sample in data:
-             yield id_, sample
-             id_ += 1
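
For illustration, the download-and-parse work this deleted builder performed can be reproduced standalone, sketched with only the standard library and the `_URL` above:

```python
import gzip
import json
import urllib.request

URL = "https://raw.githubusercontent.com/openai/human-eval/master/data/HumanEval.jsonl.gz"

# Fetch the gzipped JSONL release and decode one JSON object per line,
# mirroring what _generate_examples yields.
with urllib.request.urlopen(URL) as resp:
    raw = gzip.decompress(resp.read())
rows = [json.loads(line) for line in raw.decode("utf-8").splitlines() if line]
print(len(rows))  # 164 problems
```
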
openai_humaneval/openai_humaneval-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08b23cd7b0fa5db8b6139b6035b77d32a57406a02db7f7edc22ec79acb649a04
+ size 83919
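
After this commit the data ships as a Parquet shard, so the dataset loads without the deleted script. A sketch of both access paths (the pandas path assumes a local copy of the converted file, not the LFS pointer shown above):

```python
from datasets import load_dataset
import pandas as pd

# Via the datasets library, which resolves the Parquet shard automatically.
ds = load_dataset("openai_humaneval", split="test")

# Or directly with pandas, given the converted file on disk.
df = pd.read_parquet("openai_humaneval/openai_humaneval-test.parquet")
print(len(df))  # 164
```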