  - split: test
    path: v1_2025/test-*
---

<div align="center">
  <h1>AetherCode: Evaluating LLMs' Ability to Win in Premier Programming Competitions</h1>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://arxiv.org/" target="_blank" style="margin: 2px;">
    <img alt="Coming Soon" src="https://img.shields.io/badge/arXiv-Coming%20Soon-red?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/datasets/m-a-p" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-m--a--p-536af5" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/datasets/m-a-p/AetherCode/blob/main/LICENSE" style="margin: 2px;">
    <img alt="Dataset License" src="https://img.shields.io/badge/Dataset_License-CC--BY--4.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

## Introduction

Competitive programming has emerged as a critical benchmark for evaluating the reasoning and coding capabilities of Large Language Models (LLMs). Despite impressive progress on existing benchmarks, we argue that current evaluations overstate model proficiency, masking a substantial gap between LLMs and elite human programmers. This gap arises from two key limitations: insufficient difficulty and scope of benchmark problems, and evaluation bias from low-quality test cases. To address these shortcomings, we present AetherCode, a new benchmark that draws problems from premier programming competitions such as IOI and ICPC, offering broader coverage and higher difficulty. AetherCode further incorporates comprehensive, expert-validated test suites built through a hybrid of automated generation and human curation, ensuring rigorous and reliable assessment. By combining challenging problem design with robust evaluation, AetherCode provides a more faithful measure of LLM capabilities and sets a new standard for future research in code reasoning.

## Highlights

**Problem Curation from Top-Tier Competitions**: AetherCode is the first benchmark to systematically collect problems from premier programming competitions worldwide, including the Olympiad in Informatics (OI) and the International Collegiate Programming Contest (ICPC). Our process involved comprehensive collection, meticulous cleaning, and conversion of problems from PDF to a Markdown+LaTeX format. Each problem statement was manually proofread for correctness, and a team of competitive programming experts annotated each problem with classification tags.

**High-Quality Test Case Generation**: We developed a hybrid methodology, combining automated generation with expert annotation, to create high-quality test cases for every problem. We evaluated the correctness and comprehensiveness of our test cases by validating them against a large corpus of collected solutions, enforcing a standard of zero false positives and zero false negatives.
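
The zero-false-positive / zero-false-negative standard can be stated concretely. Below is a minimal sketch of that acceptance check, assuming a simplified judge interface where each solution is a function from input text to output text; the names and interface are our own illustration, not the actual harness:

```python
from typing import Callable, Iterable

def validate_test_suite(
    tests: Iterable[tuple[str, str]],  # (input, expected output) pairs
    correct_solutions: Iterable[Callable[[str], str]],
    incorrect_solutions: Iterable[Callable[[str], str]],
) -> bool:
    """A test suite is accepted only if every known-correct solution passes
    every test (zero false negatives) and every known-incorrect solution
    fails at least one test (zero false positives)."""
    tests = list(tests)
    for solve in correct_solutions:
        if any(solve(inp) != out for inp, out in tests):
            return False  # false negative: a correct solution was rejected
    for solve in incorrect_solutions:
        if all(solve(inp) == out for inp, out in tests):
            return False  # false positive: an incorrect solution passed
    return True
```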

## Quickstart

Load the dataset without test cases:
```python
from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("m-a-p/AetherCode", "v1_2024")
```
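
The loaded object is a `DatasetDict` keyed by split. Rather than assuming field names, you can introspect the schema directly (the `test` split name follows the config shown in the frontmatter above; verify with `print(ds)`):

```python
print(ds)                   # available splits and row counts
print(ds["test"].features)  # column names and types
print(ds["test"][0])        # first problem record
```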

## License

This project is licensed under CC-BY-4.0. See the [LICENSE file](https://huggingface.co/datasets/m-a-p/AetherCode/blob/main/LICENSE) for details.