Upload folder using huggingface_hub
- README.md +2 -2
- README.zh_CN.md +3 -2
README.md
CHANGED
@@ -11,10 +11,10 @@ English | [中文 README](README.zh_CN.md)
 **Web-Bench** is a benchmark designed to evaluate the performance of LLMs in actual Web development. Web-Bench contains 50 projects, each consisting of 20 tasks with sequential dependencies. The tasks implement project features in sequence, simulating real-world human development workflows. When designing Web-Bench, we aimed to cover the foundational elements of Web development: Web Standards and Web Frameworks. Given the scale and complexity of these projects, which were designed by engineers with 5-10 years of experience, each presents a significant challenge. On average, a single project takes a senior engineer 4-8 hours to complete. With our given benchmark agent (Web-Agent), the SOTA model (Claude 3.7 Sonnet) achieves only 25.1% Pass@1.
 
 The distribution of the experimental data aligns well with the current code generation capabilities of mainstream LLMs.
 
-
+<img width="500" alt="pass@1" src="./docs/assets/pass-1.png" />
 
 HumanEval and MBPP have approached saturation. APPS and EvalPlus are approaching saturation. The SOTA for Web-Bench is 25.1%, which is lower (better) than that of the SWE-bench Full and Verified sets.
 
-
+<img width="500" alt="SOTAs" src="./docs/assets/sotas.png" />
 
 ## 🏅 Leaderboard
 
README.zh_CN.md
CHANGED
@@ -11,10 +11,11 @@ license: cc-by-4.0
 **Web-Bench** 是一个用于评估 LLM 在真实 Web 项目上表现的基准。Web-Bench 包含 50 个项目,每个项目包含 20 个有时序依赖关系的任务,逼真模拟了人类开发项目的过程。Web-Bench 在设计时考虑了如何覆盖 Web 应用开发所依赖的基础:Web Standards 和 Web Frameworks。由于它们的庞大规模和复杂度,以及设计项目的工程师具备 5-10 年开发经验,最终设计出来的项目对于人类资深工程师而言都具有一定的复杂度(单项目平均 4-8 小时完成)。并且在我们给定的基准 Agent 上,SOTA(Claude 3.7 Sonnet)Pass@1 仅有 25.1%。
 
 实验数据的分布和当前主流 LLM 代码生成能力也较匹配。
 
-
+<img width="500" alt="pass@1" src="./docs/assets/pass-1.png" />
 
 HumanEval 和 MBPP 已趋于饱和,APPS 和 EvalPlus 也正在接近饱和状态。Web-Bench 的 SOTA 为 25.1%,低于(低更好)SWE-bench Full 和 Verified。
 
-
+<img width="500" alt="SOTAs" src="./docs/assets/sotas.png" />
+
 
 ## 🏅 Leaderboard
 
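The Pass@1 figure quoted in both READMEs is conventionally computed with the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021). A minimal sketch of that estimator — the function name is an assumption, and whether Web-Bench averages per task or per project is not specified here:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated for a task
    c: number of those samples that pass all tests
    k: evaluation budget (k=1 for Pass@1)
    """
    if n - c < k:
        # Too few failures for any size-k draw to miss a passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n samples and c passes, pass@1 reduces to the plain pass rate c/n:
print(pass_at_k(4, 1, 1))  # 0.25
```

A benchmark-level score like the 25.1% above would then be the mean of this per-task value over all tasks.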