Modalities: Text · Formats: json · ArXiv: 2505.07473 · Libraries: Datasets, Dask · License: cc-by-4.0
guxiaowu committed (verified) · Commit 90903c8 · 1 parent: ca6759e

Upload folder using huggingface_hub
Files changed (3):
  1. .DS_Store +0 -0
  2. .gitattributes +29 -1
  3. README.md +35 -39
.DS_Store CHANGED
Binary files a/.DS_Store and b/.DS_Store differ
 
.gitattributes CHANGED
@@ -8,6 +8,8 @@
 *.h5 filter=lfs diff=lfs merge=lfs -text
 *.joblib filter=lfs diff=lfs merge=lfs -text
 *.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.lz4 filter=lfs diff=lfs merge=lfs -text
+*.mds filter=lfs diff=lfs merge=lfs -text
 *.mlmodel filter=lfs diff=lfs merge=lfs -text
 *.model filter=lfs diff=lfs merge=lfs -text
 *.msgpack filter=lfs diff=lfs merge=lfs -text
@@ -25,6 +27,7 @@
 *.safetensors filter=lfs diff=lfs merge=lfs -text
 saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
 *.tflite filter=lfs diff=lfs merge=lfs -text
 *.tgz filter=lfs diff=lfs merge=lfs -text
 *.wasm filter=lfs diff=lfs merge=lfs -text
@@ -32,4 +35,29 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-scale-hf-logo.png filter=lfs diff=lfs merge=lfs -text
+# Audio files - uncompressed
+*.pcm filter=lfs diff=lfs merge=lfs -text
+*.sam filter=lfs diff=lfs merge=lfs -text
+*.raw filter=lfs diff=lfs merge=lfs -text
+# Audio files - compressed
+*.aac filter=lfs diff=lfs merge=lfs -text
+*.flac filter=lfs diff=lfs merge=lfs -text
+*.mp3 filter=lfs diff=lfs merge=lfs -text
+*.ogg filter=lfs diff=lfs merge=lfs -text
+*.wav filter=lfs diff=lfs merge=lfs -text
+# Image files - uncompressed
+*.bmp filter=lfs diff=lfs merge=lfs -text
+*.gif filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.tiff filter=lfs diff=lfs merge=lfs -text
+# Image files - compressed
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.jpeg filter=lfs diff=lfs merge=lfs -text
+*.webp filter=lfs diff=lfs merge=lfs -text
+# Video files - compressed
+*.mp4 filter=lfs diff=lfs merge=lfs -text
+*.webm filter=lfs diff=lfs merge=lfs -text
+DOUBAO_ENDPOINT="ep-20250117120525-pp8fp"
+DOUBAO_1_5_ENDPOINT="ep-20250122173512-4tqwl"
+DOUBAO_API_KEY="43e9209b-5c60-478e-8f2f-1b6077f5dc57"
+DOUBAO_1_5_256K_ENDPOINT="ep-20250123113810-mxjq2"
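The added rules route matching files through Git LFS (`filter=lfs diff=lfs merge=lfs -text`). As a rough sanity check, the hypothetical Python sketch below (not part of this repo) tests which filenames the new patterns would capture; note that `fnmatch` only approximates gitattributes matching semantics.

```python
# Hypothetical sanity check: which files would the newly added patterns
# hand to Git LFS? fnmatch approximates gitattributes glob semantics,
# so treat this as a rough check only.
from fnmatch import fnmatch
from pathlib import Path

new_lfs_patterns = ["*.lz4", "*.mds", "*.tar", "*.mp3", "*.wav",
                    "*.png", "*.jpg", "*.mp4", "*.webm"]  # subset of the diff above

def matches_new_lfs_rule(path: str) -> bool:
    # gitattributes patterns without a slash match against the basename
    name = Path(path).name
    return any(fnmatch(name, pat) for pat in new_lfs_patterns)

print(matches_new_lfs_rule("data/shard-00001.mds"))  # True
print(matches_new_lfs_rule("README.md"))             # False
```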
README.md CHANGED
@@ -1,46 +1,42 @@
 ---
-title: Web Bench Leaderboard
-emoji: 🥇
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: true
-license: apache-2.0
-short_description: Duplicate this leaderboard to initialize your own!
-sdk_version: 5.19.0
+license: cc-by-4.0
 ---
 
-# Start the configuration
-
-Most of the variables to change for a default leaderboard are in `src/env.py` (replace the path for your leaderboard) and `src/about.py` (for tasks).
-
-Results files should have the following format and be stored as json files:
-```json
-{
-    "config": {
-        "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
-        "model_name": "path of the model on the hub: org/model",
-        "model_sha": "revision on the hub",
-    },
-    "results": {
-        "task_name": {
-            "metric_name": score,
-        },
-        "task_name2": {
-            "metric_name": score,
-        }
-    }
-}
-```
-
-Request files are created automatically by this tool.
-
-If you encounter problem on the space, don't hesitate to restart it to remove the create eval-queue, eval-queue-bk, eval-results and eval-results-bk created folder.
-
-# Code logic for more complex edits
-
-You'll find
-- the main table' columns names and properties in `src/display/utils.py`
-- the logic to read all results and request files, then convert them in dataframe lines, in `src/leaderboard/read_evals.py`, and `src/populate.py`
-- the logic to allow or filter submissions in `src/submission/submit.py` and `src/submission/check_validity.py`
+# Web-Bench
+
+English | [中文 README](README.zh_CN.md)
+
+## 📖 Overview
+
+**Web-Bench** is a benchmark designed to evaluate the performance of LLMs in actual Web development. Web-Bench contains 50 projects, each consisting of 20 tasks with sequential dependencies. The tasks implement project features in sequence, simulating real-world human development workflows. When designing Web-Bench, we aimed to cover the foundational elements of Web development: Web Standards and Web Frameworks. Given the scale and complexity of these projects, which were designed by engineers with 5-10 years of experience, each one presents a significant challenge. On average, a single project takes a senior engineer 4-8 hours to complete. With our provided benchmark agent (Web-Agent), the SOTA model (Claude 3.7 Sonnet) achieves only 25.1% Pass@1.
+
+The distribution of the experimental data aligns well with the current code-generation capabilities of mainstream LLMs.
+<img width="500" alt="pass@1" src="./docs/assets/pass-1.png" />
+
+HumanEval and MBPP have approached saturation, and APPS and EvalPlus are approaching it. The SOTA Pass@1 on Web-Bench is 25.1%, lower than on the SWE-bench Full and Verified sets, which makes it the more challenging benchmark.
+<img width="500" alt="SOTAs" src="./docs/assets/sotas.png" />
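For reference, Pass@1 with a single sample per task is simply the fraction of tasks whose generated code passes. The general unbiased pass@k estimator of Chen et al. (2021) is sketched below; whether Web-Bench draws more than one sample per task is an assumption here, not something this card states.

```python
# Minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021).
# With n = k = 1 this reduces to the plain success rate per task.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples generated per task, c = samples that pass, k = evaluation budget."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(1, 1, 1))   # 1.0: the single sample passed
print(pass_at_k(10, 3, 1))  # 0.3: equals c/n when k = 1
```

The benchmark-level Pass@1 is then the mean of this quantity over all tasks.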
+
+## Web-Bench: A LLM Code Benchmark Based on Web Standards and Frameworks
+The dataset was presented in the paper [Web-Bench: A LLM Code Benchmark Based on Web Standards and Frameworks](https://huggingface.co/papers/2505.07473).
+
+## 🏅 Leaderboard
+
+[Leaderboard](https://huggingface.co/spaces/bytedance-research/Web-Bench-Leaderboard)
+
+## Dataset Structure
+
+An example of a Web-Bench datum is as follows:
+
+```
+id: (str) Task id: init | task-n
+project: (str) Name of the project the task belongs to
+description: (str) Detailed task description
+date: (str) Task publish date, used to filter out contaminated models
+level: (str) Task difficulty: easy | moderate | challenging
+```
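To make these fields concrete, a minimal loading sketch with the 🤗 `datasets` library follows. The Hub repo id, split name, and the exact `task-n` id format are assumptions here; substitute the actual values for this dataset.

```python
# Hypothetical loading sketch; the repo id and split are assumptions,
# not confirmed by this card -- substitute the real values.
from datasets import load_dataset

ds = load_dataset("bytedance-research/Web-Bench", split="train")  # assumed id/split

# Replay one project's tasks in their intended order: "init" first, then
# task-1, task-2, ... (assuming ids follow the "task-n" pattern shown above).
def order_key(row):
    return -1 if row["id"] == "init" else int(row["id"].split("-")[-1])

name = ds[0]["project"]
for row in sorted((r for r in ds if r["project"] == name), key=order_key):
    print(row["id"], row["level"], row["description"][:60])
```

Because the tasks are sequentially dependent, preserving this order matters when feeding them to an agent.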
+
+## 📘 Usage
+
+Usage instructions are available on [GitHub](https://github.com/bytedance/web-bench).