clefourrier (HF Staff) committed
Commit f561a5a · 1 Parent(s): 8a7785b

Update README.md

Files changed (1): README.md (+29 −4)
README.md CHANGED
@@ -1,10 +1,35 @@
 ---
 title: README
-emoji: 😻
-colorFrom: purple
-colorTo: pink
+emoji: 🔥
+colorFrom: yellow
+colorTo: purple
 sdk: static
 pinned: false
 ---
 
-Edit this `README.md` markdown file to author your organization card.
+# What is this?
+This repository is a demo leaderboard template.
+You can copy the leaderboard space and the two datasets (results and requests) to your org to get started with your own leaderboard!
+
+The space does 3 things:
+- stores user submissions and sends them to the `requests` dataset
+- reads the submissions depending on their status/date of creation
+- reads the results (results of running evaluations should be sent to `results`) and displays them in a leaderboard
+
+You should use this leaderboard if you have your own backend and plan to run the evaluations elsewhere.
+
+# Getting started
+## Defining environment variables
+To get started on your own leaderboard, you will need to edit 2 files:
+- `src/envs.py` to define your own environment variables (like the name of the org this has been copied to)
+- `src/about.py` with the tasks and the number of `few_shots` you want for your tasks
+
+## Setting up fake results to initialize the leaderboard
+Once this is done, you need to edit the "fake results" file to fit the format of your tasks: in the `results` sub-dictionary, replace `task_name1` and `metric_name` with the correct values you defined in `Tasks` above.
+```json
+"results": {
+    "task_name1": {
+        "metric_name": 0
+    }
+}
+```