from dataclasses import dataclass
from enum import Enum


@dataclass
class Task:
    benchmark: str
    metric: str
    col_name: str


# Init: to update with your specific keys
class Tasks(Enum):
    # task_key in the json file, metric_key in the json file, name to display in the leaderboard
    task0 = Task("logiqa", "delta_abs", "LogiQA Δ")
    task1 = Task("logiqa2", "delta_abs", "LogiQA2 Δ")
    task2 = Task("lsat-ar", "delta_abs", "LSAT-ar Δ")
    task3 = Task("lsat-lr", "delta_abs", "LSAT-lr Δ")
    task4 = Task("lsat-rc", "delta_abs", "LSAT-rc Δ")

# METRICS = list(set([task.value.metric for task in Tasks]))
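# A minimal sketch (not part of the original file) of how the Tasks enum above
# might be consumed downstream; COLS and BENCHMARK_COLS are illustrative names,
# not the app's actual variables.
# COLS = [task.value.col_name for task in Tasks]             # e.g. ["LogiQA Δ", "LSAT-ar Δ", ...]
# BENCHMARK_COLS = [task.value.benchmark for task in Tasks]  # e.g. ["logiqa", "lsat-ar", ...]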
# Your leaderboard name
TITLE = """<h1 align="center" id="space-title"><code>/\/</code> Open CoT Leaderboard</h1>"""

# What does your leaderboard evaluate?
INTRODUCTION_TEXT = """
The `/\/` Open CoT Leaderboard tracks the reasoning skills of LLMs, measured as their ability to generate **effective chain-of-thought reasoning traces**.

The leaderboard reports **accuracy gains** achieved by using CoT, i.e.: _accuracy gain Δ_ = _CoT accuracy_ − _baseline accuracy_.

See the "About" tab for more details and motivation.
"""
# Which evaluations are you running? how can people reproduce what you have?
LLM_BENCHMARKS_TEXT = """
## How it works

To assess the reasoning skill of a given `model`, we carry out the following steps for each `task` (test dataset) and different CoT `regimes`. (A CoT `regime` consists of a prompt chain and the decoding parameters used to generate a reasoning trace.)

1. Let the `model` generate CoT reasoning traces for all problems in the test dataset according to `regime`.
2. Let the `model` answer the test dataset problems, and record the resulting _baseline accuracy_.
3. Let the `model` answer the test dataset problems _with the reasoning traces appended_ to the prompt, and record the resulting _CoT accuracy_.
4. Compute the _accuracy gain Δ_ = _CoT accuracy_ − _baseline accuracy_ for the given `model`, `task`, and `regime`.

Each `regime` has a different accuracy gain Δ, and the leaderboard reports the best Δ achieved by any regime.
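For illustration only, here is a minimal sketch of this bookkeeping (the `records` layout and variable names are assumptions, not the actual `cot-eval` code):

```python
# Toy records: (model, task, regime, baseline accuracy, CoT accuracy)
records = [
    ("my-org/my-model", "logiqa", "regime-a", 0.42, 0.49),
    ("my-org/my-model", "logiqa", "regime-b", 0.42, 0.45),
]

best_delta = {}
for model, task, regime, base_acc, cot_acc in records:
    delta = cot_acc - base_acc  # accuracy gain Δ for this regime
    key = (model, task)
    best_delta[key] = max(best_delta.get(key, float("-inf")), delta)

# best_delta now holds the best Δ per (model, task); here regime-a wins with ≈ 0.07
```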
## How is it different from other leaderboards?
...

## Test dataset selection (`tasks`)

## Reproducibility

To reproduce our results, check out the repository [cot-eval](https://github.com/logikon-ai/cot-eval).
"""
EVALUATION_QUEUE_TEXT = """
## Some good practices before submitting a model

### 1) Make sure you can load your model and tokenizer with `vLLM`:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="<USER>/<MODEL>")
outputs = llm.generate(prompts, sampling_params)
```
If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.

Note: make sure your model is public!

### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!
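If your checkpoint was saved with `transformers`, one way to do the conversion (a minimal sketch; `<USER>/<MODEL>` and the output directory are placeholders you'd replace) is to reload the model and re-save it with safetensors serialization:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("<USER>/<MODEL>")
tokenizer = AutoTokenizer.from_pretrained("<USER>/<MODEL>")

# safe_serialization=True writes model.safetensors instead of pytorch_model.bin
model.save_pretrained("my-model-safetensors", safe_serialization=True)
tokenizer.save_pretrained("my-model-safetensors")
```

You can then upload the converted folder to the Hub (e.g. with `huggingface_hub`'s `upload_folder`) so that the safetensors weights are used.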
### 3) Make sure your model has an open license!
This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗

### 4) Fill out your model card
When we add extra information about models to the leaderboard, it will be automatically taken from the model card.

## Is your model stuck in the pending queue?
We're populating the Open CoT Leaderboard step by step. The idea is to grow a diverse and informative sample of the LLM space. Plus, with limited compute, we're currently prioritizing models that are popular, promising, and relatively small.
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
Logikon AI Team. (2024). Open CoT Leaderboard. Retrieved from https://huggingface.co/spaces/logikon/open_cot_leaderboard
"""