# LM Harness Evaluation

The evaluation harness from EleutherAI is integrated as a submodule. We use a fork hosted on [HF's GitHub](https://github.com/huggingface/lm-evaluation-harness).
To initialize the submodule, run:
```bash
git submodule init
git submodule update
```
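
Equivalently, both steps collapse into one standard git command (the submodule path here matches the `lm-evaluation-harness` directory used below):
```bash
# Initialize and fetch the submodule in a single step
git submodule update --init lm-evaluation-harness
```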

Make sure you have installed the requirements of `lm-evaluation-harness`:
```bash
cd lm-evaluation-harness
pip install -r requirements.txt
```
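
If you want to keep these dependencies isolated, one option (a sketch, not a project requirement) is to install them into a dedicated virtual environment first:
```bash
# Optional: install the harness requirements into a fresh virtual environment
python -m venv .venv
source .venv/bin/activate
cd lm-evaluation-harness
pip install -r requirements.txt
```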

To launch an evaluation, run the command below. `--provide_description` controls whether the task description is prepended to each prompt, and `--num_fewshot` sets the number of priming (few-shot) pairs:
```bash
python lm-evaluation-harness/main.py \
    --model gpt2 \
    --model_args pretrained=gpt2-xl \
    --tasks cola,mrpc,rte,qnli,qqp,sst,boolq,cb,copa,multirc,record,wic,wsc,coqa,drop,lambada,lambada_cloze,piqa,pubmedqa,sciq \
    --provide_description \
    --num_fewshot 3 \
    --batch_size 2 \
    --output_path eval-gpt2-xl
```
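
Before launching the full task suite, it can help to sanity-check the setup with a much smaller run. The sketch below reuses the flags above and assumes the harness's `--limit` flag, which evaluates only the first N examples per task (useful for debugging, not for reporting results); the output path is just an example:
```bash
# Hypothetical smoke test: small model, one task, a handful of examples
python lm-evaluation-harness/main.py \
    --model gpt2 \
    --model_args pretrained=gpt2 \
    --tasks lambada \
    --num_fewshot 0 \
    --batch_size 2 \
    --limit 8 \
    --output_path eval-gpt2-smoke
```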

Please note:
- As of now, `lm-evaluation-harness` only supports evaluation on a single GPU; see the sketch below for pinning the run to one device.
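
On a multi-GPU machine, you can pin the run to a specific device with the standard `CUDA_VISIBLE_DEVICES` environment variable (a generic CUDA mechanism, not harness-specific; the output path is again just an example):
```bash
# Restrict the harness to a single GPU (here, GPU 0)
CUDA_VISIBLE_DEVICES=0 python lm-evaluation-harness/main.py \
    --model gpt2 \
    --model_args pretrained=gpt2-xl \
    --tasks boolq \
    --num_fewshot 3 \
    --batch_size 2 \
    --output_path eval-gpt2-xl-boolq
```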