PeterKruger committed
Commit a5bd1a1 · verified · 1 Parent(s): 0f654e5

Update README.md

Files changed (1): README.md +3 -2
README.md CHANGED
@@ -14,7 +14,7 @@ short_description: Many-Model-As-Judge LLM Benchmark
 
 # AutoBench 1.0 Demo
 
- This Space runs a Many-Model-As-Judge LLM benchmark to compare different language models using Hugging Face's Inference API. This is a simplified version of Autobench 1.0 which relies on multiple inference providers to manage request load and a wider range of models (Anthropic, Grok, Nebius, OpenAI, Together AI, Vertex AI). For more advanced use, please use refer to the AutoBench 1.0 repository.
+ This Space runs a Many-Model-As-Judge LLM benchmark to compare different language models using Hugging Face's Inference API. It is a simplified version of AutoBench 1.0, which relies on multiple inference providers (Anthropic, Grok, Nebius, OpenAI, Together AI, Vertex AI) to manage request load and cover a wider range of models. For more advanced use, please refer to the AutoBench 1.0 repository.
 
 ## Features
 
@@ -51,4 +51,5 @@ The benchmark supports any model available through Hugging Face's Inference API,
 
 ## Note
 
- Running a full benchmark might take some time depending on the number of models and iterations. Make sure you have sufficient Hugging Face credits to run the benchmark, especially when employing numerous models for long iteration duration.
+ - To follow the question generation, question ranking, answer generation, and answer ranking process in real time, check the container logs (above, to the right of the "Running" button).
+ - Running a full benchmark can take a while depending on the number of models and iterations. Make sure you have sufficient Hugging Face credits, especially when running many models over many iterations.
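For context on what the Space is doing while you watch the logs, here is a minimal sketch of one Many-Model-As-Judge round. It assumes the Space calls models through `huggingface_hub`'s `InferenceClient`; the model IDs, prompts, and 1-5 scoring rubric below are illustrative placeholders, not code from the AutoBench repository.

```python
# Minimal Many-Model-As-Judge sketch (illustrative; not the AutoBench source).
# Assumes huggingface_hub is installed and HF_TOKEN holds a valid token.
from huggingface_hub import InferenceClient

HF_TOKEN = "hf_..."  # your Hugging Face token; every call consumes Inference API credits
MODELS = [           # placeholder IDs; any chat model on the Inference API works
    "meta-llama/Llama-3.1-8B-Instruct",
    "mistralai/Mistral-7B-Instruct-v0.3",
]

def chat(model_id: str, prompt: str) -> str:
    """Send one chat request to a model through the HF Inference API."""
    client = InferenceClient(model=model_id, token=HF_TOKEN)
    out = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=512,
    )
    return out.choices[0].message.content

# 1. Question generation: one model writes the question for this round.
question = chat(MODELS[0], "Write one challenging general-knowledge question.")

# 2. Answer generation: every model answers the same question.
answers = {m: chat(m, question) for m in MODELS}

# 3. Answer ranking: every model judges every answer (hypothetical rubric).
for judge in MODELS:
    for author, answer in answers.items():
        verdict = chat(
            judge,
            f"Question: {question}\nAnswer: {answer}\n"
            "Rate the answer from 1 (poor) to 5 (excellent). Reply with the number only.",
        )
        print(f"{judge} -> {author}: {verdict.strip()}")
```

Each generation and ranking step in the container logs corresponds to a batch of calls like these, so credit usage grows roughly with the number of models squared times the number of iterations.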