cjpais committed
Commit 5777dda · verified · 1 Parent(s): 884f5ee

Update README.md

Files changed (1)
1. README.md +73 -3

README.md CHANGED
@@ -1,3 +1,73 @@
---
license: apache-2.0
tags:
- llamafile
---

# LocalScore - Local LLM Benchmark

**LocalScore** is an open-source tool that both benchmarks how fast Large Language Models (LLMs) run on your specific hardware and serves as a repository for these results. We created LocalScore to provide a simple, portable way to evaluate computer performance across various LLMs while making it easy to share and browse hardware performance data.

We believe strongly in the power of local AI systems, especially as smaller models become more capable. We also expect the hardware needed to run these models to become more powerful and cheaper. We hope this will create an opportunity for accessible, private AI systems, and that LocalScore will help you navigate it.

Check out the website: https://localscore.ai

This repo contains the 'official models' for LocalScore, which will get you and your GPU on the leaderboard if you choose to submit your results.

|                       | Tiny      | Small     | Medium   |
|-----------------------|----------:|----------:|---------:|
| # Params              | 1B        | 8B        | 14B      |
| Model Family          | Llama 3.2 | Llama 3.1 | Qwen 2.5 |
| Quantization          | Q4_K_M    | Q4_K_M    | Q4_K_M   |
| Approx. VRAM Required | 2 GB      | 6 GB      | 10 GB    |

To run LocalScore, you can download any of the models from this repo:

### Linux

```
wget https://huggingface.co/Mozilla/LocalScore/resolve/main/localscore-tiny-1b
chmod +x localscore-tiny-1b
./localscore-tiny-1b
```

### Windows

1. Download [localscore-tiny-1b](https://huggingface.co/Mozilla/LocalScore/resolve/main/localscore-tiny-1b)
2. Change the filename to `localscore-tiny-1b.exe`
3. Open cmd.exe and run `localscore-tiny-1b.exe`

## What is a LocalScore?

A LocalScore is a measure of three key metrics that matter for local LLM performance:

1. **Prompt Processing Speed**: How quickly your system processes input text (tokens per second)
2. **Generation Speed**: How fast your system generates new text (tokens per second)
3. **Time to First Token**: The latency before the first response appears (milliseconds)

These metrics are combined into a single LocalScore, which gives you a straightforward way to compare different hardware configurations. A score of 1,000 is excellent, 250 is passable, and below 100 will likely mean a poor user experience in some regard.

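This README does not spell out the exact combining formula, but conceptually the score rewards high throughput and low latency. Below is a minimal sketch of one way to fold the three metrics into a single number, assuming a simple geometric-mean-style combination; the `combined_score` helper and the example numbers are purely illustrative, and the official weighting and scale may differ.

```python
from math import prod

def combined_score(prompt_tps: float, gen_tps: float, ttft_ms: float) -> float:
    """Fold prompt speed, generation speed, and time to first token into one number.

    Illustrative only: higher throughput helps, lower latency helps. The real
    LocalScore applies its own calibration, so values here are not on the
    official 100/250/1,000 scale.
    """
    ttft_s = ttft_ms / 1000.0                      # latency in seconds
    factors = [prompt_tps, gen_tps, 1.0 / ttft_s]  # invert latency so bigger is better
    return prod(factors) ** (1.0 / len(factors))   # geometric mean of the three factors

# Hypothetical example: 900 prompt tok/s, 45 generation tok/s, 300 ms to first token
print(f"{combined_score(900, 45, 300):.1f}")
```

A geometric-mean-style combination is a natural sketch here because it penalizes an unbalanced configuration (for example, fast generation but very slow prompt processing) more than a plain average would.
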
Under the hood, LocalScore leverages Llamafile to ensure portability across different systems, making benchmarking accessible regardless of your setup.

### The Tests

The tests were designed to provide a realistic picture of how models will perform in everyday use. Instead of testing raw prompt processing and generation speeds in isolation, we wanted to emulate the kinds of tasks that users will actually be doing with these models. Below is a list of the tests we run and some of the use cases they are meant to emulate.

| Test Name     | Prompt Tokens | Generated Tokens | Sample Use Cases                                                                   |
|---------------|---------------|------------------|------------------------------------------------------------------------------------|
| pp1024+tg16   | 1024          | 16               | Classification, sentiment analysis, keyword extraction.                            |
| pp4096+tg256  | 4096          | 256              | Long document Q&A, RAG, short summary of extensive text.                           |
| pp2048+tg256  | 2048          | 256              | Article summarization, contextual paragraph generation.                            |
| pp2048+tg768  | 2048          | 768              | Drafting detailed replies, multi-paragraph generation, content sections.           |
| pp1024+tg1024 | 1024          | 1024             | Balanced Q&A, content drafting, code generation based on long sample.              |
| pp1280+tg3072 | 1280          | 3072             | Complex reasoning, chain-of-thought, long-form creative writing, code generation.  |
| pp384+tg1152  | 384           | 1152             | Prompt expansion, explanation generation, creative writing, code generation.       |
| pp64+tg1024   | 64            | 1024             | Short prompt creative generation (poetry/story), Q&A, code generation.             |
| pp16+tg1536   | 16            | 1536             | Creative text writing/storytelling, Q&A, code generation.                          |

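Each test name encodes its shape: `ppN` means N prompt tokens are processed and `tgM` means M tokens are generated. As a rough back-of-the-envelope sketch (not the actual benchmark harness, which lives in the llamafile CLI linked below), you can estimate how long each test takes on your hardware from measured prompt and generation speeds; the speeds used here are hypothetical.

```python
# (prompt tokens, generated tokens) for each test in the table above
TESTS = {
    "pp1024+tg16":   (1024, 16),
    "pp4096+tg256":  (4096, 256),
    "pp2048+tg256":  (2048, 256),
    "pp2048+tg768":  (2048, 768),
    "pp1024+tg1024": (1024, 1024),
    "pp1280+tg3072": (1280, 3072),
    "pp384+tg1152":  (384, 1152),
    "pp64+tg1024":   (64, 1024),
    "pp16+tg1536":   (16, 1536),
}

def estimated_seconds(pp: int, tg: int, prompt_tps: float, gen_tps: float) -> float:
    """Rough wall-clock estimate: prompt processing time plus generation time."""
    return pp / prompt_tps + tg / gen_tps

# Hypothetical speeds: 900 prompt tok/s and 45 generation tok/s
for name, (pp, tg) in TESTS.items():
    print(f"{name:>14}: ~{estimated_seconds(pp, tg, 900, 45):5.1f} s")
```

Note how the generation-heavy tests dominate total runtime whenever generation speed is much lower than prompt processing speed, which is why the suite mixes prompt-heavy and generation-heavy shapes.
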
For more, check out:
- Website: https://localscore.ai
- Demo video: https://youtu.be/De6pA1bQsHU
- Blog post: https://localscore.ai/blog
- CLI GitHub: https://github.com/Mozilla-Ocho/llamafile/tree/main/localscore
- Website GitHub: https://github.com/cjpais/localscore