Instructions for using Stopwolf/Tito-7B-slerp with libraries, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Stopwolf/Tito-7B-slerp with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Stopwolf/Tito-7B-slerp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Stopwolf/Tito-7B-slerp")
model = AutoModelForCausalLM.from_pretrained("Stopwolf/Tito-7B-slerp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
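For a 7B model it is often worth loading the weights in half precision with automatic device placement. A minimal sketch, assuming a CUDA-capable GPU and `accelerate` installed (the checkpoint is stored in bfloat16, per the merge config below):

```python
# Optional: load in bfloat16 with automatic device placement
# (assumes a GPU and `pip install accelerate`; dtype matches the
# merge's bfloat16)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Stopwolf/Tito-7B-slerp")
model = AutoModelForCausalLM.from_pretrained(
    "Stopwolf/Tito-7B-slerp",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```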
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Stopwolf/Tito-7B-slerp with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Stopwolf/Tito-7B-slerp"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Stopwolf/Tito-7B-slerp",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
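Because the server exposes an OpenAI-compatible API, the same request can be made from Python with the `openai` client instead of curl; a minimal sketch, assuming `pip install openai` and the default port 8000:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; the api_key is unused but required
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Stopwolf/Tito-7B-slerp",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```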
- SGLang
How to use Stopwolf/Tito-7B-slerp with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Stopwolf/Tito-7B-slerp" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Stopwolf/Tito-7B-slerp",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
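The SGLang server speaks the same OpenAI-compatible protocol, so the curl call above can equally be made from Python; this sketch streams the response token by token (assumes `pip install openai` and the port 30000 used above):

```python
from openai import OpenAI

# Stream tokens from the SGLang server (OpenAI-compatible, port 30000)
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
stream = client.chat.completions.create(
    model="Stopwolf/Tito-7B-slerp",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```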
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Stopwolf/Tito-7B-slerp" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Stopwolf/Tito-7B-slerp",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use Stopwolf/Tito-7B-slerp with Docker Model Runner:
```sh
docker model run hf.co/Stopwolf/Tito-7B-slerp
```
# Tito-7B-slerp
Tito-7B-slerp is a merge of the following models using mergekit:

- gordicaleksa/YugoGPT
- mlabonne/AlphaMonarch-7B
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: gordicaleksa/YugoGPT
        layer_range: [0, 32]
      - model: mlabonne/AlphaMonarch-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.6
dtype: bfloat16
```
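For intuition about `merge_method: slerp`: rather than averaging weights linearly, spherical linear interpolation follows the arc between the two models' weight tensors, with the `t` schedule above controlling how far to move between them (separate schedules for self-attention and MLP tensors, 0.6 everywhere else). A minimal, illustrative sketch of the formula; this is not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Illustrative only: mergekit applies a more robust version of this
    tensor-by-tensor, picking t per tensor from the config's schedule.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        merged = (1 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
               + (torch.sin(t * omega) / sin_omega) * b_flat
    return merged.reshape(a.shape).to(a.dtype)
```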
## Results
Evaluations on the Serbian LLM eval suite (that is, performance in and knowledge of Serbian):
| Model | ARC-E | ARC-C | Hellaswag | BoolQ | Winogrande | OpenbookQA | PiQA | NQ Open | TriviaQA | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| Zamfir-7B | 51.85 | 32.25 | 46.03 | 75.59 | 62.59 | 26.00 | 66.81 | 16.09 | 36.11 | 45.92 |
| Mustra-7B | 52.95 | 33.70 | 45.89 | 77.55 | 64.17 | 30.60 | 67.25 | 15.40 | 34.84 | 46.93 |
| Tito-7B | 55.43 | 34.73 | 48.19 | 77.37 | 65.27 | 30.00 | 67.30 | 16.70 | 35.38 | 47.82 |
| YugoGPT | 57.79 | 34.73 | 49.89 | 69.45 | 64.56 | 28.20 | 72.03 | 15.82 | 36.14 | 47.62 |
All benchmarks here were run 0-shot, with the exception of NQ Open and TriviaQA, which were run 5-shot to be comparable to the Mistral paper.
If we try to replicate the Open LLM Leaderboard results on the available Serbian datasets (running the appropriate number of shots instead of 0), we get:
| Model | ARC | Hellaswag | Winogrande | TruthfulQA | Avg. |
|---|---|---|---|---|---|
| Tito-7B | 47.27 | - | 69.93 | 57.48 | 58.23 |
| Perucac-7B | 49.74 | - | 71.98 | 56.03 | 59.25 |
| YugoGPT | 44.03 | - | 70.64 | 48.06 | 54.24 |
| Llama3-8B | 42.24 | - | 61.25 | 51.08 | 51.52 |
| SambaLingo | 37.88 | - | 61.48 | 47.23 | 48.86 |
Note that YugoGPT, Llama3 and SambaLingo are all base models, unlike Tito and Perucac.
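These shot counts can be reproduced with EleutherAI's lm-evaluation-harness; a minimal sketch, assuming `pip install lm-eval`, and noting that the task name below is an illustrative placeholder (the Serbian suite ships its own task definitions, which are not named here):

```python
import lm_eval

# Placeholder task/shot pairing for illustration; substitute the Serbian
# task names from the eval suite actually used above
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Stopwolf/Tito-7B-slerp,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])
```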
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Tito | YugoGPT |
|---|---|---|
| Avg. | 70.13 | 57.34 |
| AI2 Reasoning Challenge (25-Shot) | 68.09 | 58.10 |
| HellaSwag (10-Shot) | 86.38 | 81.44 |
| MMLU (5-Shot) | 64.01 | 60.68 |
| TruthfulQA (0-shot) | 57.01 | 36.60 |
| Winogrande (5-shot) | 81.69 | 76.56 |
| GSM8k (5-shot) | 63.61 | 30.70 |