zhoujun committed
Commit a28a79c · verified · 1 Parent(s): a08fce4

Update README.md

Files changed (1)
  1. README.md +16 -0
README.md CHANGED
@@ -29,4 +29,20 @@ The leaderboard is evaluated with our evaluation [code](https://github.com/LLM36
  | | LiveBench♡ | 18.57 | 19.76 | 12.64 | 15.20 | 34.30 | 28.78 | 28.33 |
  | | **Average Score** | **43.29** | **33.76** | **35.42** | **33.97** | **54.24** | **47.53** | **46.25** |
 
+
+ Example usage:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_id = "LLM360/Guru-32B"  # Hugging Face model repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
+
+ # Apply the chat template, then sample a response (do_sample=True enables temperature/top_p).
+ messages = [{"role": "user", "content": "What is reinforcement learning?"}]
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ outputs = model.generate(prompt, max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.7)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+
  Please refer to the [paper](https://arxiv.org/abs/2506.14965) for more details.
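Note that `outputs[0]` contains the prompt tokens followed by the generated tokens, so the decoded string above includes the formatted prompt as well as the model's answer. A minimal sketch, assuming the `prompt`, `outputs`, and `tokenizer` variables from the example above, of decoding only the newly generated tokens:

```python
# Minimal sketch: slice off the prompt tokens so only the model's reply is decoded.
# Assumes `prompt`, `outputs`, and `tokenizer` from the example above.
new_tokens = outputs[0][prompt.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```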