Xenova (HF Staff) committed
Commit 259102c · verified · 1 Parent(s): 44dedce

Update README.md

Files changed (1)
  1. README.md +44 -0
README.md CHANGED
@@ -11,6 +11,50 @@ tags: []
 
 ## Model Details
 
+### Code to generate
+
+```py
+import torch
+from transformers import LlamaForCausalLM, LlamaConfig, AutoTokenizer
+
+# Set seed for reproducibility
+torch.manual_seed(0)
+
+# Initializing the configuration
+configuration = LlamaConfig(
+    head_dim=16,
+    hidden_size=32,
+    intermediate_size=64,
+    max_position_embeddings=131072,
+    model_type="llama",
+    num_attention_heads=2,
+    num_hidden_layers=1,
+    num_key_value_heads=2,
+    rms_norm_eps=1e-05,
+    rope_scaling={
+        "factor": 32.0,
+        "high_freq_factor": 4.0,
+        "low_freq_factor": 1.0,
+        "original_max_position_embeddings": 8192,
+        "rope_type": "llama3"
+    },
+    rope_theta=500000.0,
+    tie_word_embeddings=True,
+    vocab_size=128256,
+)
+
+# Initializing a model from the configuration
+model = LlamaForCausalLM(configuration)
+
+# Re-use tokenizer
+tokenizer = AutoTokenizer.from_pretrained("Xenova/Llama-3.2-Tokenizer")
+
+# Upload to the HF Hub
+model_id = 'onnx-community/tiny-random-LlamaForCausalLM-ONNX'
+model.push_to_hub(model_id)
+tokenizer.push_to_hub(model_id)
+```
+
 ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
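
For reference, a minimal sketch of loading the pushed checkpoint back with `transformers` (not part of this commit; it assumes the PyTorch weights uploaded by `push_to_hub` are still present in the repo):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id used in the generation script above
model_id = "onnx-community/tiny-random-LlamaForCausalLM-ONNX"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Quick smoke test; the weights are random, so the generated text is meaningless
inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```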