---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
  - text: Hello!
    example_title: Hello world
    group: Python
base_model:
- tencent/Hunyuan-7B-Instruct
---

This tiny model is for debugging only. It is randomly initialized, using a config adapted from [tencent/Hunyuan-7B-Instruct](https://huggingface.co/tencent/Hunyuan-7B-Instruct).

### Example usage:

```python
import torch
from transformers import pipeline

model_id = "tiny-random/hunyuan"
# Chat-style input; the text-generation pipeline applies the chat template automatically.
messages = [
    {"role": "user", "content": "hi"},
]
pipe = pipeline(
    "text-generation",
    model_id,
    device="cuda",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
print(pipe(messages, max_new_tokens=32))
```
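
If you prefer to call the model directly instead of going through `pipeline`, here is a minimal sketch under the same assumptions (the repo's tokenizer ships a chat template, since it was saved from the source model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiny-random/hunyuan"  # same repo as above
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).to(device)

# Build the prompt with the tokenizer's chat template, then generate.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "hi"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(device)
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```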

### Code to create this repo:

```python
import json

import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoProcessor,
    GenerationConfig,
    set_seed,
)

source_model_id = "tencent/Hunyuan-7B-Instruct"
save_folder = "/tmp/tiny-random/hunyuan"

# Reuse the tokenizer/processor from the source model unchanged.
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)

# Shrink the source config to a tiny footprint while keeping the architecture.
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config_json = json.load(f)
config_json['hidden_size'] = 16
config_json['head_dim'] = 32
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 2
config_json['num_key_value_heads'] = 1
config_json['tie_word_embeddings'] = True
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

config = AutoConfig.from_pretrained(
    save_folder,
    trust_remote_code=True,
)
print(config)

# Instantiate the model in bfloat16 from the shrunken config.
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)

# Copy the generation config from the source model if it has one.
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
    model.generation_config = GenerationConfig.from_pretrained(
        source_model_id, trust_remote_code=True,
    )

# Re-initialize every parameter with a fixed seed so the weights are reproducible.
set_seed(42)
model = model.cpu()  # CPU is more stable for random initialization across machines
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.1)
        print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
```
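
As a quick sanity check before uploading, the saved folder can be reloaded and run for one short generation (a minimal sketch; `save_folder` is the path used above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload from the local folder written by the script above.
save_folder = "/tmp/tiny-random/hunyuan"
tokenizer = AutoTokenizer.from_pretrained(save_folder, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(save_folder, trust_remote_code=True)

inputs = tokenizer("hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))  # output is gibberish: the weights are random
```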

### Printing the model:

```text
HunYuanDenseV1ForCausalLM(
  (model): HunYuanDenseV1Model(
    (embed_tokens): Embedding(128167, 16, padding_idx=127961)
    (layers): ModuleList(
      (0-1): 2 x HunYuanDenseV1DecoderLayer(
        (self_attn): HunYuanDenseV1Attention(
          (q_proj): Linear(in_features=16, out_features=64, bias=False)
          (k_proj): Linear(in_features=16, out_features=32, bias=False)
          (v_proj): Linear(in_features=16, out_features=32, bias=False)
          (o_proj): Linear(in_features=64, out_features=16, bias=False)
          (query_layernorm): HunYuanDenseV1RMSNorm((32,), eps=1e-05)
          (key_layernorm): HunYuanDenseV1RMSNorm((32,), eps=1e-05)
        )
        (mlp): HunYuanDenseV1MLP(
          (gate_proj): Linear(in_features=16, out_features=64, bias=False)
          (up_proj): Linear(in_features=16, out_features=64, bias=False)
          (down_proj): Linear(in_features=64, out_features=16, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
        (post_attention_layernorm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
      )
    )
    (norm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
    (rotary_emb): HunYuanDenseV1RotaryEmbedding()
  )
  (lm_head): Linear(in_features=16, out_features=128167, bias=False)
)
```
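
For a quick size check, the parameter count can be read off the instance built above (a sketch assuming `model` is still in scope; with `tie_word_embeddings=True` the `lm_head` reuses the embedding matrix, so it is not counted twice):

```python
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # roughly 2M, dominated by the 128167 x 16 embedding
```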