---
library_name: transformers
pipeline_tag: text-generation
---
# Wasm-Coder-8B-Instruct-V1
**Wasm-Coder-8B-Instruct-V1** is an 8-billion-parameter instruction-tuned language model developed by [wasmdashai](https://huggingface.co/wasmdashai), specialized in WebAssembly development, code generation, and technical reasoning. It is designed to help developers working on edge computing, browser-based runtimes, and low-level systems programming.
---
## 🚀 Introduction
`Wasm-Coder-8B-Instruct-V1` is part of the Wasm-Coder family—models specifically tailored for tasks involving WebAssembly, Rust, C/C++, and embedded systems programming. The model has been instruction-tuned on a diverse dataset combining code, documentation, compiler logs, and structured code reasoning tasks.
### Key Features:
* Strong performance in **code synthesis**, **bug fixing**, and **code explanation**, especially for Rust and WebAssembly projects.
* Efficient for **edge devices**, **browsers**, and **serverless runtimes**.
* Based on a powerful transformer architecture with performance enhancements such as RoPE and SwiGLU.
* Trained with instruction-following datasets for natural conversations and multi-turn reasoning.
* Supports **long-context processing** (up to 32,768 tokens) with optional RoPE (YaRN) scaling.
---
## 🧠 Model Details
* **Architecture**: Decoder-only transformer
* **Parameters**: 8B
* **Training**: Pretrained + Instruction fine-tuning
* **Supported Context Length**: 32,768 tokens
* **Specialization**: WebAssembly, Rust, C/C++, Systems Programming
* **Components**:
* RoPE (Rotary Positional Embeddings)
* SwiGLU activation
* RMSNorm
* QKV Attention Bias
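The SwiGLU and RMSNorm components above follow standard published formulations. As a rough illustration (pure Python for clarity; the model itself uses optimized tensor kernels, and this is not its actual implementation):

```python
import math

def silu(x):
    # SiLU / swish activation: x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def swiglu(x, W, V):
    # SwiGLU: elementwise product of a SiLU-gated branch (x @ W)
    # and a plain linear branch (x @ V)
    gate = [silu(sum(xi * wij for xi, wij in zip(x, col))) for col in zip(*W)]
    val = [sum(xi * vij for xi, vij in zip(x, col)) for col in zip(*V)]
    return [g * v for g, v in zip(gate, val)]

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale by the root-mean-square of the vector, then
    # apply a learned per-channel gain (no mean subtraction, unlike LayerNorm)
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * w for v, w in zip(x, weight)]
```

RMSNorm drops LayerNorm's mean-centering, which saves computation while normalizing activation scale; SwiGLU's learned gate typically outperforms plain ReLU/GELU feed-forward blocks.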
---
## 💻 Quickstart
Install dependencies:
```bash
pip install --upgrade transformers
```
Example code to load and run the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "wasmdashai/Wasm-Coder-8B-Instruct-V1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place weights on available GPU(s)/CPU automatically
)

prompt = "Write a Rust function that compiles to WebAssembly and adds two numbers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
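The decoded text usually contains the prompt and surrounding prose as well as the generated code. A small post-processing helper (illustrative only, not part of the model's API) can pull the first markdown-fenced code block out of a reply:

```python
import re

def extract_code(reply, lang="rust"):
    # return the body of the first ```lang ... ``` fence, or None if absent
    m = re.search(rf"```{lang}\n(.*?)```", reply, flags=re.DOTALL)
    return m.group(1).strip() if m else None

sample = "Here you go:\n```rust\nfn add(a: i32, b: i32) -> i32 { a + b }\n```\nDone."
print(extract_code(sample))  # → fn add(a: i32, b: i32) -> i32 { a + b }
```

This is handy when piping model output into a compiler or test harness, where stray prose would cause build errors.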
---
## 📚 Long-Context Support
To process long inputs (e.g., full source files or compiler traces), use **YaRN-based RoPE scaling**:
Add this to `config.json`:
```json
{
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
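Under the `transformers` convention, the YaRN `factor` multiplies the original pre-trained window, so the configuration above extends the usable context to roughly four times the base length:

```python
# rope_scaling values copied from the config fragment above
rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

# effective window ≈ factor × original window
effective_context = int(rope_scaling["factor"] * rope_scaling["original_max_position_embeddings"])
print(effective_context)  # → 131072
```

As an alternative to editing `config.json`, `transformers` generally accepts config overrides such as `rope_scaling=...` as keyword arguments to `from_pretrained`. Note that YaRN scaling can slightly degrade quality on short inputs, so enable it only when you actually need the longer window.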
---
## 🔧 Use Cases
* WebAssembly code generation and debugging
* Rust/C++ code explanation and transformation
* Embedded/IoT code support
* Smart contract logic for blockchain environments using Wasm
* Code agents and assistants running in browsers
---
## 📬 Contact
📧 For questions, collaborations, or commercial licensing:
**[[email protected]](mailto:[email protected])**
---