---
license: other
language:
- en
tags:
- causal-lm
- code
metrics:
- code_eval
library_name: transformers
model-index:
- name: dgtalbug/stable-code-instruct-3b
results:
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Python)
metrics:
- name: pass@1
type: pass@1
value: 32.4
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C++)
metrics:
- name: pass@1
type: pass@1
value: 30.9
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 32.1
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 32.1
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (PHP)
metrics:
- name: pass@1
type: pass@1
value: 24.2
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Rust)
metrics:
- name: pass@1
type: pass@1
value: 23.0
---
# **Stable Code Instruct 3B — Base Model**
> This repository stores an **unchanged** copy of `stabilityai/stable-code-instruct-3b`
> for use as a **base model** in future fine‑tuning projects (including Stephen).
---
## 📌 About the Model
`stable-code-instruct-3b` is a **2.7B parameter decoder-only transformer** from Stability AI, tuned for multi‑language code generation and conversational coding assistance.
It is suitable as a **starting point** for specialized code assistants,
including fine‑tuned variants with domain‑specific datasets.
**Key Features:**
- General purpose code generation across multiple programming languages.
- Instruction‑tuned for better conversational performance.
- Strong performance on [MultiPL-E](https://github.com/nuprl/MultiPL-E) benchmarks.
---
## 📊 Performance (MultiPL-E Benchmark)
| Language | pass@1 |
|--------------|--------|
| Python | 32.4% |
| C++ | 30.9% |
| Java | 32.1% |
| JavaScript | 32.1% |
| PHP | 24.2% |
| Rust | 23.0% |
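For reference, pass@1 is the probability that a single sampled completion passes the benchmark's unit tests. When `n` samples are drawn per problem and `c` of them pass, the standard unbiased pass@k estimator (popularized by the HumanEval evaluation) can be computed as sketched below; the function name is illustrative, not part of this repository:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    given n total samples of which c pass the unit tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 the estimator reduces to the fraction of passing samples:
print(round(pass_at_k(10, 3, 1), 6))  # → 0.3
```

Per-problem estimates are then averaged over the benchmark to produce the percentages in the table above.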
---
## 🚀 Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dgtalbug/stable-code-instruct-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).cuda().eval()  # requires a CUDA-capable GPU (roughly 6 GB VRAM in bfloat16)

# Build a chat-formatted prompt from system + user messages
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function to reverse a string."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

tokens = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.5,
    top_p=0.95,
    top_k=100,
    do_sample=True,
    use_cache=True,
)

# Decode only the newly generated tokens, skipping the prompt
output = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True)[0]
print(output)
```
---
## 📜 License
This model follows the **[Stability AI Community License](https://huggingface.co/stabilityai/stable-code-instruct-3b/blob/main/LICENSE.md)**.
For commercial use, refer to [Stability AI licensing terms](https://stability.ai/license).
---
## 📌 Note for Fine‑Tuning
This repository is **not modified** — it is kept as a **clean base model** for derivative works.
Fine‑tuned versions (e.g., Stephen) will be released in **separate repositories**.