---
license: mit
datasets:
- microsoft/rStar-Coder
- deepseek-ai/DeepSeek-ProverBench
language:
- en
metrics:
- accuracy
- bertscore
- character
- code_eval
base_model:
- deepseek-ai/deepseek-coder-6.7b-instruct
- stabilityai/stablecode-completion-alpha-3b-4k
tags:
- code
---
# Model Card for Lara — Hybrid Code Model (DeepSeek + StableCode)

Lara is a hybrid fine‑tuned **code generation & completion model** built from **DeepSeek‑Coder 6.7B** and **StableCode Alpha 3B‑4K**. It is designed for **general‑purpose programming**, from quick completions to multi‑file scaffolding, and can optionally produce **Chandler Bing‑style sarcastic commentary** for developer amusement.

MIT licensed — free to use, modify, and redistribute.

---

## Model Details

- **Developed by:** [@dgtalbug](https://huggingface.co/dgtalbug)
- **Funded by:** Self‑funded
- **Shared by:** [@dgtalbug](https://huggingface.co/dgtalbug)
- **Model type:** Causal Language Model for code generation & completion
- **Language(s):** English (primary), multilingual code comments possible
- **License:** MIT
- **Finetuned from:**  
  - [`deepseek-ai/deepseek-coder-6.7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)  
  - [`stabilityai/stablecode-completion-alpha-3b-4k`](https://huggingface.co/stabilityai/stablecode-completion-alpha-3b-4k)

---

## Model Sources
- **Repository:** [https://huggingface.co/dgtalbug/lara](https://huggingface.co/dgtalbug/lara)
- **Paper:** N/A (based on open‑source models)
- **Demo:** Coming soon

---

## Uses

### Direct Use
- Code completion in IDEs
- Script & function generation
- Annotated code examples for learning
- Humorous coding commentary (optional, via prompt; see the sketch after this list)
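
A minimal sketch of the optional humor mode. The card does not specify a fixed trigger phrase, so the prompt wording below is illustrative; it reuses the `tokenizer` and `model` loaded in the Getting Started section below.

```python
# Hypothetical sarcasm-mode prompt; the mode is prompt-driven, so any
# natural-language instruction along these lines should work.
prompt = (
    "Write a Python function that parses a CSV file, and add "
    "Chandler Bing-style sarcastic comments throughout."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```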

### Downstream Use
- Fine‑tune for a single language (e.g., a Java‑only bot; see the sketch after this list)
- Integrate into AI coding assistants
- Educational & training platforms
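
A minimal LoRA fine‑tuning sketch for the single‑language case, assuming the `peft` library is installed; the target module names assume a LLaMA‑style attention layout, and the hyperparameters are illustrative, not tuned:

```python
# Sketch: adapt Lara to one language (e.g. Java) with LoRA adapters.
# Assumes `pip install transformers peft`; all settings are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("dgtalbug/lara", torch_dtype="auto")
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumes LLaMA-style attention names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
# ...then train with transformers.Trainer on a Java-only code corpus...
```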

### Out‑of‑Scope Use
- Malicious code generation
- Non‑code general chat
- Security‑critical code without review

---

## Bias, Risks, and Limitations
- May hallucinate APIs or syntax  
- Humor mode may inject irrelevant lines  
- Biases from public code sources may appear in output

### Recommendations
- Always review generated code before deployment
- Use sarcasm mode in casual or learning contexts, not production
- Test generated code in sandbox environments (a rough sketch follows)
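
One rough way to sandbox generated snippets is to run them in a separate interpreter process with a timeout. This only guards against hangs and keeps the parent process clean; use a container or VM for real isolation:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run a generated snippet in a separate, isolated Python process."""
    # -I is isolated mode: ignores environment variables and user site-packages.
    return subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )

result = run_sandboxed("print(sum(range(10)))")
print(result.stdout.strip())  # -> 45
```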

---

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dgtalbug/lara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Write a Python function to reverse a string"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
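
Generation is greedy by default. For more varied completions you can enable sampling; the values below are illustrative, not tuned for this model:

```python
# Sampling variant (illustrative hyperparameters, not tuned for Lara):
outputs = model.generate(**inputs, max_new_tokens=150,
                         do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```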