Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deepseek-coder-33b-base - bnb 8bits
- Model creator: https://huggingface.co/deepseek-ai/
- Original model: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/
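This quant stores the weights already converted to bitsandbytes 8-bit format, so a plain `from_pretrained` call is normally all that is needed. Below is a minimal, untested sketch; the repository ID shown is a placeholder to replace with this repo's actual ID, and `bitsandbytes` plus `accelerate` must be installed with a CUDA GPU available.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder ID: substitute the actual Hugging Face ID of this 8-bit quant
repo_id = "RichardErkhov/deepseek-coder-33b-base-bnb-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
# The bitsandbytes quantization config is stored in the checkpoint's
# config.json, so no BitsAndBytesConfig has to be passed at load time
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", trust_remote_code=True)
```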
Original model description:
---
license: other
license_name: deepseek-license
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder

Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a 16K window size and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.

- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on the HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A 16K window size and a fill-in-the-blank task support project-level code completion and infilling.
### 2. Model Summary

deepseek-coder-33b-base is a 33B-parameter model with Grouped-Query Attention, trained on 2 trillion tokens.

- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
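As a rough sizing guide: 33B parameters at 8-bit precision take about 33 GB for the weights alone (one byte per parameter), versus roughly 66 GB in bfloat16, which is why the 8-bit quant above should fit on a single 40–48 GB GPU while the full-precision weights generally cannot.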
### 3. How to Use

Here are some examples of how to use our model.

#### 1) Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-33b-base", trust_remote_code=True)
# bfloat16 halves the memory footprint relative to float32
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-33b-base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

input_text = "#write a quick sort algorithm"
# BatchEncoding provides .to(), not .cuda(), so move inputs to the model's device explicitly
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
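Note that `max_length` bounds the combined prompt-plus-completion token count; if you want to bound only the newly generated text, `max_new_tokens` (used in the repository-level example below) is usually the more predictable choice.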
#### 2) Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-33b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-33b-base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

input_text = """<|fim▁begin|>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<|fim▁hole|>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
# slice off the prompt by token count rather than by character count, since
# skip_special_tokens drops the FIM markers and would skew a string offset
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
```
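In this fill-in-the-middle format, the text between `<|fim▁begin|>` and `<|fim▁hole|>` is the prefix, the text between `<|fim▁hole|>` and `<|fim▁end|>` is the suffix, and the model generates only the missing middle; here it should produce the loop over the remaining elements, e.g. something like `for i in range(1, len(arr)):`.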
#### 3) Repository Level Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-33b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-33b-base", trust_remote_code=True).cuda()

input_text = """#utils.py
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

def load_data():
    iris = datasets.load_iris()
    X = iris.data
    y = iris.target

    # Standardize the data
    scaler = StandardScaler()
    X = scaler.fit_transform(X)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

    # Convert numpy data to PyTorch tensors
    X_train = torch.tensor(X_train, dtype=torch.float32)
    X_test = torch.tensor(X_test, dtype=torch.float32)
    y_train = torch.tensor(y_train, dtype=torch.int64)
    y_test = torch.tensor(y_test, dtype=torch.int64)

    return X_train, X_test, y_train, y_test

def evaluate_predictions(y_test, y_pred):
    return accuracy_score(y_test, y_pred)

#model.py
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

class IrisClassifier(nn.Module):
    def __init__(self):
        super(IrisClassifier, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(4, 16),
            nn.ReLU(),
            nn.Linear(16, 3)
        )

    def forward(self, x):
        return self.fc(x)

    def train_model(self, X_train, y_train, epochs, lr, batch_size):
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.Adam(self.parameters(), lr=lr)

        # Create DataLoader for batches
        dataset = TensorDataset(X_train, y_train)
        dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

        for epoch in range(epochs):
            for batch_X, batch_y in dataloader:
                optimizer.zero_grad()
                outputs = self(batch_X)
                loss = criterion(outputs, batch_y)
                loss.backward()
                optimizer.step()

    def predict(self, X_test):
        with torch.no_grad():
            outputs = self(X_test)
            _, predicted = outputs.max(1)
        return predicted.numpy()

#main.py
from utils import load_data, evaluate_predictions
from model import IrisClassifier as Classifier

def main():
    # Model training and evaluation
"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0]))
```
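There is no special API for repository-level completion: the "repository" is simply several files concatenated into one prompt, each introduced by a `#filename` comment, which fits well within the model's 16K-token window. The model is then expected to finish `main()` using the `load_data`, `evaluate_predictions`, and `IrisClassifier` definitions from the earlier files.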
### 4. License

This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.

See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).