How to use monsterapi/mistral_7b_HalfEpoch_DolphinCoder with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the fine-tuned adapter weights from the Hub
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base_model, "monsterapi/mistral_7b_HalfEpoch_DolphinCoder")

Model Used: mistralai/Mistral-7B-v0.1
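To run generation with the adapter loaded above, here is a minimal sketch; the tokenizer choice, prompt, and generation settings are illustrative assumptions rather than values from this card:

from transformers import AutoTokenizer

# Assumed: the adapter reuses the base model's tokenizer
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Write a Python function that returns the n-th Fibonacci number."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Generation settings are illustrative; adjust max_new_tokens and sampling as needed
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))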
Dataset: cognitivecomputations/dolphin-coder
The Dolphin-Coder dataset is a high-quality collection of 100,000+ coding questions and responses, well suited to supervised fine-tuning (SFT) and to teaching language models to improve at coding tasks.
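A minimal sketch for loading and inspecting the dataset with the datasets library; the split name is an assumption about the dataset layout, not taken from this card:

from datasets import load_dataset

# Load the Dolphin-Coder dataset from the Hugging Face Hub (assuming a "train" split)
ds = load_dataset("cognitivecomputations/dolphin-coder", split="train")

print(ds)     # dataset size and column names
print(ds[0])  # inspect one question/response record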
This fine-tuning was run with MonsterAPI's no-code LLM finetuner at a cost of $15.20 for the entire run.
License: apache-2.0
Base model: mistralai/Mistral-7B-v0.1