Quick Overview:
The deployed model aiswaryards/deepseek1.3B-coder-dora-finetuned
is a fine-tuned version of DeepSeek-Coder 1.3B, adapted using DoRA (Weight-Decomposed Low-Rank Adaptation).
It was trained as part of a graduate-level academic project focused on simulating expert question generation from domain-specific Knowledge Transfer (KT) documents.
Objective: To fine-tune a compact, capable LLM to generate high-quality, context-specific technical questions from internal handover documents within a Retrieval-Augmented Generation (RAG) pipeline.
Use Case
- Designed to simulate a domain-expert handover, where the model generates precise, context-aware questions to surface undocumented insights from KT documents.
- Creates prompts that make the model act as an expert questioner over the supplied KT context.
Model Information:
Base: deepseek-ai/deepseek-coder-1.3b-base
Fine-tuning: DoRA (Weight-Decomposed Low-Rank Adaptation)
Format: Instruction → Input (context) → Response (question)
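As a rough illustration of this format (the exact field labels are an assumption; only the Instruction → Input → Response structure is stated in this card):

```python
# Hypothetical template following the Instruction -> Input -> Response
# format; the field labels below are illustrative assumptions.
def build_prompt(instruction: str, context: str) -> str:
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{context}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    instruction="Generate a precise technical question an incoming engineer "
                "would ask about this Knowledge Transfer document.",
    context="The nightly ETL job loads sales data into the BI warehouse ...",
)
print(prompt)
```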
Fine-Tuning Details
Base Model:
deepseek-ai/deepseek-coder-1.3b-base
Adapter Method: DoRA using AdaLoraConfig from PEFT
- r: 8
- lora_alpha: 16
- lora_dropout: 0.05
- bias: "none"
- target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
- use_dora: True
Precision: bfloat16
Data Source: web-scraped dataset (100+ high-quality Q&A pairs drawn from Knowledge Transfer documents)
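As an illustration, the configuration above maps onto PEFT roughly as follows. This sketch uses LoraConfig, where PEFT exposes the use_dora flag; it is a reconstruction from the listed hyperparameters, not the project's actual training script.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model in bfloat16, matching the precision listed above.
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-1.3b-base",
    torch_dtype=torch.bfloat16,
)

# DoRA is enabled through use_dora=True; the remaining values mirror
# the hyperparameters in this card.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_dora=True,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
```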
Training:
- Epochs: 5
- Batch Size: 1 (gradient accumulation: 8)
- Optimized using the Hugging Face Trainer API
- Hardware: Google Colab (A100 GPU, CUDA)
- Frameworks: Hugging Face Transformers, PEFT, Datasets
- Maximum tokenized sequence length: 1024 tokens
- Trained on high-quality Q&A pairs collected from KT handovers across several data science sub-domains.
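A condensed sketch of a Trainer setup matching the settings above (dataset variables and the output path are placeholders; the actual training script is not published with this card):

```python
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")

def tokenize(example):
    # Cap sequences at the 1024-token length listed above.
    return tokenizer(example["text"], truncation=True, max_length=1024)

args = TrainingArguments(
    output_dir="deepseek1.3B-coder-dora-finetuned",
    num_train_epochs=5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size of 8
    bf16=True,                       # bfloat16 training on the A100
    logging_steps=50,
)

trainer = Trainer(
    model=model,                     # PEFT-wrapped model from the sketch above
    args=args,
    train_dataset=train_ds,          # placeholder: tokenized Q&A pairs
    eval_dataset=eval_ds,            # placeholder: held-out validation split
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```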
Training Performance:
| Step | Train Loss | Validation Loss |
|------|------------|-----------------|
| 400  | 1.780      | 1.795           |
- Train and validation losses track closely, indicating solid convergence for instruction-style fine-tuning with little overfitting.
- The validation loss flattens below 2.0, a reasonable signal of stable generative quality.
Use Cases
- Domain Expert Simulation (Agentic RAG)
- Knowledge Transfer Automation
- Multiple role transitions; currently evaluated on the data science domain (e.g., BI / Data Engineering handovers)
- Question synthesis for downstream QA chains
Challenges:
- Requires instruction-style prompting (see the sketch below).
- May hallucinate when the supplied context is vague or unrelated.
- Best suited for use within structured RAG pipelines.
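For example, a minimal inference sketch along those lines (the instruction wording and KT context below are illustrative placeholders; in a real pipeline the Input would come from a retriever):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-1.3b-base", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(
    base, "aiswaryards/deepseek1.3B-coder-dora-finetuned"
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")

# Instruction-style prompt; grounding the Input in retrieved KT context
# reduces the hallucination risk noted above.
prompt = (
    "### Instruction:\nGenerate a context-specific technical question "
    "about this handover document.\n\n"
    "### Input:\nThe reporting pipeline refreshes BI dashboards from the "
    "warehouse every morning at 6 AM ...\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```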
License
License: deepseek
DeepSeek-Coder 1.3B - DoRA Fine-tuned Version
This model is a fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base
using the DoRA method on academic Q&A data from Knowledge Transfer documents.
...
This model inherits the original model license from DeepSeek-AI; see the DeepSeek-Coder repository for the full terms.
For academic use only. Please refer to the original license terms for reuse, distribution, or commercial applications.