Commit c357eae (verified) by aiswaryards · Parent: cf8c1c2

Updated Model Card

Files changed (1): README.md (+102 lines, added)
---
base_model:
- deepseek-ai/deepseek-coder-1.3b-base
finetuned version: aiswaryards/deepseek1.3B-coder-dora-finetuned

tags:
- DoRA
- question-generation
- data-science
- knowledge-transfer
- rag
- llm
- agents
- deepseek

model_type: causal-lm
---
## Quick Overview:

The deployed model `aiswaryards/deepseek1.3B-coder-dora-finetuned` is a fine-tuned version of [DeepSeek-Coder 1.3B](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base), adapted using the **DoRA (Weight-Decomposed Low-Rank Adaptation)** method.

It was trained as part of a **graduate-level academic project** focused on simulating expert question generation from domain-specific Knowledge Transfer (KT) documents.

The objective was to fine-tune a compact and capable LLM to generate **high-quality, context-specific technical questions** from internal handover documents within a Retrieval-Augmented Generation (RAG) pipeline.


## Use Case
- Designed to simulate a domain expert handover, where the model generates precise and context-aware questions to extract undocumented insights from KT documents.
- Creates prompts that make the generated questions grounded in the supplied KT context.

## Model Information:
Base: `deepseek-ai/deepseek-coder-1.3b-base`

Fine-tuning: DoRA (Weight-Decomposed Low-Rank Adaptation)

Format: Instruction → Input (context) → Response (question)

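The exact prompt template is not published in this card, so the following is only a minimal sketch of an Alpaca-style layout matching the Instruction → Input → Response format above; the section headers and example strings are assumptions, not the card's own template.

```python
# Hypothetical prompt builder for the Instruction -> Input (context) -> Response format.
# The exact template used during fine-tuning is not included in this model card.
def build_prompt(instruction: str, context: str) -> str:
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{context}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    instruction="Generate a precise technical question a domain expert would ask about this KT document.",
    context="The nightly ETL job loads sales data into the BI warehouse before the dashboards refresh.",
)
```
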
## Fine-Tuning Details

- **Base Model**: `deepseek-ai/deepseek-coder-1.3b-base`

- **Adapter Method**: DoRA via PEFT (configured with `AdaLoraConfig`); a configuration sketch follows this list
  - `r`: 8
  - `lora_alpha`: 16
  - `lora_dropout`: 0.05
  - `bias`: none
  - `target_modules`: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  - `use_dora`: True

- **Quantization**: bfloat16

- **Data Source**: Web-scraped dataset (~100+ high-quality Q&A pairs from Knowledge Transfer docs)

- **Training**:
  - Epochs: 5
  - Batch size: 1 (gradient accumulation: 8)
  - Optimized using the Hugging Face `Trainer`
  - Hardware: Google Colab A100 GPU (CUDA)
  - Precision: 16-bit (bfloat16)
  - Frameworks: Hugging Face Transformers, PEFT, Datasets, Trainer
  - Tokenization length: 1024
  - Trained on: high-quality Q&A pairs collected from various branches of data-science domain KT handovers.
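
As a rough illustration of the setup above, here is a minimal sketch of the adapter and training configuration with PEFT and `transformers`. The card mentions `AdaLoraConfig`; this sketch instead uses the plain `LoraConfig` with `use_dora=True` (the standard PEFT route to DoRA), and the dataset preparation, output directory name, and the `Trainer` call itself are assumptions or omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# DoRA adapter with the hyperparameters listed above.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_dora=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# Training setup matching the card: 5 epochs, per-device batch size 1,
# gradient accumulation 8, bfloat16 precision; instruction/input/response
# examples are tokenized to a maximum length of 1024 before training.
training_args = TrainingArguments(
    output_dir="deepseek1.3B-coder-dora-finetuned",  # hypothetical output path
    num_train_epochs=5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
)
```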


## Training Performance:

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 400  | 1.780         | 1.795           |

- The run shows strong convergence for instruction fine-tuning.
- The validation loss flattens below 2.0, an encouraging sign for generative quality.


## Use Cases

- Domain Expert Simulation (Agentic RAG)
- Knowledge Transfer Automation
- Multiple roles: currently tested in the data-science domain (e.g., BI / Data Engineering role transitions)
- Question synthesis for downstream QA chains

## Challenges:

- Requires instruction-style prompting (see the inference sketch below).
- May hallucinate if the context is vague or unrelated.
- Best suited for use within structured RAG pipelines.
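
Since the model expects instruction-style prompting, here is a minimal inference sketch that loads the published adapter on top of the base model; the prompt layout, example context, and generation parameters are assumptions rather than values documented in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
adapter_id = "aiswaryards/deepseek1.3B-coder-dora-finetuned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Instruction-style prompt; the section headers are an assumed template.
prompt = (
    "### Instruction:\n"
    "Generate a precise technical question a domain expert would ask about this KT document.\n\n"
    "### Input:\n"
    "The churn model is retrained monthly; feature definitions live in the BI team's dbt project.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens (the generated question).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```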

## License

License: deepseek

### DeepSeek-Coder 1.3B - DoRA Fine-tuned Version

This model is a fine-tuned version of `deepseek-ai/deepseek-coder-1.3b-base`, trained with the DoRA method on academic Q&A data from Knowledge Transfer documents.

...

This model inherits the original license from [DeepSeek-AI](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base), which can be found [here](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base/blob/main/LICENSE).

For academic use only. Please refer to the original license terms for reuse, distribution, or commercial applications.