---
license: openrail
datasets:
- bugdaryan/spider-natsql-wikisql-instruct
language:
- en
tags:
- code
---
# Wizard Coder SQL-Generation Model

## Overview

- **Model Name**: WizardCoderSQL-15B-V1.0
- **Repository**: [GitHub Repository](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder)
- **License**: [OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
- **Fine-Tuned Model Name**: WizardCoderSQL-15B-V1.0
- **Fine-Tuned Dataset**: [bugdaryan/spider-natsql-wikisql-instruct](https://huggingface.co/datasets/bugdaryan/spider-natsql-wikisql-instruct)

## Description

This is a fine-tuned version of the Wizard Coder 15B model designed for SQL generation tasks. It was fine-tuned on the [bugdaryan/spider-natsql-wikisql-instruct](https://huggingface.co/datasets/bugdaryan/spider-natsql-wikisql-instruct) dataset so that it can generate SQL queries from natural language instructions.

## Model Details

- **Base Model**: Wizard Coder 15B
- **Fine-Tuned Model Name**: WizardCoderSQL-15B-V1.0
- **Fine-Tuning Parameters** (see the configuration sketch after this list):
  - QLoRA Parameters:
    - LoRA Attention Dimension (lora_r): 64
    - LoRA Alpha Parameter (lora_alpha): 16
    - LoRA Dropout Probability (lora_dropout): 0.1
  - bitsandbytes Parameters:
    - Use 4-bit Precision Base Model (use_4bit): True
    - Compute Dtype for 4-bit Base Models (bnb_4bit_compute_dtype): float16
    - Quantization Type (bnb_4bit_quant_type): nf4
    - Activate Nested Quantization (use_nested_quant): False
  - TrainingArguments Parameters:
    - Number of Training Epochs (num_train_epochs): 1
    - Enable FP16/BF16 Training (fp16/bf16): False/True
    - Batch Size per GPU for Training (per_device_train_batch_size): 48
    - Batch Size per GPU for Evaluation (per_device_eval_batch_size): 4
    - Gradient Accumulation Steps (gradient_accumulation_steps): 1
    - Enable Gradient Checkpointing (gradient_checkpointing): True
    - Maximum Gradient Norm (max_grad_norm): 0.3
    - Initial Learning Rate (learning_rate): 2e-4
    - Weight Decay (weight_decay): 0.001
    - Optimizer (optim): paged_adamw_32bit
    - Learning Rate Scheduler Type (lr_scheduler_type): cosine
    - Maximum Training Steps (max_steps): -1
    - Warmup Ratio (warmup_ratio): 0.03
    - Group Sequences into Batches with Same Length (group_by_length): True
    - Save Checkpoint Every X Update Steps (save_steps): 0
    - Log Every X Update Steps (logging_steps): 25
  - SFT Parameters:
    - Maximum Sequence Length (max_seq_length): 500
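
These hyperparameters correspond to a standard QLoRA setup built on the `peft`, `bitsandbytes`, and `transformers` libraries. The snippet below is a minimal configuration sketch assembled from the values listed above; the output directory is a hypothetical placeholder, and the author's actual training script, data preprocessing, and prompt formatting are not reproduced here.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization (bitsandbytes) settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)

# LoRA adapter settings (QLoRA) listed above.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)

# Trainer hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./wizardcoder-sql",  # hypothetical output path
    num_train_epochs=1,
    fp16=False,
    bf16=True,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=0,
    logging_steps=25,
)

# The configs above would typically be passed to trl's SFTTrainer together with
# max_seq_length=500, e.g.:
# trainer = SFTTrainer(model=model, train_dataset=ds, peft_config=peft_config,
#                      max_seq_length=500, args=training_args)
```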

## Performance

- **Fine-Tuned Model Metrics**: No formal evaluation metrics have been published for this fine-tuned model yet.

## Dataset

- **Fine-Tuned Dataset**: [bugdaryan/spider-natsql-wikisql-instruct](https://huggingface.co/datasets/bugdaryan/spider-natsql-wikisql-instruct)
- **Dataset Description**: This dataset contains natural language instructions paired with SQL queries. It serves as the training data for fine-tuning the Wizard Coder model for SQL generation tasks.
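
To inspect the training data yourself, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch; it assumes the dataset exposes a `train` split:

```python
from datasets import load_dataset

# Download the instruction/SQL pairs used for fine-tuning.
ds = load_dataset("bugdaryan/spider-natsql-wikisql-instruct")

print(ds)              # available splits and their sizes
print(ds["train"][0])  # one example record (assumes a 'train' split)
```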

## Model Card Information

- **Maintainer**: Spartak Bughdaryan
- **Contact**: [email protected]
- **Date Created**: September 15, 2023
- **Last Updated**: September 15, 2023

## Usage

To use this fine-tuned model for SQL generation, load it with the Hugging Face Transformers library in Python. For example:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    pipeline
)
import torch

model_name = 'bugdaryan/WizardCoderSQL-15B-V1.0'

# Load the fine-tuned model and its tokenizer; device_map='auto' spreads the
# weights across the available GPU(s) and CPU.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)

# Database schema (CREATE TABLE statements) that the generated query should target.
tables = "CREATE TABLE sales ( sale_id number PRIMARY KEY, product_id number, customer_id number, salesperson_id number, sale_date DATE, quantity number, FOREIGN KEY (product_id) REFERENCES products(product_id), FOREIGN KEY (customer_id) REFERENCES customers(customer_id), FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number, FOREIGN KEY (product_id) REFERENCES products(product_id)); CREATE TABLE customers ( customer_id number PRIMARY KEY, name text, address text ); CREATE TABLE salespeople ( salesperson_id number PRIMARY KEY, name text, region text );"

# Natural language question to convert into SQL.
question = 'Find the salesperson who made the most sales.'

# Prompt template: instruction, question, schema, then the response marker.
prompt = f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Convert text to SQLite query: {question} {tables} ### Response:"

# Generate the SQL query, limiting the completion to 200 new tokens.
ans = pipe(prompt, max_new_tokens=200)
print(ans[0]['generated_text'])

```
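
By default, the text-generation pipeline returns the prompt followed by the completion. If you only want the generated SQL, you can pass `return_full_text=False`:

```python
# Return only the generated continuation (the SQL query), not the echoed prompt.
ans = pipe(prompt, max_new_tokens=200, return_full_text=False)
print(ans[0]['generated_text'])
```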