
mistral-7b-text-to-sql_full-model

Model description

  • Model type: Language model
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: Mistral-7B-v0.1
  • Model size: 7.24B parameters (F16, safetensors)

How to get started with the model

import torch

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the fine-tuned model weights
tokenizer = AutoTokenizer.from_pretrained("delayedkarma/mistral-7b-text-to-sql_full-model")
model = AutoModelForCausalLM.from_pretrained(
    "delayedkarma/mistral-7b-text-to-sql_full-model",
    torch_dtype=torch.float16,  # weights are stored in fp16
)

text = "How many matches scored 3–6, 7–6(5), 6–3?"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
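
The prompt template used during fine-tuning is not documented in this card, so for text-to-SQL use you will likely need to pair the question with its table schema. The sketch below assumes a plain schema-plus-question prompt and a hypothetical CREATE TABLE statement; adjust it to match the template the training data actually used.

# Hedged sketch: the schema string and prompt layout below are
# assumptions, not the confirmed fine-tuning template.
schema = "CREATE TABLE matches (score VARCHAR)"  # hypothetical table definition
question = "How many matches scored 3–6, 7–6(5), 6–3?"
prompt = f"{schema}\n\n{question}\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens (the SQL), not the prompt
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))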

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 3
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 6
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 3
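
Expressed as transformers.TrainingArguments, the reported values map roughly onto the configuration below. This is a reconstruction for reference, not the original training script; the dataset, data collator, and PEFT setup are omitted, and output_dir is a placeholder.

from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters; output_dir is hypothetical.
args = TrainingArguments(
    output_dir="mistral-7b-text-to-sql_full-model",
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 3 * 2 = 6
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,  # as reported, though a plain constant schedule applies no warmup
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)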

Framework versions

  • PEFT 0.7.2.dev0
  • Transformers 4.36.2
  • PyTorch 2.2.2
  • Datasets 2.16.1
  • Tokenizers 0.15.2
