Uses this dataset: mpasila/BadVibesV1-16k-context
Details about the dataset:
It is a combination of the following datasets, which were filtered and converted to ShareGPT format and checked against unsloth/Ministral-3-8B-Base-2512's tokenizer to make sure no entry exceeds a 16k context length:
- 3216 entries from adamo1139/4chan_archive_ShareGPT_fixed_newlines_unfiltered
- 19962 entries from Fizzarolli/fse-raw-dump
- 11547 entries from R-Arfin/Depression
- 5060 entries from ShiniChien/creepypasta
The data was also combined and shuffled. Total entries: 39785
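For reference, here is a minimal sketch of how the combining, length filtering, and shuffling could be reproduced with the Hugging Face datasets library. It assumes each source has already been converted to the ShareGPT schema (a "conversations" list of {"from", "value"} turns); the splits, column names, helper function, and shuffle seed are assumptions, not the exact script used.

from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer

# Tokenizer used for the 16k length check (named above).
tokenizer = AutoTokenizer.from_pretrained("unsloth/Ministral-3-8B-Base-2512")

def within_16k(example):
    # Rough per-conversation token count over all turns.
    text = "".join(turn["value"] for turn in example["conversations"])
    return len(tokenizer(text).input_ids) <= 16384

sources = [
    "adamo1139/4chan_archive_ShareGPT_fixed_newlines_unfiltered",
    "Fizzarolli/fse-raw-dump",
    "R-Arfin/Depression",
    "ShiniChien/creepypasta",
]
parts = [load_dataset(name, split="train") for name in sources]  # assumed pre-converted to ShareGPT
combined = concatenate_datasets(parts).filter(within_16k).shuffle(seed=3407)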
Prompt format: ChatML (may currently be handled incorrectly by Unsloth)
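Below is a minimal sketch of how ShareGPT conversations are typically rendered into ChatML text with Unsloth's chat-template helper before training; the column names and role mapping are assumptions based on the standard ShareGPT layout, and `tokenizer` is the one loaded under Training params further down.

from datasets import load_dataset
from unsloth.chat_templates import get_chat_template

dataset = load_dataset("mpasila/BadVibesV1-16k-context", split = "train")

# `tokenizer` is the one returned by FastLanguageModel.from_pretrained (see Training params).
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "chatml",  # <|im_start|>role ... <|im_end|>
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},
)

def formatting_prompts_func(examples):
    # Render each ShareGPT conversation as a single ChatML string.
    texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False)
             for convo in examples["conversations"]]
    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched = True)  # produces the "text" field used by the trainer below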
LoRA: mpasila/BadVibesNemo-LoRA-12B
Training params
Trained with a 16,384-token context window in 4-bit.
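The get_peft_model call below assumes the base model is already loaded. A minimal sketch of that step, using the base model and the settings stated on this card (16,384-token context, 4-bit):

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-nemo-base-2407-bnb-4bit",
    max_seq_length = 16384,  # training context window
    dtype = None,            # auto-detect
    load_in_4bit = True,
)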
model = FastLanguageModel.get_peft_model(
    model,
    r = 128, # Choose any number > 0! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 32,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)
from trl import SFTTrainer, SFTConfig

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    eval_dataset = None, # Can set up evaluation!
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4, # Use GA to mimic batch size!
        warmup_steps = 10,
        num_train_epochs = 1, # Set this for 1 full training run.
        #max_steps = 60,
        learning_rate = 2e-4, # Reduce to 2e-5 for long training runs
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.001,
        lr_scheduler_type = "linear",
        seed = 3407,
        report_to = "none", # Use TrackIO/WandB etc
    ),
)
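With these settings the effective batch size is 2 × 4 = 8 per device. Training is then launched with TRL's standard entry point:

trainer_stats = trainer.train()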
Uploaded finetuned BadVibesNemo-12B model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model: unsloth/mistral-nemo-base-2407-bnb-4bit
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
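A minimal inference sketch with Unsloth. The repo id is assumed to be this card's upload (mpasila/BadVibesNemo-12B), and the ChatML prompt is written out by hand in case the saved tokenizer does not carry a chat template.

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "mpasila/BadVibesNemo-12B",  # assumed repo id of this upload
    max_seq_length = 16384,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# ChatML prompt, matching the training format above.
prompt = "<|im_start|>user\nWrite a short creepypasta about an empty train station.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors = "pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens = 256)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))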
