__~a~_
                        ~~;  ~_
          _                ~  ~_                _
         '_\;__._._._._._._]   ~_._._._._._.__;/_`
         '(/'/'/'/'|'|'|'| (    )|'|'|'|'\'\'\'\)'
         (/ / / /, | | | |(/    \) | | | ,\ \ \ \)
        (/ / / / / | | | ~(/    \) ~ | | \ \ \ \ \)
       (/ / / / /  ~ ~ ~   (/  \)    ~ ~  \ \ \ \ \)
      (/ / / / ~          / (||)|          ~ \ \ \ \)
      ~ / / ~            M  /||\M             ~ \ \ ~
       ~ ~                  /||\                 ~ ~
                           //||\\
                           //||\\
                           //||\\
                           '/||\'        "Archaeopteryx"

Part of a series of merges made for roleplaying and creative writing. This model is a KTO RL train on top of Archaeo, itself a merge of Hamanasu-Magnum & Kunou, trained with Axolotl on 8x H200s.

ChatML formatting

"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
        

Axolotl Configuration

base_model: ./model

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
cut_cross_entropy: false

load_in_8bit: false
load_in_4bit: false
strict: false

rl: kto
kto_undesirable_weight: 1.0

datasets:
  - path: Delta-Vector/Tauri-Opus-Accepted-GPT-Rejected-Opus-Writing-Prompts
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-IFeval-Dans-Tulu-KTO
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-KTO-Instruct-Mix
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-Purpura-Arkhaios-CC-KTO
    split: train
    type: chatml.argilla
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./archaeo-kto-v2
remove_unused_columns: false

#lora_mlp_kernel: true
#lora_qkv_kernel: true
#lora_o_kernel: true

adapter: lora
lora_model_dir:

sequence_len: 8192
pad_to_sequence_len: false

lora_r: 64
lora_alpha: 32
lora_dropout: 0.0
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: Francois-V2
wandb_entity:
wandb_watch:
wandb_name: Archaeo-32b-KTO
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: paged_ademamix_8bit
lr_scheduler: constant_with_warmup
learning_rate: 5e-6
max_grad_norm: 0.001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.0025
fsdp:
fsdp_config:
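
Note that this run trains a LoRA adapter (adapter: lora) and writes it to ./archaeo-kto-v2 rather than saving full weights. A minimal sketch of folding the adapter back into the base model with PEFT, assuming the base checkpoint sits in ./model as in the config (the merged output directory name is only an example):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Paths taken from the config above; "./archaeo-kto-v2-merged" is hypothetical.
base = AutoModelForCausalLM.from_pretrained("./model", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "./archaeo-kto-v2")

# Bake the KTO-trained LoRA deltas into the base weights and save a
# standalone checkpoint that loads without PEFT.
merged = model.merge_and_unload()
merged.save_pretrained("./archaeo-kto-v2-merged")

tokenizer = AutoTokenizer.from_pretrained("./model")
tokenizer.save_pretrained("./archaeo-kto-v2-merged")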

Quants:

Credits

Thank you to: Kubernetes-bad, LucyKnada, Kalomaze, Alicat, Intervitens, Samantha Twinkman, Tav, Trappu & The rest of Anthracite
