---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
  - llama adapter
  - trl
  - llama3.1 8b
license: apache-2.0
language:
  - en
---

# Model Overview

A LoRA (Low-Rank Adaptation) adapter decomposed from the weight difference between the Llama-3.1-8B base model and its instruct-tuned variant, capturing the base-to-instruct delta as a low-rank update.
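The decomposition itself is not shipped in this repository, but the general idea can be sketched as a truncated SVD of each targeted layer's weight difference (an illustrative assumption; the exact procedure used to produce this adapter may differ):

```python
import torch

def extract_lora(w_base: torch.Tensor, w_instruct: torch.Tensor, rank: int = 16):
    """Approximate (w_instruct - w_base) with a rank-`rank` product B @ A,
    which is the form in which a LoRA adapter stores its update."""
    delta = (w_instruct - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]  # keep the top-`rank` directions
    lora_B = u * s.sqrt()                # (out_features, rank)
    lora_A = s.sqrt().unsqueeze(1) * vh  # (rank, in_features)
    return lora_A, lora_B
```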

# Model Details

- Base Model: meta-llama/Llama-3.1-8B → meta-llama/Llama-3.1-8B-Instruct
- Adaptation Method: LoRA

# Training Configuration

## Training Hyperparameters

- Rank (r): 16 -> 16
- Alpha: 1 -> 16

## LoRA Configuration

- Rank (r): 16
- Alpha: 16
- Target Modules:
  - q_proj (query projection)
  - k_proj (key projection)
  - v_proj (value projection)
  - o_proj (attention output projection)
  - up_proj (MLP up projection)
  - down_proj (MLP down projection)
  - gate_proj (MLP gate projection)
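For reference, these settings correspond roughly to the following `peft` LoraConfig (a sketch for orientation; the adapter's bundled `adapter_config.json` is authoritative):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "up_proj", "down_proj", "gate_proj",
    ],
    task_type="CAUSAL_LM",
)
```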

# Usage

This adapter must be used in conjunction with the meta-llama/Llama-3.1-8B-Instruct base model.

## Loading the Model

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, "path_to_adapter")
```
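
Once the adapter is loaded, generation works as with any causal LM. A minimal example using the instruct model's chat template (the prompt and generation settings below are illustrative):

```python
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Optionally fold the adapter weights into the base model for faster inference:
# model = model.merge_and_unload()

output_ids = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```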

# Limitations and Biases

- This adapter may inherit limitations and biases present in the base Llama-3.1-8B-Instruct model.