Error Converting PEFT Fine-Tuned CodeLlama-7B to GGUF: Gated Repo Access Issue

#1
by AdnanRiaz107 - opened

I have fine-tuned "meta-llama/CodeLlama-7b-hf" using PEFT LoRA. When I try to convert it to GGUF format, I get the error below, even though I have already been granted access to this gated repo:

āŒ ERROR

Error converting to GGUF F16: b'INFO:lora-to-gguf:Loading base model from Hugging Face: meta-llama/CodeLlama-7b-hf\nERROR:lora-to-gguf:Failed to load base model config: You are trying to access a gated repo.\nMake sure to have access to it at https://huggingface.co/meta-llama/CodeLlama-7b-hf.\n403 Client Error. (Request ID: Root=1-67ceb578-05a2d96328788566370b119b;2f99b6f0-ddbb-43dd-b0bf-7b130b154b16)\n\nCannot access gated repo for url https://huggingface.co/meta-llama/CodeLlama-7b-hf/resolve/main/config.json.\nAccess to model meta-llama/CodeLlama-7b-hf is restricted and you are not in the authorized list. Visit https://huggingface.co/meta-llama/CodeLlama-7b-hf to ask for access.\nERROR:lora-to-gguf:Please try downloading the base model and add its path to --base\n'"
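The last line of the log suggests the workaround: download the base model to a local directory and pass that path to `--base`, so the converter never has to hit the gated repo itself. A minimal sketch, assuming `huggingface_hub` is installed, `HF_TOKEN` holds a token for an account that was granted access, and llama.cpp is cloned locally (the adapter directory and paths are illustrative):

```python
import os
import subprocess

def convert_lora_locally(adapter_dir: str, base_repo: str, work_dir: str) -> None:
    """Download the gated base model, then run llama.cpp's LoRA converter against it."""
    # Deferred import so this sketch only needs huggingface_hub when actually run.
    from huggingface_hub import login, snapshot_download

    # Authenticate with a token from an account that has been granted repo access;
    # without this, fetching config.json fails with the same 403 as above.
    login(token=os.environ["HF_TOKEN"])

    # Fetch the full base model into a local directory.
    base_dir = snapshot_download(base_repo, local_dir=os.path.join(work_dir, "base"))

    # Point the converter at the local copy via --base, as the error message suggests.
    subprocess.run(
        ["python", "llama.cpp/convert_lora_to_gguf.py", adapter_dir,
         "--base", base_dir, "--outtype", "f16"],
        check=True,
    )

# Example call (hypothetical paths):
# convert_lora_locally("./codellama-lora", "meta-llama/CodeLlama-7b-hf", "./work")
```

If the token belongs to an account that is not on the authorized list for the repo, the download step will fail with the same 403; access has to be requested and approved on the model page first.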

Same issue here. I used AutoTrain to fine-tune Bielik 4.5B and 1.5B and got the output files, but I can't convert them with llama.cpp or with this tool.
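One alternative that sidesteps the LoRA converter entirely: merge the adapter back into the base weights with PEFT, then convert the merged checkpoint like any full model with llama.cpp's `convert_hf_to_gguf.py`. A sketch assuming `transformers` and `peft` are installed and the adapter was saved by PEFT (all paths illustrative):

```python
def merge_lora_adapter(base_path: str, adapter_path: str, out_dir: str) -> None:
    """Fold a PEFT LoRA adapter into its base model and save a plain HF checkpoint."""
    # Deferred imports so this sketch only needs these packages when actually run.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_path)
    # merge_and_unload() bakes the LoRA deltas into the base weights and
    # returns an ordinary transformers model with no adapter wrapper.
    merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()
    merged.save_pretrained(out_dir)

    # The converter also expects the tokenizer files next to the weights.
    AutoTokenizer.from_pretrained(base_path).save_pretrained(out_dir)

# Afterwards, convert the merged checkpoint as a full model, e.g.:
#   python llama.cpp/convert_hf_to_gguf.py ./merged-model --outtype f16
```

This produces a single standalone GGUF instead of a separate GGUF LoRA, which is often simpler to deploy anyway.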
