---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

![An illustration of a llama coding on a MacBook](https://media.discordapp.net/attachments/989904887330521099/1201633964444626944/semper0669_a_cute_lama_coding_on_a_macbook_illustration_8a63cb66-1259-4a24-aa90-b4a490f0f66e.png?ex=65ca87d6&is=65b812d6&hm=3499e0dc13481d6533bb9ed75e5c66fa49db67f7fa7027dca07de1261fba3903&=&format=webp&quality=lossless&width=1840&height=1840)

# mlx-community/CodeLlama-7b-Python-4bit

This model was converted to MLX format from [`codellama/CodeLlama-7b-Python-hf`](https://huggingface.co/codellama/CodeLlama-7b-Python-hf). Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Python-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
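
Because CodeLlama-7b-Python is a completion-style (not chat-tuned) model, a code-shaped prompt such as a function signature tends to produce more useful output than plain text. Below is a minimal sketch along those lines; the prompt text and the `max_tokens` value are illustrative choices, not part of the original card:

```python
from mlx_lm import load, generate

# Load the 4-bit quantized weights from the Hugging Face Hub.
model, tokenizer = load("mlx-community/CodeLlama-7b-Python-4bit")

# Illustrative prompt: give the model a signature and docstring to complete.
prompt = '''def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number."""
'''

# max_tokens caps the length of the generated completion.
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
```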