How to use alexgusevski/LLaMA-Mesh-q4-mlx with Transformers:

```python
# Load the model and tokenizer directly from the Hub
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("alexgusevski/LLaMA-Mesh-q4-mlx")
model = AutoModelForCausalLM.from_pretrained("alexgusevski/LLaMA-Mesh-q4-mlx")
```

How to use alexgusevski/LLaMA-Mesh-q4-mlx with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir LLaMA-Mesh-q4-mlx alexgusevski/LLaMA-Mesh-q4-mlx
```
This model, alexgusevski/LLaMA-Mesh-q4-mlx, was converted to MLX format from Zhengyi/LLaMA-Mesh using mlx-lm version 0.21.4.
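For reference, a conversion like this can be reproduced with mlx-lm's own convert tool. This is a sketch, not the exact command the author ran; the `--q-bits 4` flag matches the 4-bit quantization of this repo, and the output directory name is an assumption:

```shell
# Install the converter (mlx-lm ships a convert entry point)
pip install mlx-lm

# Download the base model, quantize to 4-bit, and write MLX weights
# (output directory name is illustrative)
python -m mlx_lm.convert \
    --hf-path Zhengyi/LLaMA-Mesh \
    --mlx-path LLaMA-Mesh-q4-mlx \
    -q --q-bits 4
```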
```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("alexgusevski/LLaMA-Mesh-q4-mlx")

prompt = "hello"

# Apply the chat template if the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Quantization: 4-bit

Base model: Zhengyi/LLaMA-Mesh