
CodeLlama-Edge-1.5B

CodeLlama-Edge-1.5B is an edge-optimized variant of the CodeLlama series, designed to run efficiently on mobile and embedded devices using quantized or distilled formats.

Model Description

  • Model Type: Causal Language Model
  • Base Model: CodeLlama
  • Optimizations: Quantization-aware training, pruning, and edge-device compatibility
  • Parameters: 1.5 Billion
  • Intended Use: On-device coding assistance, embedded systems, low-power environments
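The quantization step listed above can be illustrated with a small, self-contained sketch. This is not the model's actual quantization-aware training recipe (which the card does not detail); it only shows the basic idea behind low-bit weight storage: map float weights onto a signed integer grid with a per-tensor scale, then reconstruct approximate floats from the integer codes.

```python
import numpy as np

def quantize_symmetric(w, bits=4):
    """Round weights to a signed integer grid of the given bit width.

    Returns integer codes plus the per-tensor scale needed to
    reconstruct approximate float weights (codes * scale).
    """
    qmax = 2 ** (bits - 1) - 1               # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax           # map the largest weight to qmax
    codes = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return codes, scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(64, 64)).astype(np.float32)

codes, scale = quantize_symmetric(w, bits=4)
w_hat = codes.astype(np.float32) * scale     # dequantized approximation

# The reconstruction error is bounded by half a quantization step.
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-8)
```

In practice, production schemes quantize per channel or per block rather than per tensor, but the scale-and-round mechanism is the same.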

Features

  • Token-efficient for code generation
  • Ideal for IDEs, mobile apps, IoT dev tools
  • Low memory and compute footprint
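To put "low memory footprint" in rough numbers: a back-of-envelope estimate of weight storage is parameter count times bits per weight. The sketch below assumes the stated 1.5B parameters and counts weights only (activations, KV cache, and runtime overhead are extra).

```python
def model_memory_gb(params: float, bits: int) -> float:
    """Approximate weight-storage footprint: params * bits / 8 bytes, in GiB."""
    return params * bits / 8 / 1024**3

PARAMS = 1.5e9  # 1.5 billion parameters, per the model card

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{model_memory_gb(PARAMS, bits):.2f} GiB")
```

At 4-bit this works out to roughly 0.7 GiB of weights, which is what makes the model plausible on phones and embedded boards.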

Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("tommytracx/CodeLlama-Edge-1.5B")
model = AutoModelForCausalLM.from_pretrained("tommytracx/CodeLlama-Edge-1.5B")

# Ask the model to complete a Python function body.
input_text = "def quicksort(arr):"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

License

Apache 2.0
