litert-community/Llama-3.2-3B-Instruct
This model provides a few variants of meta-llama/Llama-3.2-3B-Instruct that are ready for deployment on Android using the LiteRT (formerly TensorFlow Lite) stack and the MediaPipe LLM Inference API.
Use the models
Colab
Disclaimer: The target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on those targets. Trying the model out in Colab is an easy way to familiarize yourself with the LiteRT stack, with the caveat that performance (memory and latency) in Colab can be much worse than on a local device.
Android
- Download and install the apk.
- Follow the instructions in the app.
To build the demo app from source, please follow the instructions from the GitHub repository.
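If you want to integrate the model into your own app rather than use the demo, a minimal Kotlin sketch of the MediaPipe LLM Inference API could look roughly like the following. The model path, file name, and token limit are illustrative assumptions, not values shipped with this repository.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: load the on-device model bundle and run a single prompt.
// The path below is an assumption -- point it at wherever you pushed the
// model file on the device (e.g. via `adb push`).
fun runLlamaPrompt(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/Llama-3.2-3B-Instruct.task") // assumed location
        .setMaxTokens(1024) // total tokens (prompt + response); adjust to your use case
        .build()

    // Creating the engine is expensive; in a real app, create it once and reuse it.
    val llmInference = LlmInference.createFromOptions(context, options)

    // Synchronous, single-shot generation; generateResponseAsync is available for streaming.
    return llmInference.generateResponse(prompt)
}
```

This sketch assumes the `com.google.mediapipe:tasks-genai` dependency is on the classpath; consult the MediaPipe LLM Inference documentation for the exact version and for session-level sampling options.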
Performance
Android
Note that all benchmark stats were measured on a Samsung S24 Ultra with a KV cache size of 1280 and multiple prefill signatures enabled.
| Quantization | Backend | Prefill (tokens/sec) | Decode (tokens/sec) | Time-to-first-token (sec) | Memory (RSS in MB) | Model size (MB) |
| --- | --- | --- | --- | --- | --- | --- |
| dynamic_int8 | cpu | 67.47 | 7.70 | 10.72 | 6,241 | 3,150 |
- Model Size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models)
- Memory: indicator of peak RAM usage
- CPU inference is accelerated via the LiteRT XNNPACK delegate with 4 threads
- Benchmarks were run with the XNNPACK cache enabled
- dynamic_int8: quantized model with int8 weights and float activations.
Model tree for litert-community/Llama-3.2-3B-Instruct
Base model
meta-llama/Llama-3.2-3B-Instruct