litert-community/SmolLM-135M-Instruct

This model provides a few variants of HuggingFaceTB/SmolLM-135M-Instruct that are ready for deployment on Android using the LiteRT (formerly known as TFLite) stack and the MediaPipe LLM Inference API.

Use the models

Colab

Disclaimer: The target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on these targets. Trying the models out in Colab is an easy way to familiarize yourself with the LiteRT stack, with the caveat that performance (memory and latency) on Colab can be much worse than on a local device.

Open In Colab

Android

  • Download and install the APK.
  • Follow the instructions in the app.

To build the demo app from source, please follow the instructions from the GitHub repository.
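
For orientation, below is a minimal Kotlin sketch of how a downloaded model file is typically loaded and queried through the MediaPipe LLM Inference API (the `com.google.mediapipe:tasks-genai` artifact). The on-device model path and the 1280-token limit are illustrative placeholders, and the demo app already wires this up for you.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Placeholder path: substitute the actual model file pushed to the device.
const val MODEL_PATH = "/data/local/tmp/llm/smollm-135m-instruct.tflite"

fun generate(context: Context, prompt: String): String {
    // Configure the LLM Inference task; 1280 matches the KV cache size
    // used in the benchmarks below.
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath(MODEL_PATH)
        .setMaxTokens(1280)
        .build()

    // Create the inference engine and run a synchronous generation request.
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt)
}
```

The API also exposes an asynchronous generation call if you want tokens streamed progressively instead of a single blocking response.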

Performance

Android

Note that all benchmark stats are from a Samsung S24 Ultra with a KV cache size of 1280 and multiple prefill signatures enabled.

| Model variant | Backend | Context length | Prefill (tokens/sec) | Decode (tokens/sec) | Time-to-first-token (sec) | Memory (RSS in MB) | Model size (MB) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| fp32 (baseline) | cpu | 1280 | 498.05 | 47.96 | 0.78 | 931 | 527 |
| dynamic_int8 | cpu | 1280 | 1084.75 | 43.50 | 0.46 | 579 | 159 |

  • Model Size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models)
  • Memory: indicator of peak RAM usage
  • Inference on CPU is accelerated via the LiteRT XNNPACK delegate with 4 threads (see the configuration sketch after this list)
  • Benchmark is run with cache enabled and initialized. During the first run, the time to first token may differ.
  • dynamic_int4: quantized model with int4 weights and float activations.
  • dynamic_int8: quantized model with int8 weights and float activations.
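
To make the "XNNPACK delegate with 4 threads" setting concrete, here is a purely illustrative, minimal sketch of loading a LiteRT model directly with that CPU configuration via the TFLite/LiteRT Interpreter API for Android. The MediaPipe LLM Inference API configures this internally, and the file path below is a placeholder.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Placeholder path: substitute the actual .tflite file pushed to the device.
const val MODEL_PATH = "/data/local/tmp/llm/smollm-135m-instruct.tflite"

fun loadCpuInterpreter(): Interpreter {
    val options = Interpreter.Options().apply {
        setNumThreads(4)     // 4 CPU threads, matching the benchmark setup
        setUseXNNPACK(true)  // route supported CPU ops through the XNNPACK delegate
    }
    return Interpreter(File(MODEL_PATH), options)
}
```

Running the LLM end to end additionally requires driving the prefill/decode signatures and the KV cache, which the LLM Inference API (or the demo app) handles for you; the snippet only shows the threading and delegate configuration.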