Ollama Support

#2
by yqchen-sci - opened

This model is an excellent choice for local deployment. It would be highly beneficial to have official support from Ollama for vision and tool calling capabilities, enabling seamless operation of the model across various devices.

Liquid AI org

Ollama integration will be available soon!

Commenting here to (hopefully) get notified when support is added. (Hoping the Liquid AI team will respond here once added!)

I really hope for a high-performance local LLM under 2B parameters that supports 128k context, vision, and tools; this would be very helpful for my daily work.

Liquid AI org

LFM2.5 models will be supported after the vendor sync is merged in Ollama; see https://github.com/ollama/ollama/pull/13570

Currently, Ollama's official lfm2.5-thinking does not support tools. I sincerely hope that the next release will include an instruct version with tool support and a VL version supporting both tools and vision.
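For reference, once tool support lands, a tool-calling request to Ollama's `/api/chat` endpoint would look roughly like the sketch below. The model tag `lfm2.5-thinking` and the `get_weather` tool are assumptions for illustration; the `tools` field follows Ollama's documented OpenAI-style function schema.

```python
import json

# Hypothetical /api/chat payload with a tool definition.
# Model tag and tool name are placeholders, not confirmed releases.
payload = {
    "model": "lfm2.5-thinking",
    "messages": [
        {"role": "user", "content": "What's the weather in Boston?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "stream": False,
}

# Would be sent as: POST http://localhost:11434/api/chat
print(json.dumps(payload, indent=2))
```

If the model supports tools, the response's `message` will carry a `tool_calls` array instead of plain text when the model decides to invoke the function.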

Liquid AI org

We're still waiting on the vendor sync to update llama.cpp dependency in Ollama.
