# EmbedNeural

*On-device multimodal embedding model enabling instant, private, NPU-powered visual search.*

---

## Quickstart

[Instructions](https://sdk.nexa.ai/model/EmbedNeural)
## Model Description

**EmbedNeural** is the world’s first multimodal embedding model purpose-built for **Qualcomm Hexagon NPU** devices. It enables **instant, private, battery-efficient** natural-language image search directly on laptops, phones, XR, and edge devices — with no cloud and no uploads.

The model continuously indexes local images using NPU acceleration, turning unorganized photo folders into a fully searchable visual database that runs entirely on-device.
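Conceptually, the indexing step is a folder walk that maps each image file to an embedding vector. A minimal sketch, where `fake_embed` is a placeholder standing in for the model's actual NPU image encoder (not the real SDK API):

```python
import hashlib
from pathlib import Path

def fake_embed(path: Path) -> list[float]:
    # Placeholder for the real image encoder: derives a deterministic
    # pseudo-embedding from the file name so the sketch is runnable.
    digest = hashlib.sha256(path.name.encode()).digest()
    return [b / 255.0 for b in digest[:4]]

def build_index(folder: Path, exts=(".jpg", ".png")) -> dict[str, list[float]]:
    """Scan a folder recursively and map each image path to its embedding."""
    return {
        str(p): fake_embed(p)
        for p in folder.rglob("*")
        if p.suffix.lower() in exts
    }
```

In a real deployment the index would be refreshed in the background as files change and persisted to disk, so queries only pay the cost of a vector lookup.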
---
## Key Features

### ⚡ NPU-accelerated multimodal embeddings

Optimized for Qualcomm NPUs to deliver sub-second search and dramatically lower power consumption.

### 🔍 Natural-language visual search

Query thousands of images instantly using everyday language (e.g., “green bedroom aesthetic”, “cat wearing sunglasses”).
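Under the hood, this kind of retrieval reduces to nearest-neighbor search over embedding vectors: embed the query text, score every stored image embedding by cosine similarity, and return the top matches. A minimal sketch (the toy vectors and the `search` helper are illustrative, not the SDK's API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, index, k=2):
    """Return the k image names whose embeddings best match the query."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy stand-ins for vectors produced by the model's image encoder.
index = {
    "bedroom.jpg": [0.9, 0.1, 0.0],
    "cat_sunglasses.jpg": [0.0, 0.8, 0.6],
    "sofa.jpg": [0.7, 0.2, 0.1],
}
query = [1.0, 0.1, 0.0]  # stand-in for the embedded text query
print(search(query, index))  # → ['bedroom.jpg', 'sofa.jpg']
```

Because text and images share one embedding space, the same similarity function ranks images against a text query with no per-query model retraining.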
### 🔒 100% local and private

All computation stays on-device. No cloud. No upload. No tracking.

### 🔋 Ultra-low power

Continuous background indexing uses ~10× less power than CPU/GPU methods, enabling true always-on search.

---
## Why It Matters

People save thousands of images — memes, screenshots, design inspiration, photos — but struggle to find them when needed. Cloud solutions compromise privacy; CPU/GPU search drains battery.

EmbedNeural removes these tradeoffs by combining:

- **Instant retrieval** (~0.03 s across thousands of images)
- **Continuous local indexing**
- **Zero data upload**
- **NPU-optimized efficiency for daily use**

This makes visual search something you can actually use **every day**, not just when plugged in.

---
## Use Cases

- **Personal image libraries:** Rediscover memes, screenshots, and old photos instantly.
- **Creative workflows:** Search moodboards and visual references with natural language.
- **Edge & embedded systems:** Efficient multimodal search for mobile, XR, IoT, and automotive.

---
## Performance Highlights

- Sub-second search even across large image libraries
- ~10× lower power consumption vs. CPU/GPU search
- Stable always-on indexing without thermal or battery issues

---
## License

This model is released under the **Creative Commons Attribution–NonCommercial 4.0 (CC BY-NC 4.0)** license. Non-commercial use, modification, and redistribution are permitted with attribution.

For commercial licensing, please contact **[email protected]**.