---
title: OpenHermes Mistral API
emoji: 🧠
colorFrom: gray
colorTo: blue
pinned: false
short_description: This API is for AnalyDocs
---

🧠 Mistral-7B GGUF API – Hosted LLM Inference with FastAPI

This Hugging Face Space hosts a lightweight, quantized build of the Mistral-7B Instruct large language model in GGUF format (Q4_K_M), served with llama-cpp-python and FastAPI.

It exposes a simple /generate endpoint so any application can integrate high-quality local inference – no OpenAI keys, no vendor lock-in, no GPU required.
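Under the hood, a Space like this can be wired up in just a few lines. The sketch below is illustrative only; the model filename, sampling parameters, and the "response" field name are assumptions, not necessarily this Space's exact code:

```python
# Minimal sketch: serving a GGUF model with llama-cpp-python behind FastAPI.
# The model filename, sampling parameters, and the "response" key are
# illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from llama_cpp import Llama

app = FastAPI()

# Load the quantized model once at startup; runs on CPU, no GPU required.
llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,    # context window in tokens
    n_threads=4,   # CPU threads; tune to the Space's hardware
)

class GenerateRequest(BaseModel):
    prompt: str

@app.post("/generate")
def generate(req: GenerateRequest):
    # Wrap the prompt in Mistral's [INST] ... [/INST] instruction format.
    out = llm(f"[INST] {req.prompt} [/INST]", max_tokens=512, temperature=0.7)
    return {"response": out["choices"][0]["text"].strip()}
```

Loading the model at import time means the weights are read only once, so per-request latency is limited to generation itself.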


🔗 Live Demo

🚀 API is deployed and accessible here:
https://Priyanshukr-1-openhermes_mistral_API.hf.space


📄 Used In: AnalyDocs – AI-Powered Report Generator

AnalyDocs is a smart document and data report generation tool that uses this API as its core language generation engine.

🧠 What AnalyDocs Does:

AnalyDocs takes structured or unstructured business data (tables, charts, KPIs, raw CSVs) and transforms it into meaningful written insights using prompt-based LLM processing.

Features:

  • ✨ Natural language summaries of documents, reports, and spreadsheets
  • 📊 Automatic generation of key insights from graphs and charts
  • 📈 Time-series growth/decline analysis with possible reasons from news sources
  • 📝 Clean, editable paragraphs and bullet points for documentation

The LLM API hosted in this repo powers the natural language generation core of AnalyDocs.

🛠 Example use case:
"Generate a 5-point executive summary comparing Q1 and Q2 performance, highlighting changes and probable causes."
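
A client such as AnalyDocs could send that exact prompt to the hosted endpoint as sketched below; the "response" key used when reading the result is an assumption about the API's output schema:

```python
# Sketch of a client call to the hosted /generate endpoint.
import requests

API_URL = "https://Priyanshukr-1-openhermes_mistral_API.hf.space/generate"

payload = {
    "prompt": (
        "Generate a 5-point executive summary comparing Q1 and Q2 performance, "
        "highlighting changes and probable causes."
    )
}

resp = requests.post(API_URL, json=payload, timeout=120)
resp.raise_for_status()
# The "response" key is assumed; adjust to the actual schema documented below.
print(resp.json().get("response", resp.text))
```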


📑 API Documentation

POST /generate

Request:

{
  "prompt": "Write a summary of the key growth areas in Q2 2024 for the dairy industry."
}