import gradio as gr


def get_readme_content():
    readme_html = """

📱 VLM Chat Helper - Download

A desktop assistant designed for the GLM series multimodal models (GLM-4.5V, compatible with GLM-4.1V), supporting interactive conversations with text, images, videos, PDFs, PPTs, and more. It connects to the GLM multimodal API to enable intelligent services across various scenarios.

⚠️ Special Notes

The current version only supports macOS on Apple Silicon (M-series chips: M1/M2/M3, etc.).
Versions for Intel Macs, Windows, and Linux are not currently available.

✨ Main Features

🤖 Multimodal Chat: Intelligent conversations with text, images, videos, PDFs, and PPT files
📸 Screenshot: Quick full-screen or region screenshots with a global hotkey
🎥 Screen Recording: Full-screen and region recording with automatic video compression
🪟 Floating Window Mode: Compact floating chat window for use anytime, anywhere
🎨 Themes: Multiple built-in code highlighting themes
📱 Drag-and-Drop Upload: Drag files directly into the chat interface
⌨️ Hotkeys: Rich set of global hotkeys
💾 Local Storage: Chat history stored in a local database

⚙️ Important Setup Instructions

Before using the application, please run the following command in Terminal:

xattr -rd com.apple.quarantine /Applications/vlm-helper.app

This command removes the quarantine attribute to allow the app to run properly on macOS.
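To confirm the attribute was actually removed, you can list the bundle's extended attributes (this sketch assumes the app is installed at the standard /Applications path):

```shell
# List extended attributes on the app bundle; after running the command
# above, com.apple.quarantine should no longer appear in the output.
xattr -l /Applications/vlm-helper.app
```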

📥 Download: Find vlm-helper-1.0.6.dmg in this repository.

""" return readme_html def create_interface(): """Create the Gradio interface""" with gr.Blocks( title="VLM Chat Helper - README", theme=gr.themes.Soft(), css=""" .container { max-width: 900px !important; margin: 0 auto !important; } .gradio-container { min-height: 100vh; } """ ) as demo: gr.HTML(get_readme_content()) return demo if __name__ == "__main__": demo = create_interface() demo.launch()