---
title: Multi-Modal AI Demo
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 3.50.2
app_file: app.py
pinned: false
---

# Multi-Modal AI Demo

This project demonstrates multi-modal AI capabilities using pretrained models from Hugging Face. The application provides the following features:

1. **Image Captioning**: generate descriptive captions for images
2. **Visual Question Answering**: answer questions about the content of images
3. **Sentiment Analysis**: analyze the sentiment of text inputs

## Requirements

- Python 3.8+
- Dependencies listed in `requirements.txt`

## Installation

1. Clone this repository.
2. Install dependencies and set up the application:

   ```bash
   python run.py
   ```

   Then select option 5 to perform a full setup (install requirements, fix dependencies, and download sample images).

## Known Issues and Solutions

If you encounter package-compatibility errors (from Pydantic, FastAPI, or Gradio), run:

```bash
python fix_dependencies.py
```

This will install compatible versions of all dependencies to ensure the application runs correctly.
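The fixer script itself is not reproduced here, but as a rough sketch, a dependency-fixing helper of this kind typically pins known-compatible versions and hands them to pip. The pins below are illustrative assumptions for the sketch, not the authoritative list in `fix_dependencies.py`:

```python
import sys

# Illustrative pins — assumptions for this sketch, not the actual
# list maintained in fix_dependencies.py.
PINNED_PACKAGES = [
    "gradio==3.50.2",   # matches the sdk_version in the Space metadata
    "pydantic<2",       # hypothetical pin to avoid Pydantic v2 breakage
    "fastapi<0.100",    # hypothetical pin compatible with pydantic<2
]


def build_install_command(packages):
    """Build the pip command that would install the pinned versions."""
    return [sys.executable, "-m", "pip", "install", *packages]


# Print the command rather than running it, so the sketch is side-effect free.
print(" ".join(build_install_command(PINNED_PACKAGES)))
```

Running the real script executes the equivalent of this command, replacing any incompatible versions already in the environment.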

## Usage

Run the web interface:

```bash
python app.py
```

Then open your browser and navigate to the URL shown in the terminal (typically `http://127.0.0.1:7860`).

## Models Used

This demo uses the following pretrained models from Hugging Face:

- **Image Captioning**: `nlpconnect/vit-gpt2-image-captioning`
- **Visual Question Answering**: `nlpconnect/vit-gpt2-image-captioning` (simplified)
- **Sentiment Analysis**: `distilbert-base-uncased-finetuned-sst-2-english`
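As a minimal sketch of calling these checkpoints directly with the `transformers` pipeline API (the demo's own `app.py` may wire them up differently; the image path below is a placeholder assumption):

```python
from transformers import pipeline

# Sentiment analysis with the DistilBERT SST-2 checkpoint used by the demo.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(sentiment("This demo is great!"))  # e.g. [{'label': 'POSITIVE', 'score': ...}]

# Image captioning with the ViT-GPT2 checkpoint. "sample.jpg" is a
# placeholder — substitute any local image file.
captioner = pipeline(
    "image-to-text",
    model="nlpconnect/vit-gpt2-image-captioning",
)
# print(captioner("sample.jpg"))  # returns a list with a 'generated_text' field
```

Each pipeline downloads its model weights from the Hugging Face Hub on first use, so the initial call can take a while.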