Hashii1729 committed
Commit d2ba52b · 1 Parent(s): 3d9b5b1

Add initial project structure with Docker, FastAPI, and Ollama integration


- Create Dockerfile for FastAPI and Ollama services
- Set up docker-compose for service orchestration
- Implement FastAPI application for fashion analysis
- Add Makefile for common development commands
- Include .gitignore for Python and Docker files
- Create README files for project and Docker setup
- Add startup scripts for Ollama and FastAPI
- Define requirements for Python dependencies

.gitignore ADDED
@@ -0,0 +1,212 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be added to the global gitignore or merged into this project gitignore. For a PyCharm
+ # project, it is generally recommended to include the cache and path variables, but
+ # exclude the .idea/ folder.
+ # https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
+ .idea/
+
+ # VS Code
+ .vscode/
+
+ # FastAPI specific
+ # Database files
+ *.db
+ *.sqlite
+ *.sqlite3
+
+ # Log files
+ *.log
+ logs/
+
+ # Temporary files
+ tmp/
+ temp/
+
+ # OS generated files
+ .DS_Store
+ .DS_Store?
+ ._*
+ .Spotlight-V100
+ .Trashes
+ ehthumbs.db
+ Thumbs.db
+
+ # Docker
+ .dockerignore
+ docker-compose.override.yml
+
+ # Alembic (database migrations)
+ alembic/versions/*.py
+ !alembic/versions/
+ alembic.ini
+
+ # Static files (if using)
+ static/
+ media/
+
+ # Configuration files with sensitive data
+ config.ini
+ settings.ini
+ .secrets
+
+ # API documentation generated files
+ docs/build/
+
+ # Virtual Environment
+ venv/
+ ollama/
DockerFile ADDED
@@ -0,0 +1,43 @@
+ FROM ollama/ollama:0.9.2
+
+ # Install Python and dependencies with security updates
+ RUN apt-get update && apt-get install -y \
+     python3 \
+     python3-pip \
+     curl \
+     && apt-get upgrade -y \
+     && apt-get clean \
+     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+
+ # Create non-root user for security
+ RUN groupadd -r appuser && useradd -r -g appuser appuser
+
+ # Set working directory
+ WORKDIR /app
+
+ # Copy requirements and install Python packages
+ COPY requirements.txt .
+ RUN pip3 install --no-cache-dir -r requirements.txt
+
+ # Copy the FastAPI app
+ COPY fast.py .
+ COPY start.sh .
+ RUN chmod +x start.sh
+
+ # Change ownership to non-root user
+ RUN chown -R appuser:appuser /app
+
+ # Set environment variables
+ ENV OLLAMA_HOST=0.0.0.0:11434
+ ENV OLLAMA_ORIGINS=*
+
+ # Expose ports
+ EXPOSE 7860
+ EXPOSE 11434
+
+ # Switch to non-root user
+ USER appuser
+
+ # Start both Ollama and FastAPI
+ ENTRYPOINT []
+ CMD ["bash", "/app/start.sh"]
Dockerfile.fastapi ADDED
@@ -0,0 +1,41 @@
+ FROM python:3.11-slim
+
+ # Install system dependencies with security updates
+ RUN apt-get update && apt-get install -y \
+     curl \
+     && apt-get upgrade -y \
+     && apt-get clean \
+     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+
+ # Create non-root user for security
+ RUN groupadd -r appuser && useradd -r -g appuser appuser
+
+ # Set working directory
+ WORKDIR /app
+
+ # Copy requirements and install Python packages
+ COPY requirements.txt .
+ RUN pip3 install --no-cache-dir --upgrade pip && \
+     pip3 install --no-cache-dir -r requirements.txt
+
+ # Copy FastAPI application
+ COPY fast.py .
+
+ # Create logs directory
+ RUN mkdir -p /app/logs
+
+ # Change ownership to non-root user
+ RUN chown -R appuser:appuser /app
+
+ # Switch to non-root user
+ USER appuser
+
+ # Expose port
+ EXPOSE 7860
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
+     CMD curl -f http://localhost:7860/health || exit 1
+
+ # Start FastAPI
+ CMD ["python3", "-m", "uvicorn", "fast:app", "--host", "0.0.0.0", "--port", "7860"]
Makefile ADDED
@@ -0,0 +1,72 @@
+ .PHONY: help build up down logs clean restart status health
+
+ # Default target
+ help:
+ 	@echo "Fashion Analyzer - Docker Compose Commands"
+ 	@echo ""
+ 	@echo "Available commands:"
+ 	@echo "  build         - Build all services"
+ 	@echo "  up            - Start all services"
+ 	@echo "  down          - Stop all services"
+ 	@echo "  logs          - View logs from all services"
+ 	@echo "  clean         - Stop services and remove volumes"
+ 	@echo "  restart       - Restart all services"
+ 	@echo "  status        - Show service status"
+ 	@echo "  health        - Check application health"
+ 	@echo "  shell-api     - Open shell in FastAPI container"
+ 	@echo "  shell-ollama  - Open shell in Ollama container"
+
+ # Build all services
+ build:
+ 	docker-compose build
+
+ # Start all services
+ up:
+ 	docker-compose up -d
+ 	@echo "Services starting... Check status with 'make status'"
+ 	@echo "Web interface will be available at: http://localhost:7860"
+
+ # Stop all services
+ down:
+ 	docker-compose down
+
+ # View logs
+ logs:
+ 	docker-compose logs -f
+
+ # Clean everything (including volumes)
+ clean:
+ 	docker-compose down -v --remove-orphans
+ 	docker system prune -f
+
+ # Restart all services
+ restart: down up
+
+ # Show service status
+ status:
+ 	docker-compose ps
+
+ # Check application health
+ health:
+ 	@echo "Checking Ollama health..."
+ 	@curl -s http://localhost:11434/api/tags > /dev/null && echo "✅ Ollama: Healthy" || echo "❌ Ollama: Unhealthy"
+ 	@echo "Checking FastAPI health..."
+ 	@curl -s http://localhost:7860/health > /dev/null && echo "✅ FastAPI: Healthy" || echo "❌ FastAPI: Unhealthy"
+
+ # Open shell in FastAPI container
+ shell-api:
+ 	docker-compose exec fastapi bash
+
+ # Open shell in Ollama container
+ shell-ollama:
+ 	docker-compose exec ollama bash
+
+ # Development commands
+ dev-build:
+ 	docker-compose build --no-cache
+
+ dev-logs-api:
+ 	docker-compose logs -f fastapi
+
+ dev-logs-ollama:
+ 	docker-compose logs -f ollama
README copy.md ADDED
@@ -0,0 +1,162 @@
+ # OutFitly API
+
+ A FastAPI-based web application for outfit management and recommendations.
+
+ ## Features
+
+ - Fast and modern API built with FastAPI
+ - Image processing capabilities with Pillow
+ - Data validation with Pydantic
+ - HTTP client functionality with requests
+ - Automatic API documentation
+
+ ## Prerequisites
+
+ - Python 3.8 or higher
+ - pip (Python package installer)
+
+ ## Installation
+
+ 1. **Clone the repository**
+
+    ```bash
+    git clone <repository-url>
+    cd OutFitly
+    ```
+
+ 2. **Create a virtual environment**
+
+    ```bash
+    python -m venv ollama
+
+    # On Windows
+    ollama\Scripts\activate
+
+    # On macOS/Linux
+    source ollama/bin/activate
+    ```
+
+ 3. **Install dependencies**
+
+    ```bash
+    pip install -r fastapi_requirements.txt
+    ```
+
+ ## Usage
+
+ ### Running the Development Server
+
+ Start the FastAPI development server:
+
+ ```bash
+ uvicorn main:app --reload
+ ```
+
+ The API will be available at:
+
+ - **API**: http://localhost:8000
+ - **Interactive API docs (Swagger UI)**: http://localhost:8000/docs
+ - **Alternative API docs (ReDoc)**: http://localhost:8000/redoc
+
+ ### Running in Production
+
+ For production deployment:
+
+ ```bash
+ uvicorn main:app --host 0.0.0.0 --port 8000
+ ```
+
+ ## API Documentation
+
+ Once the server is running, you can access the interactive API documentation at:
+
+ - Swagger UI: http://localhost:8000/docs
+ - ReDoc: http://localhost:8000/redoc
+
+ ## Project Structure
+
+ ```
+ OutFitly/
+ ├── AI/
+ │   ├── fastapi_requirements.txt
+ │   └── [other AI-related files]
+ ├── main.py            # FastAPI application entry point
+ ├── requirements.txt   # Python dependencies
+ ├── .gitignore         # Git ignore rules
+ └── README.md          # This file
+ ```
+
+ ## Dependencies
+
+ - **FastAPI** (0.104.1): Modern, fast web framework for building APIs
+ - **Uvicorn** (0.24.0): ASGI server for running FastAPI applications
+ - **Pillow** (10.0.1): Python Imaging Library for image processing
+ - **Requests** (2.31.0): HTTP library for making API calls
+ - **Pydantic** (2.4.2): Data validation and settings management
+
+ ## Development
+
+ ### Setting up the Development Environment
+
+ 1. Follow the installation steps above
+ 2. Install additional development dependencies (if any):
+    ```bash
+    pip install pytest pytest-asyncio httpx
+    ```
+
+ ### Running Tests
+
+ ```bash
+ pytest
+ ```
+
+ ### Code Style
+
+ This project follows PEP 8 style guidelines. You can check code style with:
+
+ ```bash
+ flake8 .
+ ```
+
+ ## Environment Variables
+
+ Create a `.env` file in the root directory for environment-specific configuration:
+
+ ```env
+ # Example environment variables
+ DEBUG=True
+ DATABASE_URL=sqlite:///./app.db
+ SECRET_KEY=your-secret-key-here
+ ```
+
+ ## Contributing
+
+ 1. Fork the repository
+ 2. Create a feature branch (`git checkout -b feature/amazing-feature`)
+ 3. Commit your changes (`git commit -m 'Add some amazing feature'`)
+ 4. Push to the branch (`git push origin feature/amazing-feature`)
+ 5. Open a Pull Request
+
+ ## License
+
+ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
+
+ ## Support
+
+ If you encounter any issues or have questions, please:
+
+ 1. Check the [documentation](http://localhost:8000/docs) when the server is running
+ 2. Search existing [issues](../../issues)
+ 3. Create a new issue if needed
+
+ ## Roadmap
+
+ - [ ] Add authentication and authorization
+ - [ ] Implement outfit recommendation algorithms
+ - [ ] Add image upload and processing features
+ - [ ] Create user management system
+ - [ ] Add database integration
+ - [ ] Implement caching layer
+
+ ---
+
+ **Built with ❤️ using FastAPI**
README-docker-compose.md ADDED
@@ -0,0 +1,201 @@
+ # Fashion Analyzer - Docker Compose Setup
+
+ This project provides a secure, containerized fashion analysis application using Ollama and FastAPI with Docker Compose.
+
+ ## 🏗️ Architecture
+
+ The application consists of three services:
+
+ 1. **Ollama Service**: Runs the Ollama server with the LLaVA model for vision analysis
+ 2. **FastAPI Service**: Provides the web API and user interface
+ 3. **Model Loader**: One-time service to download the required LLaVA model
+
+ ## 🔒 Security Features
+
+ - ✅ **Pinned Ollama version** (0.9.2) - No critical vulnerabilities
+ - ✅ **Non-root user execution** - Enhanced container security
+ - ✅ **Security updates** - Latest package updates applied
+ - ✅ **Health checks** - Service monitoring and restart policies
+ - ✅ **Network isolation** - Services communicate via internal network
+
+ ## 🚀 Quick Start
+
+ ### Prerequisites
+
+ - Docker Engine 20.10+
+ - Docker Compose 2.0+
+
+ ### 1. Start the Application
+
+ ```bash
+ # Start all services
+ docker-compose up -d
+
+ # View logs
+ docker-compose logs -f
+
+ # Check service status
+ docker-compose ps
+ ```
+
+ ### 2. Access the Application
+
+ - **Web Interface**: http://localhost:7860
+ - **API Documentation**: http://localhost:7860/docs
+ - **Health Check**: http://localhost:7860/health
+ - **Ollama API**: http://localhost:11434
+
+ ### 3. Stop the Application
+
+ ```bash
+ # Stop all services
+ docker-compose down
+
+ # Stop and remove volumes (removes downloaded models)
+ docker-compose down -v
+ ```
+
+ ## 📁 Project Structure
+
+ ```
+ AI/
+ ├── docker-compose.yml   # Main orchestration file
+ ├── Dockerfile.fastapi   # FastAPI service Dockerfile
+ ├── .env                 # Environment variables
+ ├── .dockerignore        # Docker build exclusions
+ ├── fast.py              # FastAPI application
+ ├── requirements.txt     # Python dependencies
+ └── logs/                # Application logs (created at runtime)
+ ```
+
+ ## ⚙️ Configuration
+
+ ### Environment Variables (.env)
+
+ ```env
+ OLLAMA_HOST=0.0.0.0:11434
+ OLLAMA_ORIGINS=*
+ OLLAMA_BASE_URL=http://ollama:11434
+ OLLAMA_PORT=11434
+ FASTAPI_PORT=7860
+ ```
+
+ ### Custom Configuration
+
+ To modify ports or other settings:
+
+ 1. Edit the `.env` file
+ 2. Restart services: `docker-compose up -d`
+
+ ## 🔧 Development
+
+ ### Building Images
+
+ ```bash
+ # Build only the FastAPI service
+ docker-compose build fastapi
+
+ # Build with no cache
+ docker-compose build --no-cache
+ ```
+
+ ### Viewing Logs
+
+ ```bash
+ # All services
+ docker-compose logs -f
+
+ # Specific service
+ docker-compose logs -f fastapi
+ docker-compose logs -f ollama
+ ```
+
+ ### Debugging
+
+ ```bash
+ # Execute commands in running containers
+ docker-compose exec fastapi bash
+ docker-compose exec ollama bash
+
+ # Check service health
+ docker-compose exec fastapi curl http://localhost:7860/health
+ ```
+
+ ## 📊 Monitoring
+
+ ### Health Checks
+
+ All services include health checks:
+
+ - **Ollama**: Checks API availability
+ - **FastAPI**: Checks application health and Ollama connectivity
+
+ ### Service Dependencies
+
+ - FastAPI waits for Ollama to be healthy before starting
+ - Model loader runs after Ollama is ready
+
+ ## 🛠️ Troubleshooting
+
+ ### Common Issues
+
+ 1. **Port conflicts**: Change ports in the `.env` file
+ 2. **Model download fails**: Check internet connection and Ollama logs
+ 3. **FastAPI can't connect to Ollama**: Verify network configuration
+
+ ### Reset Everything
+
+ ```bash
+ # Stop and remove everything
+ docker-compose down -v --remove-orphans
+
+ # Remove images
+ docker-compose down --rmi all
+
+ # Start fresh
+ docker-compose up -d
+ ```
+
+ ## 📈 Scaling
+
+ To run multiple FastAPI instances:
+
+ ```bash
+ # Scale the FastAPI service
+ docker-compose up -d --scale fastapi=3
+ ```
+
+ Note: You'll need a load balancer for multiple instances.
+
+ ## 🔐 Security Considerations
+
+ - Services run as non-root users
+ - Network isolation between services
+ - No sensitive data in environment variables
+ - Regular security updates applied
+ - Pinned dependency versions
+
+ ## 📝 API Usage
+
+ ### Upload and Analyze an Image
+
+ ```bash
+ curl -X POST "http://localhost:7860/analyze-image" \
+   -H "accept: application/json" \
+   -H "Content-Type: multipart/form-data" \
+   -F "file=@image.jpg"
+ ```
+
+ ### Health Check
+
+ ```bash
+ curl http://localhost:7860/health
+ ```
+
+ ## 🤝 Contributing
+
+ 1. Make changes to the code
+ 2. Test with `docker-compose up --build`
+ 3. Submit a pull request
+
+ ## 📄 License
+
+ This project is licensed under the MIT License.
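
The `OLLAMA_BASE_URL` variable shown in the `.env` section above is what lets the same code run both inside and outside Compose: `fast.py` reads it and falls back to `localhost` when it is unset. A minimal sketch of that resolution logic (the helper name `resolve_ollama_url` is ours for illustration, not part of the project):

```python
def resolve_ollama_url(env: dict) -> str:
    """Mirror of fast.py's os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")."""
    return env.get("OLLAMA_BASE_URL", "http://localhost:11434")

# Inside Compose, the fastapi service receives the internal service hostname:
in_compose = resolve_ollama_url({"OLLAMA_BASE_URL": "http://ollama:11434"})

# Outside Docker, with no variable set, it falls back to the local server:
standalone = resolve_ollama_url({})
```

This is why "FastAPI can't connect to Ollama" in Troubleshooting usually comes down to the wrong value of this one variable for the environment in use.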
docker-compose.prod.yml ADDED
@@ -0,0 +1,57 @@
+ version: '3.8'
+
+ # Production overrides for docker-compose.yml
+ # Usage: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
+
+ services:
+   ollama:
+     restart: always
+     deploy:
+       resources:
+         limits:
+           memory: 8G
+         reservations:
+           memory: 4G
+     logging:
+       driver: "json-file"
+       options:
+         max-size: "10m"
+         max-file: "3"
+
+   fastapi:
+     restart: always
+     deploy:
+       resources:
+         limits:
+           memory: 1G
+         reservations:
+           memory: 512M
+     logging:
+       driver: "json-file"
+       options:
+         max-size: "10m"
+         max-file: "3"
+     environment:
+       - ENVIRONMENT=production
+       - LOG_LEVEL=info
+
+   # Add an nginx reverse proxy for production
+   nginx:
+     image: nginx:alpine
+     container_name: fashion-analyzer-nginx
+     restart: always
+     ports:
+       - "80:80"
+       - "443:443"
+     volumes:
+       - ./nginx.conf:/etc/nginx/nginx.conf:ro
+       - ./ssl:/etc/nginx/ssl:ro
+     depends_on:
+       - fastapi
+     networks:
+       - fashion-analyzer
+     logging:
+       driver: "json-file"
+       options:
+         max-size: "10m"
+         max-file: "3"
docker-compose.yml ADDED
@@ -0,0 +1,68 @@
+ version: '3.8'
+
+ services:
+   ollama:
+     image: ollama/ollama:0.9.2
+     container_name: ollama-server
+     restart: unless-stopped
+     ports:
+       - "11434:11434"
+     volumes:
+       - ollama_data:/root/.ollama
+     environment:
+       - OLLAMA_HOST=0.0.0.0:11434
+       - OLLAMA_ORIGINS=*
+     healthcheck:
+       test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
+       interval: 30s
+       timeout: 10s
+       retries: 3
+       start_period: 30s
+     networks:
+       - fashion-analyzer
+
+   fastapi:
+     build:
+       context: .
+       dockerfile: Dockerfile.fastapi
+     container_name: fashion-analyzer-api
+     restart: unless-stopped
+     ports:
+       - "7860:7860"
+     environment:
+       - OLLAMA_BASE_URL=http://ollama:11434
+     depends_on:
+       ollama:
+         condition: service_healthy
+     networks:
+       - fashion-analyzer
+     volumes:
+       - ./logs:/app/logs
+
+   model-loader:
+     image: ollama/ollama:0.9.2
+     container_name: model-loader
+     restart: "no"
+     environment:
+       - OLLAMA_HOST=http://ollama:11434
+     depends_on:
+       ollama:
+         condition: service_healthy
+     networks:
+       - fashion-analyzer
+     command: >
+       sh -c "
+       echo 'Waiting for Ollama server to be ready...' &&
+       sleep 10 &&
+       echo 'Pulling LLaVA model for vision analysis...' &&
+       ollama pull llava:7b &&
+       echo 'Model pulled successfully!'
+       "
+
+ volumes:
+   ollama_data:
+     driver: local
+
+ networks:
+   fashion-analyzer:
+     driver: bridge
fast.py ADDED
@@ -0,0 +1,184 @@
+ from fastapi import FastAPI, HTTPException, UploadFile, File
+ from fastapi.responses import JSONResponse, HTMLResponse, PlainTextResponse
+ from pydantic import BaseModel
+ from typing import List, Optional
+ import requests
+ import json
+ import base64
+ from PIL import Image
+ import io
+ import os
+ import time
+ import uvicorn
+
+ app = FastAPI(title="Ollama Fashion Analyzer API", version="1.0.0")
+
+ class OllamaFashionAnalyzer:
+     def __init__(self, base_url=None):
+         """Initialize Ollama client"""
+         self.base_url = base_url or os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
+         self.model = "llava:7b"  # Using LLaVA for vision analysis
+
+     def encode_image_from_bytes(self, image_bytes):
+         """Encode image bytes to base64 for Ollama"""
+         image = Image.open(io.BytesIO(image_bytes))
+
+         # Convert to RGB if necessary
+         if image.mode != 'RGB':
+             image = image.convert('RGB')
+
+         # Convert to base64
+         buffered = io.BytesIO()
+         image.save(buffered, format="JPEG")
+         img_str = base64.b64encode(buffered.getvalue()).decode()
+
+         return img_str
+
+     def analyze_clothing_from_bytes(self, image_bytes):
+         """Detailed clothing analysis using Ollama from image bytes"""
+
+         # Encode image
+         image_b64 = self.encode_image_from_bytes(image_bytes)
+
+         # Fashion analysis prompt
+         prompt = """Analyze this clothing item in detail and provide information about:
+
+ 1. GARMENT TYPE: What type of clothing is this?
+ 2. COLORS: Primary and secondary colors
+ 3. COLLAR/NECKLINE: Style of collar or neckline
+ 4. SLEEVES: Sleeve type and length
+ 5. PATTERN: Any patterns or designs
+ 6. FIT: How does it fit (loose, fitted, etc.)
+ 7. MATERIAL: Apparent fabric type
+ 8. FEATURES: Buttons, pockets, zippers, etc.
+ 9. STYLE: Fashion style category
+ 10. OCCASION: Suitable occasions for wearing
+
+ Be specific and detailed in your analysis."""
+
+         # Make request to Ollama
+         payload = {
+             "model": self.model,
+             "prompt": prompt,
+             "images": [image_b64],
+             "stream": False,
+             "options": {
+                 "temperature": 0.2,
+                 "num_predict": 500
+             }
+         }
+
+         try:
+             response = requests.post(
+                 f"{self.base_url}/api/generate",
+                 json=payload,
+                 timeout=120  # Increased timeout for vision models
+             )
+             response.raise_for_status()
+
+             result = response.json()
+             return result.get('response', 'No response received')
+
+         except requests.exceptions.RequestException as e:
+             return f"Error: {str(e)}"
+
+ # Initialize analyzer
+ analyzer = OllamaFashionAnalyzer()
+
+ # Request/Response models
+ class AnalysisResponse(BaseModel):
+     analysis: str
+
+ # API Endpoints
+ @app.get("/", response_class=HTMLResponse)
+ async def root():
+     """Main page with file upload interface"""
+     return """
+     <!DOCTYPE html>
+     <html>
+     <head>
+         <title>Fashion Analyzer</title>
+         <style>
+             body { font-family: Arial, sans-serif; max-width: 800px; margin: 50px auto; padding: 20px; }
+             .upload-area { border: 2px dashed #ccc; padding: 50px; text-align: center; margin: 20px 0; }
+             .result { background: #f5f5f5; padding: 20px; margin: 20px 0; border-radius: 5px; }
+         </style>
+     </head>
+     <body>
+         <h1>🎽 Fashion Analyzer</h1>
+         <p>Upload an image of clothing to get detailed fashion analysis</p>
+
+         <div class="upload-area">
+             <input type="file" id="imageInput" accept="image/*" style="margin: 10px;">
+             <br>
+             <button onclick="analyzeImage()" style="padding: 10px 20px; margin: 10px;">Analyze Fashion</button>
+         </div>
+
+         <div id="result" class="result" style="display: none;">
+             <h3>Analysis Result:</h3>
+             <pre id="analysisText"></pre>
+         </div>
+
+         <script>
+             async function analyzeImage() {
+                 const input = document.getElementById('imageInput');
+                 const file = input.files[0];
+
+                 if (!file) {
+                     alert('Please select an image file');
+                     return;
+                 }
+
+                 const formData = new FormData();
+                 formData.append('file', file);
+
+                 document.getElementById('analysisText').textContent = 'Analyzing... Please wait...';
+                 document.getElementById('result').style.display = 'block';
+
+                 try {
+                     const response = await fetch('/analyze-image', {
+                         method: 'POST',
+                         body: formData
+                     });
+
+                     const result = await response.json();
+                     document.getElementById('analysisText').textContent = result.analysis;
+                 } catch (error) {
+                     document.getElementById('analysisText').textContent = 'Error: ' + error.message;
+                 }
+             }
+         </script>
+     </body>
+     </html>
+     """
+
+ @app.post("/analyze-image", response_model=AnalysisResponse)
+ async def analyze_image(file: UploadFile = File(...)):
+     """Analyze an uploaded image"""
+     try:
+         # Read image bytes
+         image_bytes = await file.read()
+
+         # Analyze the clothing
+         analysis = analyzer.analyze_clothing_from_bytes(image_bytes)
+
+         return AnalysisResponse(analysis=analysis)
+
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=f"Error analyzing image: {str(e)}")
+
+ @app.get("/health")
+ async def health_check():
+     """Health check endpoint"""
+     try:
+         # Test Ollama connection
+         response = requests.get(f"{analyzer.base_url}/api/tags", timeout=5)
+         if response.status_code == 200:
+             return {"status": "healthy", "ollama": "connected"}
+         else:
+             return {"status": "unhealthy", "ollama": "disconnected"}
+     except requests.exceptions.RequestException:
+         return {"status": "unhealthy", "ollama": "disconnected"}
+
+ if __name__ == "__main__":
+     uvicorn.run(app, host="0.0.0.0", port=7860)
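
For reference, the request body that `analyze_clothing_from_bytes` above sends to Ollama's `/api/generate` can be sketched on its own; the fake JPEG bytes and shortened prompt below are placeholders for illustration, not values from the commit:

```python
import base64
import json

# Placeholder bytes standing in for a real JPEG; only the payload shape matters here.
fake_jpeg = b"\xff\xd8\xff\xe0 not a real image"
image_b64 = base64.b64encode(fake_jpeg).decode()

# Mirrors the payload built in fast.py: one base64-encoded image, non-streaming,
# low temperature, response capped at 500 tokens.
payload = {
    "model": "llava:7b",
    "prompt": "Analyze this clothing item in detail...",
    "images": [image_b64],
    "stream": False,
    "options": {"temperature": 0.2, "num_predict": 500},
}

# What requests.post(f"{base_url}/api/generate", json=payload) serializes on the wire.
body = json.dumps(payload)
```

With `"stream": False`, Ollama returns a single JSON object whose `response` field holds the full analysis text, which is what `result.get('response', ...)` reads back.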
fastapi_startup_script.sh ADDED
@@ -0,0 +1,23 @@
+ #!/bin/bash
+
+ # Start Ollama server in background
+ ollama serve &
+
+ # Wait for Ollama to start
+ echo "Waiting for Ollama server to start..."
+ while ! curl -s http://localhost:11434 > /dev/null; do
+     sleep 1
+ done
+
+ echo "Ollama server started!"
+
+ # Pull the LLaVA model for vision analysis
+ echo "Pulling LLaVA model for vision analysis..."
+ ollama pull llava:7b
+
+ echo "Model pulled successfully!"
+
+ # Start FastAPI on port 7860 (HF Spaces requirement)
+ echo "Starting FastAPI server on port 7860..."
+ cd /app
+ python3 -m uvicorn fast:app --host 0.0.0.0 --port 7860
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ fastapi==0.104.1
+ uvicorn==0.24.0
+ requests==2.31.0
+ Pillow==10.0.1
+ pydantic==2.4.2
+ python-multipart
start.sh ADDED
@@ -0,0 +1,23 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/bin/bash
2
+
3
+ # Start Ollama server in background
4
+ ollama serve &
5
+
6
+ # Wait for Ollama to start
7
+ echo "Waiting for Ollama server to start..."
8
+ while ! curl -s http://localhost:11434 > /dev/null; do
9
+ sleep 1
10
+ done
11
+
12
+ echo "Ollama server started!"
13
+
14
+ # Pull the LLaVA model for vision analysis
15
+ echo "Pulling LLaVA model for vision analysis..."
16
+ ollama pull llava:7b
17
+
18
+ echo "Model pulled successfully!"
19
+
20
+ # Start FastAPI on port 7860 (HF Spaces requirement)
21
+ echo "Starting FastAPI server on port 7860..."
22
+ cd /app
23
+ python3 -m uvicorn fast:app --host 0.0.0.0 --port 7860