s4um1l committed · Commit 9a00962 · 0 parent(s)

feat: Implement OpenAI-powered Chainlit chat app with Docker


This commit adds:
- Initial Chainlit chat application with async OpenAI integration
- Streaming response support for real-time AI chat experience
- Full conversation context management
- Docker containerization with UV package management
- Environment variable handling for secure API key management
- Error handling and resilient architecture

The application provides a production-ready ChatGPT-like interface that
can be easily deployed using Docker.
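The "full conversation context management" above boils down to a rolling message list that always starts with the system prompt and grows by one user/assistant pair per turn. A minimal sketch of that pattern follows; note that `MAX_TURNS` and the trimming step are illustrative assumptions for bounding token usage, not part of the committed app (which keeps the full history):

```python
# Rolling message-history pattern, as used by app.py's session state.
# MAX_TURNS is a hypothetical cap, not present in the committed code.
SYSTEM_PROMPT = "You are a helpful, friendly AI assistant."
MAX_TURNS = 3  # keep only the last 3 user/assistant exchanges

def new_history():
    # Every conversation starts with the system prompt
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def add_turn(history, user_text, assistant_text):
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    # Trim to the system prompt plus the most recent MAX_TURNS exchanges
    if len(history) > 1 + 2 * MAX_TURNS:
        history[:] = [history[0]] + history[-2 * MAX_TURNS:]
    return history

history = new_history()
for i in range(5):
    add_turn(history, f"question {i}", f"answer {i}")

print(len(history))           # 7: system prompt + 3 retained exchanges
print(history[1]["content"])  # question 2
```

Passing this list as `messages` on every API call is what gives the model memory of earlier turns; the API itself is stateless.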

Files changed (5)
  1. .gitignore +4 -0
  2. Dockerfile +30 -0
  3. app.py +97 -0
  4. chainlit.md +14 -0
  5. requirements.txt +2 -0
.gitignore ADDED
@@ -0,0 +1,4 @@
+ .env
+ .venv
+ .chainlit
+ __pycache__
Dockerfile ADDED
@@ -0,0 +1,32 @@
+ # Use Python 3.10 as the base image
+ FROM python:3.10-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install uv
+ RUN pip install uv
+
+ # Copy requirements file
+ COPY requirements.txt .
+
+ # Install dependencies system-wide
+ RUN uv pip install --system -r requirements.txt
+
+ # Copy the application code
+ COPY . .
+
+ # Write the API key into .env at build time from a BuildKit secret,
+ # in the KEY=VALUE format that python-dotenv expects
+ RUN --mount=type=secret,id=openai_api_key \
+     printf 'OPENAI_API_KEY=%s\n' "$(cat /run/secrets/openai_api_key)" > /app/.env
+
+ # Fallback: accept the key as a build argument
+ # (note: ARG/ENV values are baked into image metadata; prefer the secret mount)
+ ARG OPENAI_API_KEY
+ ENV OPENAI_API_KEY=$OPENAI_API_KEY
+
+ # Expose the port Chainlit runs on
+ EXPOSE 8000
+
+ # Command to run the application
+ CMD ["chainlit", "run", "app.py", "--host", "0.0.0.0", "--port", "8000"]
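The secret mount in the Dockerfile requires BuildKit and a matching `--secret` flag at build time. A usage sketch (the image tag `chainlit-chat` and the temporary key-file path are illustrative, not part of the commit):

```shell
# Build with the API key supplied as a BuildKit secret
echo "$OPENAI_API_KEY" > /tmp/openai_key
DOCKER_BUILDKIT=1 docker build \
  --secret id=openai_api_key,src=/tmp/openai_key \
  -t chainlit-chat .

# Alternatively, pass the key at runtime instead of baking it into the image
docker run -p 8000:8000 -e OPENAI_API_KEY="$OPENAI_API_KEY" chainlit-chat
```

The runtime `-e` form is the safer of the two, since anything written into an image layer or `ENV` can be recovered from the image itself.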
app.py ADDED
@@ -0,0 +1,95 @@
+ import os
+ from dotenv import load_dotenv
+ import chainlit as cl
+ from openai import AsyncOpenAI
+
+ # Load environment variables from .env file
+ load_dotenv()
+
+ # Default model settings
+ DEFAULT_SETTINGS = {
+     "model": "gpt-3.5-turbo",
+     "temperature": 0.7,
+     "max_tokens": 500,
+     "top_p": 1,
+     "frequency_penalty": 0,
+     "presence_penalty": 0,
+ }
+
+ SYSTEM_PROMPT = "You are a helpful, friendly AI assistant. Provide clear and concise responses."
+
+ @cl.on_chat_start
+ async def start():
+     """
+     Initialize the chat session:
+     - Create the OpenAI client
+     - Set up the message history with the system prompt
+     - Configure model settings
+     - Send a welcome message
+     """
+     # Initialize OpenAI client
+     client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
+     cl.user_session.set("client", client)
+
+     # Initialize message history with system prompt
+     message_history = [{"role": "system", "content": SYSTEM_PROMPT}]
+     cl.user_session.set("message_history", message_history)
+
+     # Save model settings
+     cl.user_session.set("settings", DEFAULT_SETTINGS)
+
+     # Send welcome message
+     await cl.Message(
+         content="Hello! I'm your AI assistant powered by OpenAI. How can I help you today?"
+     ).send()
+
+
+ @cl.on_message
+ async def main(user_message: cl.Message):
+     """
+     Process user messages and generate AI responses:
+     - Update the message history with the user input
+     - Call the OpenAI API with the current conversation context
+     - Stream the response back to the user
+     - Update the message history with the AI response
+
+     Args:
+         user_message: The message sent by the user
+     """
+     # Retrieve session data
+     client = cl.user_session.get("client")
+     message_history = cl.user_session.get("message_history")
+     settings = cl.user_session.get("settings")
+
+     # Add user message to history
+     message_history.append({"role": "user", "content": user_message.content})
+
+     # Prepare an empty response message to stream into
+     response_message = cl.Message(content="")
+     await response_message.send()
+
+     try:
+         # Call the OpenAI API with streaming enabled
+         stream = await client.chat.completions.create(
+             messages=message_history,
+             stream=True,
+             **settings,
+         )
+
+         # Stream the response token by token
+         full_response = ""
+         async for chunk in stream:
+             content_chunk = chunk.choices[0].delta.content
+             if content_chunk:
+                 full_response += content_chunk
+                 await response_message.stream_token(content_chunk)
+         await response_message.update()
+
+         # Add AI response to message history
+         message_history.append({"role": "assistant", "content": full_response})
+         cl.user_session.set("message_history", message_history)
+
+     except Exception as e:
+         # Surface errors to the user instead of failing silently
+         response_message.content = f"Error: {e}"
+         await response_message.update()
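The token-accumulation logic in the handler above can be exercised without the OpenAI client by substituting a stub async generator for the stream. In this sketch, `fake_stream` and `collect` are hypothetical helpers standing in for the API stream and the handler's loop; the `if chunk:` guard mirrors the `delta.content` check, which must skip `None` chunks (role-only or final deltas):

```python
import asyncio

# Stub async generator playing the role of the OpenAI stream
async def fake_stream(tokens):
    for t in tokens:
        yield t

# Accumulate tokens the same way the handler builds full_response
async def collect(stream):
    full_response = ""
    async for chunk in stream:
        if chunk:  # skip None chunks, like the delta.content check
            full_response += chunk
    return full_response

result = asyncio.run(collect(fake_stream(["Hel", "lo", None, "!"])))
print(result)  # Hello!
```

Dropping the `None` guard would raise a `TypeError` on concatenation, which is why the handler checks the delta before appending.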
chainlit.md ADDED
@@ -0,0 +1,14 @@
+ # Welcome to Chainlit! 🚀🤖
+
+ Hi there, Developer! 👋 We're excited to have you on board. Chainlit is a powerful tool designed to help you prototype, debug and share applications built on top of LLMs.
+
+ ## Useful Links 🔗
+
+ - **Documentation:** Get started with our comprehensive [Chainlit Documentation](https://docs.chainlit.io) 📚
+ - **Discord Community:** Join our friendly [Chainlit Discord](https://discord.gg/k73SQ3FyUh) to ask questions, share your projects, and connect with other developers! 💬
+
+ We can't wait to see what you create with Chainlit! Happy coding! 💻😊
+
+ ## Welcome screen
+
+ To modify the welcome screen, edit the `chainlit.md` file at the root of your project. If you do not want a welcome screen, just leave this file empty.
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ chainlit
+ openai>=1.0.0
+ python-dotenv  # imported directly by app.py