aiqtech committed · Commit 4be2884 · verified · Parent: d886a90

Update README.md

Files changed (1):

1. README.md +1 -336

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-title: DeployPythonicRAG
+title: Python PDF RAG
 emoji: 📉
 colorFrom: blue
 colorTo: purple
@@ -7,338 +7,3 @@ sdk: docker
 pinned: false
 license: apache-2.0
 ---

# Deploying Pythonic Chat With Your Text File Application

In today's breakout rooms, we will be following the process that you saw during the challenge.

Today, we will repeat the same process - but powered by the Pythonic RAG implementation we created last week.

You'll notice a few differences in the `app.py` logic - as well as a few changes to the `aimakerspace` package to get things working smoothly with Chainlit.

> NOTE: If you want to run this locally - be sure to use `uv sync`, and then `uv run chainlit run app.py` to start the application outside of Docker.

## Reference Diagram (It's Busy, but it works)

![image](https://i.imgur.com/IaEVZG2.png)

### Anatomy of a Chainlit Application

[Chainlit](https://docs.chainlit.io/get-started/overview) is a Python package, similar to Streamlit, that lets users write a backend and a front end in a single (or multiple) Python file(s). It is mainly used for prototyping LLM-based chat-style applications - though it is used in production in some settings with millions of MAUs (Monthly Active Users).

The primary method of customizing and interacting with the Chainlit UI is through a few critical [decorators](https://blog.hubspot.com/website/decorators-in-python).

> NOTE: Simply put, the decorators (in Chainlit) are just ways we can "plug in" to the functionality in Chainlit.

We'll be concerning ourselves with three main scopes (a minimal skeleton is sketched after this list):

1. On application start - when we start the Chainlit application with a command like `chainlit run app.py`
2. On chat start - when a chat session starts (a user opens the web browser to the address hosting the application)
3. On message - when the user sends a message through the input text box in the Chainlit UI
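
Here is a minimal skeleton of how those three scopes map onto a Chainlit file. The decorator names (`@cl.on_chat_start`, `@cl.on_message`) are real Chainlit hooks; the handler bodies are placeholders, not the actual `app.py` logic.

```python
import chainlit as cl

# 1. On application start: module-level code runs once when
#    `chainlit run app.py` starts the process.
print("Application starting - shared resources can be built here.")

# 2. On chat start: runs once per chat session (per browser tab or refresh).
@cl.on_chat_start
async def start_chat():
    await cl.Message(content="Session started!").send()

# 3. On message: runs every time the user sends a message.
@cl.on_message
async def handle_message(message: cl.Message):
    await cl.Message(content=f"You said: {message.content}").send()
```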

Let's dig into each scope and see what we're doing!

### On Application Start:

The first thing you'll notice is the traditional "wall of imports" - this ensures we have everything we need to run our application.

```python
import os
from typing import List
from chainlit.types import AskFileResponse
from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader, PDFLoader  # PDFLoader (used in process_file below) is assumed to live alongside TextFileLoader
from aimakerspace.openai_utils.prompts import (
    UserRolePrompt,
    SystemRolePrompt,
    AssistantRolePrompt,
)
from aimakerspace.openai_utils.embedding import EmbeddingModel
from aimakerspace.vectordatabase import VectorDatabase
from aimakerspace.openai_utils.chatmodel import ChatOpenAI
import chainlit as cl
```

Next up, we have some prompt templates. As all sessions will use the same prompt templates without modification, and we don't need these templates to be specific per session - we can set them up here, at the application scope.

```python
system_template = """\
Use the following context to answer a user's question. If you cannot find the answer in the context, say you don't know the answer."""
system_role_prompt = SystemRolePrompt(system_template)

user_prompt_template = """\
Context:
{context}

Question:
{question}
"""
user_role_prompt = UserRolePrompt(user_prompt_template)
```

> NOTE: You'll notice that these are the exact same prompt templates we used in the Pythonic RAG Notebook in Week 1 Day 2!
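
If you want to see what these templates produce, here is a quick sketch. It assumes that `create_message(**kwargs)` formats the template and returns an OpenAI-style role/content message dict - the call signature matches how the pipeline uses it below, but treat the exact return shape as an assumption about the `aimakerspace` package.

```python
# Rough illustration of how the prompt templates get used later in the pipeline.
formatted_system_prompt = system_role_prompt.create_message()
formatted_user_prompt = user_role_prompt.create_message(
    question="What is this document about?",
    context="Chunk 1 text...\nChunk 2 text...\n",
)

# Expected to look roughly like (shape is an assumption):
# {"role": "system", "content": "Use the following context to answer ..."}
# {"role": "user", "content": "Context:\nChunk 1 text...\n..."}
print(formatted_system_prompt)
print(formatted_user_prompt)
```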

Following that - we can create the Python class definition for our RAG pipeline - or *chain*, as we'll refer to it in the rest of this walkthrough.

Let's look at the definition first:

```python
class RetrievalAugmentedQAPipeline:
    def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
        self.llm = llm
        self.vector_db_retriever = vector_db_retriever

    async def arun_pipeline(self, user_query: str):
        ### RETRIEVAL
        context_list = self.vector_db_retriever.search_by_text(user_query, k=4)

        context_prompt = ""
        for context in context_list:
            context_prompt += context[0] + "\n"

        ### AUGMENTED
        formatted_system_prompt = system_role_prompt.create_message()

        formatted_user_prompt = user_role_prompt.create_message(question=user_query, context=context_prompt)

        ### GENERATION
        async def generate_response():
            async for chunk in self.llm.astream([formatted_system_prompt, formatted_user_prompt]):
                yield chunk

        return {"response": generate_response(), "context": context_list}
```

Notice a few things:

1. We have modified this `RetrievalAugmentedQAPipeline` from the initial notebook to support streaming.
2. In essence, our pipeline is *chaining* a few events together:
   1. We take our user query and chain it into our vector database to collect related chunks
   2. We take those contexts and our user's question and chain them into the prompt templates
   3. We take that prompt template and chain it into our LLM call
   4. We chain the response of the LLM call to the user
3. We are using a lot of `async` again!
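
Because `arun_pipeline` returns a dict whose `"response"` value is an async generator, you can also exercise the chain outside of Chainlit. A minimal sketch, assuming `OPENAI_API_KEY` is set and a `vector_db` has already been built as shown later in this walkthrough:

```python
import asyncio

async def smoke_test(pipeline: RetrievalAugmentedQAPipeline) -> None:
    # Kick off the pipeline and print tokens as they stream back.
    result = await pipeline.arun_pipeline("What is this document about?")
    async for chunk in result["response"]:
        print(chunk, end="", flush=True)
    print()

# Hypothetical wiring:
# pipeline = RetrievalAugmentedQAPipeline(llm=ChatOpenAI(), vector_db_retriever=vector_db)
# asyncio.run(smoke_test(pipeline))
```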

Now, we're going to create a helper function for processing uploaded text files.

First, we'll instantiate a shared `CharacterTextSplitter`.

```python
text_splitter = CharacterTextSplitter()
```

Now we can define our helper.

```python
def process_file(file: AskFileResponse):
    import tempfile
    import shutil

    print(f"Processing file: {file.name}")

    # Create a temporary file with the correct extension
    suffix = f".{file.name.split('.')[-1]}"
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as temp_file:
        # Copy the uploaded file content to the temporary file
        shutil.copyfile(file.path, temp_file.name)
        print(f"Created temporary file at: {temp_file.name}")

    # Create appropriate loader
    if file.name.lower().endswith('.pdf'):
        loader = PDFLoader(temp_file.name)
    else:
        loader = TextFileLoader(temp_file.name)

    try:
        # Load and process the documents
        documents = loader.load_documents()
        texts = text_splitter.split_texts(documents)
        return texts
    finally:
        # Clean up the temporary file
        try:
            os.unlink(temp_file.name)
        except Exception as e:
            print(f"Error cleaning up temporary file: {e}")
```

Simply put, this copies the upload to a temporary file, loads it with the appropriate loader (`PDFLoader` for PDFs, `TextFileLoader` otherwise), splits it with our `CharacterTextSplitter`, and returns the resulting list of strings!
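
Since `process_file` only needs an object with `.name` and `.path` attributes, you can smoke-test it locally without Chainlit. This is a hypothetical check (the stand-in object and sample text are made up), not part of `app.py`:

```python
import tempfile
from types import SimpleNamespace

# Write a small throwaway text file to stand in for an upload.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("Retrieval Augmented Generation combines search with generation.")
    fake_upload_path = f.name

# A stand-in for chainlit.types.AskFileResponse: just the attributes we use.
fake_upload = SimpleNamespace(name="sample.txt", path=fake_upload_path)

chunks = process_file(fake_upload)
print(f"Got {len(chunks)} chunk(s); first chunk starts with: {chunks[0][:40]!r}")
```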

#### ❓ QUESTION #1:

Why do we want to support streaming? What about streaming is important, or useful?

### On Chat Start:

The next scope is where "the magic happens". On Chat Start is when a user begins a chat session. This will happen whenever a user opens a new chat window, or refreshes an existing chat window.

You'll see that our code is set up to immediately show the user a chat box requesting them to upload a file.

```python
files = None

while files is None:
    files = await cl.AskFileMessage(
        content="Please upload a Text or PDF file to begin!",
        accept=["text/plain", "application/pdf"],
        max_size_mb=2,
        timeout=180,
    ).send()
```

Once we've obtained the text file - we'll use our processing helper function to process our text!
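
`AskFileMessage.send()` resolves to a list of uploaded files, so the follow-up presumably looks something like the sketch below; the status message is illustrative, not necessarily the exact `app.py` wording.

```python
file = files[0]

msg = cl.Message(content=f"Processing `{file.name}`...")
await msg.send()

# Split the upload into chunks using the helper defined above.
texts = process_file(file)
print(f"Processing {len(texts)} text chunks")
```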

After we have processed our text file - we'll need to create a `VectorDatabase` and populate it with our processed chunks and their related embeddings!

```python
vector_db = VectorDatabase()
vector_db = await vector_db.abuild_from_list(texts)
```
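
For intuition, `search_by_text` (used inside the pipeline) returns the `k` chunks most similar to a query. Based on how the pipeline indexes into the results, each entry appears to be a `(chunk_text, similarity_score)` pair - treat that shape as an assumption about the `aimakerspace` implementation.

```python
# Hypothetical peek at the retriever on its own.
results = vector_db.search_by_text("What is this document about?", k=4)
for chunk_text, score in results:
    print(f"{score:.3f} :: {chunk_text[:60]}...")
```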

Once we have that piece completed - we can create the chain we'll be using to respond to user queries!

```python
retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
    vector_db_retriever=vector_db,
    llm=chat_openai
)
```
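
Note that `chat_openai` isn't defined in the snippets above; presumably it is created once at application scope alongside the prompt templates, along the lines of:

```python
# Assumed application-scope instantiation of the chat model.
chat_openai = ChatOpenAI()
```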

Now, we'll save that into our user session!

> NOTE: Chainlit has some great documentation about [User Session](https://docs.chainlit.io/concepts/user-session).
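
Putting the pieces together, the whole `on_chat_start` handler presumably looks something like the sketch below - the key line is `cl.user_session.set("chain", ...)`, which stores the pipeline for this session so the On Message handler can retrieve it later. The handler name and exact assembly are assumptions about `app.py`.

```python
@cl.on_chat_start
async def on_chat_start():
    files = None

    # Wait for the user to upload a file.
    while files is None:
        files = await cl.AskFileMessage(
            content="Please upload a Text or PDF file to begin!",
            accept=["text/plain", "application/pdf"],
            max_size_mb=2,
            timeout=180,
        ).send()

    file = files[0]

    # Chunk the upload, embed it, and build the retriever.
    texts = process_file(file)
    vector_db = VectorDatabase()
    vector_db = await vector_db.abuild_from_list(texts)

    # Create the chain and stash it in the user session for on_message.
    retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
        vector_db_retriever=vector_db,
        llm=chat_openai,
    )
    cl.user_session.set("chain", retrieval_augmented_qa_pipeline)
```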

#### ❓ QUESTION #2:

Why are we using User Session here? What about Python makes us need to use this? Why not just store everything in a global variable?

### On Message

First, we load our chain from the user session:

```python
chain = cl.user_session.get("chain")
```

Then, we run the chain on the content of the message - and stream it to the front end - that's it!

```python
msg = cl.Message(content="")
result = await chain.arun_pipeline(message.content)

async for stream_resp in result["response"]:
    await msg.stream_token(stream_resp)
```
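
Wrapped in its decorator, the full handler is roughly the following. The handler name is arbitrary, and the final `msg.send()` (or `msg.update()`, depending on your Chainlit version) that flushes the streamed message to the UI is an assumption about the surrounding `app.py` code.

```python
@cl.on_message
async def main(message: cl.Message):
    # Retrieve the per-session chain built in on_chat_start.
    chain = cl.user_session.get("chain")

    msg = cl.Message(content="")
    result = await chain.arun_pipeline(message.content)

    # Stream tokens to the UI as they arrive, then finalize the message.
    async for stream_resp in result["response"]:
        await msg.stream_token(stream_resp)

    await msg.send()
```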

### 🎉

With that, you've turned our Pythonic RAG notebook into a Chainlit application!

## Deploying the Application to a Hugging Face Space

Due to the way the repository is created, it should be straightforward to deploy this to a Hugging Face Space!

> NOTE: If you wish to go through the local deployments using `uv run chainlit run app.py` and Docker - please feel free to do so!

<details>
<summary>Creating a Hugging Face Space</summary>

1. Navigate to the `Spaces` tab.

![image](https://i.imgur.com/aSMlX2T.png)

2. Click on `Create new Space`

![image](https://i.imgur.com/YaSSy5p.png)

3. Create the Space by providing values in the form. Make sure you've selected "Docker" as your Space SDK.

![image](https://i.imgur.com/6h9CgH6.png)

</details>

<details>
<summary>Adding this Repository to the Newly Created Space</summary>

1. Collect the SSH address from the newly created Space.

![image](https://i.imgur.com/Oag0m8E.png)

> NOTE: The address is the component that starts with `git@hf.co:spaces/`.

2. Use the command:

```bash
git remote add hf HF_SPACE_SSH_ADDRESS_HERE
```

3. Use the command:

```bash
git pull hf main --no-rebase --allow-unrelated-histories -X ours
```

4. Use the command:

```bash
git add .
```

5. Use the command:

```bash
git commit -m "Deploying Pythonic RAG"
```

6. Use the command:

```bash
git push hf main
```

7. The Space should automatically build as soon as the push is completed!

> NOTE: The build will fail before you complete the following steps!

</details>

<details>
<summary>Adding OpenAI Secrets to the Space</summary>

1. Navigate to your Space settings.

![image](https://i.imgur.com/zh0a2By.png)

2. Navigate to `Variables and secrets` on the Settings page and click `New secret`:

![image](https://i.imgur.com/g2KlZdz.png)

3. In the `Name` field, input `OPENAI_API_KEY`; in the `Value (private)` field, put your OpenAI API key.

![image](https://i.imgur.com/eFcZ8U3.png)

4. The Space will begin rebuilding!

</details>

## 🎉

You just deployed Pythonic RAG!

Try uploading a text file and asking some questions!

#### ❓ Discussion Question #1:

Upload a PDF file of the recent DeepSeek-R1 paper and ask the following questions:

1. What is RL and how does it help reasoning?
2. What is the difference between DeepSeek-R1 and DeepSeek-R1-Zero?
3. What is this paper about?

Does this application pass your vibe check? Are there any immediate pitfalls you're noticing?

## 🚧 CHALLENGE MODE 🚧

For the challenge mode, please instead create a simple FastAPI backend with a simple React (or any other JS framework) frontend.

You can use the same prompt templates and RAG pipeline as we did here - but you'll need to modify the code to work with FastAPI and React.
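
To give a feel for the backend half, here is a minimal FastAPI sketch that streams the pipeline's response over HTTP. Everything here (the route name, the request model, and the module-level `pipeline` placeholder) is hypothetical scaffolding, not code from this repository.

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class QueryRequest(BaseModel):
    question: str

# In a real app you would build this per uploaded document (similar to
# on_chat_start above); a module-level placeholder keeps this sketch short.
pipeline: RetrievalAugmentedQAPipeline | None = None

@app.post("/query")
async def query(request: QueryRequest):
    result = await pipeline.arun_pipeline(request.question)

    # Re-yield the async generator as plain text chunks for the frontend.
    async def token_stream():
        async for chunk in result["response"]:
            yield chunk

    return StreamingResponse(token_stream(), media_type="text/plain")
```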

Deploy this application to Hugging Face Spaces!
 