HanLee committed on
Commit cb31088 · 1 Parent(s): 55f8797
Files changed (2)
  1. README.md +4 -2
  2. app/app.py +1 -29
README.md CHANGED
@@ -2,8 +2,10 @@
  This is the repository for the LinkedIn Learning course `Hands-On AI: Building and Deploying LLM-Powered Apps`. The full course is available from [LinkedIn Learning][lil-course-url].
 
  _See the readme file in the main branch for updated instructions and information._
- ## Lab2: Adding LLM to Chainlit App
- Now that we have a web interface working, we will add an LLM to our Chainlit app to get our simplified version of ChatGPT. We will be using [Langchain](https://python.langchain.com/docs/get_started/introduction) as the framework for this course. It provides easy abstractions and a wide variety of data connectors and interfaces for everything in LLM app development.
+ ## Lab3: Enabling PDF Loading in the Chainlit App
+ Building on top of the current simplified version of ChatGPT using Chainlit, we are now going to add PDF loading capabilities to the application.
+
+ Now that we have a web interface working, we will add an LLM to our Chainlit app to get our simplified version of ChatGPT. We will be using [Langchain](https://python.langchain.com/docs/get_started/introduction) as the framework for this course. It provides easy abstractions and a wide variety of data connectors and interfaces for everything in LLM app development.
 
  In this lab, we will be adding a Chat LLM to our Chainlit app using Langchain.
 
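Note that this commit only announces the Lab3 PDF work in the README; the loading code itself is not part of the diff below. For orientation, here is a minimal sketch of the direction Lab3 takes, assuming Chainlit's `cl.AskFileMessage` API and the `pypdf` package; the attribute exposing the uploaded bytes (`.content` here) varies across Chainlit versions, with newer releases returning a temp-file `.path` instead.

```python
# Hypothetical Lab3 sketch -- not part of this commit.
import io

import chainlit as cl
from pypdf import PdfReader


@cl.on_chat_start
async def on_chat_start():
    # Block the chat until the user uploads a PDF.
    files = await cl.AskFileMessage(
        content="Please upload a PDF file to begin.",
        accept=["application/pdf"],
    ).send()
    file = files[0]

    # Assumption: this Chainlit version exposes the raw bytes as `.content`;
    # newer versions provide a temp-file `.path` instead.
    reader = PdfReader(io.BytesIO(file.content))
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Stash the extracted text in the session for later questions.
    cl.user_session.set("pdf_text", text)
    await cl.Message(
        content=f"Loaded `{file.name}` ({len(reader.pages)} pages)."
    ).send()
```

Stashing the extracted text in `cl.user_session` mirrors how the chain is cached per session in the app code below.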
app/app.py CHANGED
@@ -4,30 +4,14 @@ from langchain.prompts import ChatPromptTemplate
  from langchain.schema import StrOutputParser
  from langchain.chains import LLMChain
 
+
  @cl.on_chat_start
  async def on_chat_start():
-     ##########################################################################
-     # Exercise 1a:
-     # Our Chainlit app should initialize the LLM chat via Langchain at the
-     # start of a chat session.
-     #
-     # First, we need to choose an LLM from OpenAI's list of models. Remember
-     # to set streaming=True so that tokens are streamed.
-     ##########################################################################
      model = ChatOpenAI(
          model="gpt-3.5-turbo-1106",
          streaming=True
      )
 
-     ##########################################################################
-     # Exercise 1b:
-     # Next, we need to set the prompt template for chat. Prompt templates
-     # are how we define prompts and then inject information into them.
-     #
-     # Please create the prompt template using ChatPromptTemplate, with
-     # "question" as the variable name in the template.
-     # Refer to the documentation listed in the README.md file for reference.
-     ##########################################################################
      prompt = ChatPromptTemplate.from_messages(
          [
              (
@@ -40,12 +24,6 @@ async def on_chat_start():
              ),
          ]
      )
-     ##########################################################################
-     # Exercise 1c:
-     # Now that we have a model and a prompt, let's build our Chain. A Chain
-     # is one or a series of LLM calls. We will use the default
-     # StrOutputParser to parse the LLM outputs.
-     ##########################################################################
      chain = LLMChain(llm=model, prompt=prompt, output_parser=StrOutputParser())
 
      # We are saving the chain in user_session, so we do not have to rebuild
@@ -59,12 +37,6 @@ async def main(message: cl.Message):
      # Let's load the chain from user_session
      chain = cl.user_session.get("chain")  # type: LLMChain
 
-     ##########################################################################
-     # Exercise 1d:
-     # Every time we receive a new user message, we will get the chain from
-     # user_session, run the chain with the user's question, and return the
-     # LLM response to the user.
-     ##########################################################################
      response = await chain.arun(
          question=message.content, callbacks=[cl.LangchainCallbackHandler()]
      )
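Taken together, the solution file after this commit has roughly the shape below. The prompt's message tuples fall outside the hunks above, so the system and human templates here are illustrative placeholders rather than the course's exact wording, and the final `cl.Message(...).send()` is the usual Chainlit pattern for returning the answer, which the last hunk cuts off before showing.

```python
# Approximate shape of app/app.py after this commit; the prompt wording is a
# placeholder, since the diff context hides the real messages.
import chainlit as cl
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.chains import LLMChain


@cl.on_chat_start
async def on_chat_start():
    model = ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant."),  # placeholder text
            ("human", "{question}"),  # matches the "question" chain variable
        ]
    )
    chain = LLMChain(llm=model, prompt=prompt, output_parser=StrOutputParser())
    # Build the chain once per session and cache it.
    cl.user_session.set("chain", chain)


@cl.on_message
async def main(message: cl.Message):
    chain = cl.user_session.get("chain")  # type: LLMChain
    response = await chain.arun(
        question=message.content, callbacks=[cl.LangchainCallbackHandler()]
    )
    # Send the final answer back to the UI; intermediate tokens stream
    # through the Langchain callback handler.
    await cl.Message(content=response).send()
```

Assuming the standard Chainlit workflow, the app would be started with `chainlit run app/app.py -w` so that edits hot-reload during development.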