HanLee committed on
Commit 324b092 · 1 Parent(s): 3a5a4c6

feat: 01_07e

Files changed (2):
  1. README.md +18 -19
  2. app/app.py +66 -12
README.md CHANGED
@@ -2,37 +2,36 @@
  This is the repository for the LinkedIn Learning course `Hands-On AI: Building and Deploying LLM-Powered Apps`. The full course is available from [LinkedIn Learning][lil-course-url].

  _See the readme file in the main branch for updated instructions and information._
- ## Lab1: Introduction to Chainlit
- We will be using [Chainlit](https://docs.chainlit.io/get-started/overview) as the frontend framework to develop our LLM-powered applications. Chainlit is an open-source Python package that makes it incredibly fast to build ChatGPT-like applications with your own business logic and data.
-
- In this lab, we will put up a very simple Chainlit application that echoes a user's query.
-
- For example, if the user says
-
- ```
- hello
- ```
-
- our Chainlit app will respond with
-
- ```
- Received: hello
- ```
-
- The learning objective is to become familiar with Chainlit's framework and to launch the application.
-
- ## Exercises
-
- We have created some template code in `app/app.py` in the `app` folder.
-
- 1. Please go through [Chainlit's documentation](https://docs.chainlit.io/get-started/pure-python) and answer the questions in `app/app.py`.
-
- 2. Please launch the application by running the following command on the Terminal:
+ ## Lab2: Adding an LLM to the Chainlit App
+ Now that we have a web interface working, we will add an LLM to our Chainlit app to build our simplified version of ChatGPT. We will be using [Langchain](https://python.langchain.com/docs/get_started/introduction) as the framework for this course. It provides easy abstractions and a wide variety of data connectors and interfaces for every part of LLM app development.
+
+ In this lab, we will add a chat LLM to our Chainlit app.
+
+ ## Exercises
+
+ We will build on top of our existing Chainlit app code in `app/app.py` in the `app` folder. As in our previous app, we have added some template code and instructions in `app/app.py`.
+
+ 1. Please go through the exercises in `app/app.py`.
+
+ 2. Please launch the application by running the following command on the Terminal:

  ```bash
  chainlit run app/app.py -w
  ```

  ## Solution

- Please see app/app.py.
+ Please see `app/app.py`.
+
+ Alternatively, to launch the application, please run the following command on the Terminal:
+
+ ```bash
+ chainlit run app/app.py -w
+ ```
+
+ ## References
+
+ - [Langchain's Prompt Template](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/#chatprompttemplate)
+ - [Langchain documentation](https://python.langchain.com/docs/modules/chains/foundational/llm_chain#legacy-llmchain)
+ - [Chainlit's documentation](https://docs.chainlit.io/get-started/pure-python)
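
The new lab's prompt template injects the user's question into a fixed list of chat messages via a `{question}` placeholder. As a rough stdlib-only sketch of that substitution step (an illustration of the idea, not LangChain's actual `ChatPromptTemplate` implementation):

```python
# Minimal sketch of what a chat prompt template does: hold a list of
# (role, template) messages and fill in placeholders at call time.
# Illustrative only -- not LangChain's real ChatPromptTemplate.

def format_chat_prompt(messages, **variables):
    """Return the message list with {placeholders} substituted."""
    return [(role, template.format(**variables)) for role, template in messages]

template = [
    ("system", "You are Chainlit GPT, a helpful assistant."),
    ("human", "{question}"),
]

formatted = format_chat_prompt(template, question="hello")
# The human message now carries the user's question: ("human", "hello")
```

LangChain's real template additionally validates variables and produces typed message objects, but the role-plus-placeholder structure is the same shape used in the exercise.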
app/app.py CHANGED
@@ -1,19 +1,73 @@
  import chainlit as cl
+ from langchain.chat_models import ChatOpenAI
+ from langchain.prompts import ChatPromptTemplate
+ from langchain.schema import StrOutputParser
+ from langchain.chains import LLMChain
+
+ @cl.on_chat_start
+ async def on_chat_start():
+     ##########################################################################
+     # Exercise 1a:
+     # Our Chainlit app should initialize the chat LLM via Langchain at the
+     # start of a chat session.
+     #
+     # First, we need to choose an LLM from OpenAI's list of models. Remember
+     # to set streaming=True for streaming tokens.
+     ##########################################################################
+     model = ChatOpenAI(
+         model="gpt-4-1106-preview",
+         streaming=True,
+     )
+
+     ##########################################################################
+     # Exercise 1b:
+     # Next, we need to set the prompt template for chat. Prompt templates are
+     # how we define prompts and then inject information into them.
+     #
+     # Please create the prompt template using ChatPromptTemplate. Use the
+     # variable name "question" as the variable in the template.
+     # Refer to the documentation listed in the README.md file for reference.
+     ##########################################################################
+     prompt = ChatPromptTemplate.from_messages(
+         [
+             (
+                 "system",
+                 "You are Chainlit GPT, a helpful assistant.",
+             ),
+             (
+                 "human",
+                 "{question}",
+             ),
+         ]
+     )
+
+     ##########################################################################
+     # Exercise 1c:
+     # Now that we have a model and a prompt, let's build our Chain. A Chain
+     # is one or a series of LLM calls. We will use the default
+     # StrOutputParser to parse the LLM outputs.
+     ##########################################################################
+     chain = LLMChain(llm=model, prompt=prompt, output_parser=StrOutputParser())
+
+     # We save the chain in user_session so we do not have to rebuild it
+     # every single time.
+     cl.user_session.set("chain", chain)

- ##############################################################################
- # Exercise 1a:
- # Please add the proper decorator to this main function so Chainlit will call
- # this function when it receives a message
- ##############################################################################
  @cl.on_message
  async def main(message: cl.Message):
-     ##########################################################################
-     # Exercise 1b:
-     # Please get the content of the chainlit Message and send it back as a
-     # response
-     ##########################################################################
-     response = message.content
+     # Load the chain from user_session
+     chain = cl.user_session.get("chain")  # type: LLMChain
+
+     ##########################################################################
+     # Exercise 1d:
+     # Every time we receive a new user message, we get the chain from
+     # user_session, run the chain with the user's question, and return the
+     # LLM response to the user.
+     ##########################################################################
+     response = await chain.arun(
+         question=message.content, callbacks=[cl.LangchainCallbackHandler()]
+     )

-     await cl.Message(content=f"Received: {response}").send()
+     await cl.Message(content=response).send()
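
The diff's `on_chat_start`/`on_message` handlers build the chain once per session, store it with `cl.user_session.set`, and fetch it on every message. That caching pattern can be sketched with a plain dict standing in for Chainlit's session object and a stub function standing in for the chain (names here are illustrative, not Chainlit's or LangChain's API):

```python
# Sketch of the per-session caching pattern: construct the expensive object
# (the chain) once at chat start, store it, and reuse it on every message.
# UserSession is a stand-in for cl.user_session; build_chain for LLMChain.

class UserSession:
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

build_count = 0  # counts how many times the "chain" gets constructed

def build_chain():
    global build_count
    build_count += 1
    return lambda question: f"Answer to: {question}"  # stub "LLM chain"

session = UserSession()

def on_chat_start():
    # Runs once per chat session, like the @cl.on_chat_start handler.
    session.set("chain", build_chain())

def on_message(text):
    # Runs per message, like the @cl.on_message handler: reuse, don't rebuild.
    chain = session.get("chain")
    return chain(text)

on_chat_start()
replies = [on_message("hello"), on_message("how are you?")]
# build_chain ran exactly once, even though two messages were handled
```

Without this, every message would re-instantiate the model and prompt, which is wasted work and would also discard any per-session state the chain accumulates.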