Runtime error
Update prompts.yaml
prompts.yaml +41 -0
prompts.yaml
CHANGED
@@ -21,6 +21,47 @@
 ```<end_code>
 Observation: "The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland."
 
+
+Task: "hi, provide the snippet code to launch RAG system using langhain"
+
+Thought: I need to provide a Python code snippet that demonstrates a basic RAG (Retrieval Augmented Generation) system using Langchain. This system will employ DuckDuckGo for the retrieval part, as indicated by search_duckduckgo(), and a language model to generate an answer based on the search results. The code will show the necessary imports, initialization of components (LLM, DuckDuckGo tool), and how to create and run an agent that uses these components to answer a query.
+Code:
+```py
+result = search_duckduckgo("LangChain RAG system example code")
+print(result)
+```<end_code>
+Observation:
+```py
+from langchain.chains import RetrievalQA
+from langchain.llms import OpenAI
+from langchain.vectorstores import FAISS
+from langchain.embeddings import OpenAIEmbeddings
+from langchain.document_loaders import TextLoader
+from langchain.text_splitter import CharacterTextSplitter
+
+# Load and split documents
+loader = TextLoader("data.txt")
+documents = loader.load()
+text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=100)
+texts = text_splitter.split_documents(documents)
+
+# Create vector store
+embeddings = OpenAIEmbeddings()
+vectorstore = FAISS.from_documents(texts, embeddings)
+
+# Set up RAG chain
+qa_chain = RetrievalQA.from_chain_type(
+    llm=OpenAI(),
+    chain_type="stuff",
+    retriever=vectorstore.as_retriever()
+)
+
+# Run the RAG system
+query = "What does the document say about John Doe?"
+response = qa_chain.run(query)
+print(response)
+```<end_code>
+
 Thought: I will now generate an image showcasing the oldest person.
 Code:
 ```py
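The committed prompt's Observation block sketches the standard LangChain RAG pipeline: load documents, split them into chunks, embed the chunks into a vector store, then retrieve the most relevant chunks to ground the LLM's answer. As a library-free illustration of the retrieval step alone, here is a minimal sketch using toy term-frequency "embeddings" and cosine similarity; all names (`embed`, `retrieve`, the sample chunks) are illustrative and not part of the committed file:

```python
# Toy illustration of the "retriever" role in a RAG pipeline, standard
# library only. Real systems use learned embeddings; word counts stand
# in for them here to show the mechanics.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": a term-frequency vector over lowercased word tokens.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(chunks: list[str], query: str) -> str:
    # Return the chunk most similar to the query; this is the role that
    # vectorstore.as_retriever() plays in the LangChain snippet above.
    q = embed(query)
    return max(chunks, key=lambda c: cosine(embed(c), q))


chunks = [
    "John Doe is a 55 year old lumberjack living in Newfoundland.",
    "The weather in Paris is mild in spring.",
]
print(retrieve(chunks, "What does the document say about John Doe?"))
```

With real embeddings the scoring function changes, but the shape of the step is the same: score every chunk against the query and hand the best matches to the LLM.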