abishekcodes committed on
Commit 0fa3d60 · verified · 1 Parent(s): ff2a310

Update README.md

Files changed (1)
  1. README.md +8 -54
README.md CHANGED
@@ -1,54 +1,8 @@
- # LLM x Website URLs
-
- Most language models, including GPT-4 and Gemini Pro, lack up-to-date information because they were pretrained on a fixed dataset.
-
- In some cases, we need real-time information from websites. LLMs cannot access it by default, but through Retrieval-Augmented Generation (RAG) we can let them use the data we provide.
-
- Here, we use LangChain's web loader to fetch the site's content and store it in a vector store, which we then query to get answers to our questions.
-
- ## Usage Instructions:
-
- 1. **Install Dependencies:**
- ```bash
- pip install -r requirements.txt
- ```
-
- 2. **Run the Application:**
- ```bash
- python app.py
- ```
-
- 3. **Access the Gradio Interface:**
- - Open your web browser and navigate to the provided URL.
-
- 4. **Input Parameters:**
- - Query: Enter your question or input.
- - URL: Provide the website URL for context.
- - OpenAI Key: Enter your OpenAI key for authentication.
-
- 5. **Interact with the Chatbot:**
- - Receive responses based on your input and the website context.
-
- ## Code Overview:
-
- - **app.py:** Python script containing the main application code.
- - **Functions:**
- - `getvecstore(url)`: Builds a vector store from the given website URL.
- - `getcontext(vector_store)`: Obtains a context-aware retriever chain.
- - `getragchain(retriever_chain)`: Builds a conversational RAG (Retrieval-Augmented Generation) chain.
- - `getresp(user_input, website_url)`: Gets a response from the chatbot based on the user input and website URL.
- - `interact(Query, URL, OpenAI_Key)`: Interacts with the chatbot through the Gradio interface.
-
- ## Future Improvements:
-
- For future enhancements, I am considering a fully local and private workflow using the following technologies:
-
- - **Ollama for Embeddings:** Explore Ollama for embeddings to improve local processing and privacy.
- - **Local LLM Inference:** Running local LLM inference through LM Studio or Ollama could make the app fully private to our own system.
-
- ## Important Note:
-
- - Keep your OpenAI key secure and avoid sharing it publicly.
- - Implement additional security measures for handling sensitive information.
-
- Feel free to explore and contribute to the codebase for further improvements and customization.
 
+ title: QueryURL
+ emoji: 🔗
+ colorFrom: red
+ colorTo: blue
+ sdk: gradio
+ sdk_version: 3.8.2
+ app_file: app.py
+ pinned: false
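The removed README describes a retrieve-then-answer flow: load a web page, index it into a vector store, then retrieve the most relevant chunk as context for a query. A minimal dependency-free sketch of that idea, using word-overlap scoring as a stand-in for real embeddings (all function names here mirror the README's descriptions but are illustrative, not the app's actual code):

```python
# Toy sketch of the retrieve-then-answer (RAG) flow the old README describes.
# The real app uses LangChain's web loader plus an embedding-backed vector
# store; here we fake similarity with bag-of-words overlap for illustration.

def split_into_chunks(text, size=12):
    """Split a document into small word chunks (stand-in for a text splitter)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, chunk):
    """Similarity stand-in: number of lowercase words shared by query and chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def get_vec_store(page_text):
    """Rough analogue of getvecstore(url): index fetched page text into chunks."""
    return split_into_chunks(page_text)

def get_resp(user_input, store, top_k=1):
    """Rough analogue of getresp: retrieve the best-matching chunk(s) as context."""
    ranked = sorted(store, key=lambda chunk: score(user_input, chunk), reverse=True)
    return ranked[:top_k]

# Pretend this text was fetched from the user-supplied URL.
page = ("Gradio lets you build a web interface for a model in a few lines. "
        "LangChain chains a retriever with a language model to answer questions.")
store = get_vec_store(page)
context = get_resp("How does LangChain answer questions?", store)
print(context[0])
```

In the real app, the retrieved context is then passed to the LLM along with the user's question; this sketch stops at retrieval, which is the part the vector store is responsible for.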