Update prompts.py
prompts.py  CHANGED  (+33 -30)

@@ -1,55 +1,57 @@
 # RAG prompt
-rag_prompt = """
-a clear concise and meaningful answer for the QUERY
-
+rag_prompt = """You are a helpful assistant proficient in formulating clear and meaningful answers from the provided context.
+Based on the CONTEXT, please formulate a clear, concise, and meaningful answer for the QUERY.
+Do not make up an answer if the CONTEXT is insufficient. Instead, respond with: "I do not know."
+
 QUERY:
 {query}
-
+
+CONTEXT:
 {context}
+
 ANSWER:
 """
 
+
 # Context Relevancy Checker Prompt
-relevancy_prompt = """You are an expert judge tasked with evaluating whether
-Analyze the provided QUERY
+relevancy_prompt = """You are an expert judge tasked with evaluating whether EACH CONTEXT in the CONTEXT LIST is sufficient to answer the QUERY.
+Analyze the provided QUERY and CONTEXT carefully.
 
 Guidelines:
-1. The content must not introduce new information beyond what
+1. The content must not introduce new information beyond what is provided in the QUERY.
 2. Pay close attention to the subject of statements. Ensure that attributes, actions, or dates are correctly associated with the right entities (e.g., a person vs. a TV show they star in).
-
-
+3. Be vigilant for subtle misattributions or conflations of information, even if the date or other details are correct.
+4. Check that the content in the CONTEXT LIST does not oversimplify or generalize information in a way that changes the meaning of the QUERY.
 
-
-- 0: The content has all the necessary information to answer the QUERY
-- 1: The content does not
+Assign a relevancy score:
+- 0: The content has all the necessary information to answer the QUERY.
+- 1: The content does not have the necessary information to answer the QUERY.
 
-```
 EXAMPLE:
 
 INPUT (for context only, not to be used for faithfulness evaluation):
-What is the capital of France?
-
+QUERY: What is the capital of France?
 CONTEXT:
-
-
+1. "France is a country in Western Europe. Its capital is Paris."
+2. "Mr. Naveen Patnaik has been the Chief Minister of Odisha for five terms."
 
 OUTPUT:
-
-
-
-
-```
-
-CONTENT LIST:
-{context}
+[
+{"content": 1, "score": 0, "reasoning": "Paris is correctly mentioned as the capital."},
+{"content": 2, "score": 1, "reasoning": "Unrelated to the query."}
+]
 
 QUERY:
 {retriever_query}
-Provide your verdict in JSON format with a single key 'score' and no preamble or explanation:
-[{{"content:1,"score": <your score either 0 or 1>,"Reasoning":<why you have chose the score as 0 or 1>}},
-{{"content:2,"score": <your score either 0 or 1>,"Reasoning":<why you have chose the score as 0 or 1>}},
-...]
 
+CONTEXT LIST:
+{context}
+
+Provide your verdict in JSON format with a single key 'score' and no preamble or explanation:
+[
+{"content": <content_number>, "score": <0 or 1>, "reasoning": "<explanation>"},
+{"content": <content_number>, "score": <0 or 1>, "reasoning": "<explanation>"}
+]
 """
 
 # Relevant Context Picker Prompt
@@ -82,4 +84,5 @@ Provide your verdict in JSON format with a two key 'relevant_content' and 'cont
 [{{"context_number":<content1>,"relevant_content":<content corresponing to that element 1 in the Content List>}},
 {{"context_number":<content4>,"relevant_content":<content corresponing to that element 4 in the Content List>}},
 ...
-]"""
+]"""
+
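One caveat worth flagging as an observation rather than a fix: the updated relevancy_prompt embeds literal JSON braces (e.g. the example OUTPUT list) without doubling them, so filling it with str.format() would raise a KeyError. The sketch below, again with the hypothetical call_llm and an assumed "1. ..." numbering for the CONTEXT LIST, sidesteps that by substituting the two placeholders directly and then parsing the verdict with json.loads.

# Sketch (assumptions as above): run the relevancy check and keep contexts scored 0.
import json

from prompts import relevancy_prompt


def check_relevancy(query: str, contexts: list[str], call_llm) -> list[dict]:
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(contexts, start=1))
    # str.replace instead of str.format: the template's literal JSON braces are not escaped.
    prompt = (relevancy_prompt
              .replace("{context}", numbered)
              .replace("{retriever_query}", query))
    verdicts = json.loads(call_llm(prompt))
    # Per the prompt, score 0 means the content is sufficient to answer the QUERY.
    return [v for v in verdicts if v.get("score") == 0]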
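Only the tail of the Relevant Context Picker prompt is visible in this diff, but its expected output shape is a JSON list of {{"context_number": ..., "relevant_content": ...}} objects. A speculative sketch of consuming that output and feeding the picked contexts back into rag_prompt, using the same hypothetical call_llm stand-in:

# Speculative end-to-end step: parse the picker's JSON reply and build the final answer.
import json

from prompts import rag_prompt


def answer_from_picked(query: str, picker_reply: str, call_llm) -> str:
    picked = json.loads(picker_reply)  # e.g. [{"context_number": 1, "relevant_content": "..."}]
    context = "\n\n".join(item["relevant_content"] for item in picked)
    return call_llm(rag_prompt.format(query=query, context=context))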