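# Multi-agent pipeline for Python code completion (HumanEval-style, judging by
# the humaneval-* output parsers below): a role assigner recruits expert
# critics, a solver drafts a completion, critics review it, an executor writes
# and runs unit tests, an evaluator scores the result, and a manager picks
# which critic speaks next.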
cnt_agents: &cnt_agents 4
max_turn: &max_turn 2
max_criticizing_rounds: 2
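
# Note: values anchored with `&name` above (and the prompt templates below) are
# reused later in this file via standard YAML aliases (`*name`), so editing a
# value here propagates to every agent that references it.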

prompts:
  role_assigner_prepend_prompt: &role_assigner_prepend_prompt |-
    # Role Description
    You are the leader of a group of experts. You now need to recruit a small group of experts with diverse identities to correctly write the code that solves the given problem:
    ${task_description}

    You can recruit ${cnt_critic_agents} experts in different fields. Which experts will you recruit to better generate an accurate solution?

    Here are some suggestions:
    ${advice}

  role_assigner_append_prompt: &role_assigner_append_prompt |-
    # Response Format Guidance
    You should respond with a list of expert descriptions. For example:
    1. an electrical engineer specialized in the field of xxx.
    2. an economist who is good at xxx.
    3. a lawyer with a good knowledge of xxx.
    ...

    Only respond with the description of each role. Do not include your reasons.

  solver_prepend_prompt: &solver_prepend_prompt |-
    Can you complete the following code?
    ```python
    ${task_description}
    ```

#    # Previous Solution
#    The solution you gave in the last step is:
#    ${former_solution}

#    # Evaluation
#    The unit testing gave the following suggestion:
#    ${advice}

#    # Critics
#    The following messages are the critics' comments on the provided solution:
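
# The commented-out block above looks like an optional iterative-refinement
# variant of the solver prompt (feeding back the previous solution, the unit
# test advice, and the critics' comments); it is disabled here.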

  solver_append_prompt: &solver_append_prompt |-
    You are ${role_description}. Provide a correct completion of the code, and explain your reasoning in code comments. Apart from that, your response should contain only Python code; do not give any additional information. Use ```python to put the completed Python code in markdown quotes. When responding, include both the given code and your completion.

  critic_prepend_prompt: &critic_prepend_prompt |-
    You are in a discussion group, aiming to complete the following code function:
    ```python
    ${task_description}
    ```

#    Below is a possible code completion:
#    ```
#    ${preliminary_solution}
#    ```

  critic_append_prompt: &critic_append_prompt |-
    You are ${role_description}. Based on your knowledge, can you check the functional correctness of the latest completion given above? When responding, follow these rules:
    1. If the latest provided solution is correct, end your response with the special token "[Agree]".
    2. If the solution is incorrect, give your comments and end your response with the special token "[Disagree]".
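
# The "[Agree]"/"[Disagree]" tokens are presumably what the critic's output
# parser (mgsm-critic-agree, configured below) keys on when deciding whether
# another criticizing round is needed.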

  manager_prompt: &manager_prompt |-
    Based on the Previous Solution and the Previous Sentences below, select the most appropriate critic role and output that role.
    ```python
    ${task_description}
    ```
    # Previous Solution
    The solution you gave in the last step is:
    ${former_solution}

    # Critics
    Here are the critics' comments on the above solution:
    ```
    ${critic_opinions}
    ```

    # Previous Sentences
    The previous sentences in the previous rounds are:
    ${previous_sentence}
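
# The manager's reply is consumed by the humaneval-manager output parser
# configured below; presumably it must name exactly one recruited critic role.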

  executor_prepend_prompt: &executor_prepend_prompt |-
    You are an experienced program tester. Now your team is trying to solve the problem:
    '''
    Complete the Python function:
    ${task_description}
    '''

  executor_append_prompt: &executor_append_prompt |-
    The solution has been written to `tmp/main.py`. You are going to write the unit-testing code for the solution. You should respond in the following format:
    Thought: your thought
    Reasoning: your reasoning on the test cases
    Criticism: constructive self-criticism
    File Path: the path to write your testing code to
    Code: the testing code, with an explanation in the docstring. Make sure to write the inputs in the assertions so that they appear in the unit test report, and make sure the expected answers are correct.
    Command: the command to change directory and execute your testing code
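
# For illustration only (hypothetical content; real replies depend on the
# task), a response in this format might look like:
#   Thought: I should cover normal inputs and edge cases.
#   Reasoning: an empty input and a single-element input both need a test.
#   Criticism: I may be missing extreme or malformed inputs.
#   File Path: tmp/test_main.py
#   Code: a docstring plus assertions such as `assert solution([]) == []`
#   Command: cd tmp && python test_main.py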

  evaluator_prepend_prompt: &evaluator_prepend_prompt |-
    # Problem
    Complete the following function:
    ```python
    ${task_description}
    ```

    # Experts
    The experts recruited in this turn include:
    ${all_role_description}

    # Writer's Solution
    ${solution}

    # Tester's Feedback
    ${result}

  evaluator_append_prompt: &evaluator_append_prompt |-
    # Response Format Guidance
    You must respond in the following format:
    Score: (0 or 1; 0 for incorrect and 1 for correct)
    Response: (give your advice on how to correct the solution, and your suggestion on which experts should be recruited in the next round)
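
# The ${...} placeholders above are not YAML syntax; they are presumably filled
# in at run time by the framework's prompt templating (e.g. Python
# string.Template-style substitution).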

name: pipeline

environment:
  env_type: task-basic
  max_turn: *max_turn
  rule:
    role_assigner:
      type: role_description
      cnt_agents: *cnt_agents
    decision_maker:
      type: vertical-solver-first
    executor:
      type: code-test
    evaluator:
      type: basic
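
# The rule block above appears to wire one turn of the pipeline: the role
# assigner writes the critics' role descriptions, the solver answers before the
# critics discuss (vertical-solver-first), the code-test executor runs the unit
# tests, and the basic evaluator scores the result; up to max_turn turns are
# attempted.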

agents:
  - #role_assigner_agent:
    agent_type: role_assigner
    name: role assigner
    max_retry: 1000
    prepend_prompt_template: *role_assigner_prepend_prompt
    append_prompt_template: *role_assigner_append_prompt
    memory:
      memory_type: chat_history
    llm:
      llm_type: gpt-4
      model: "gpt-4"
      temperature: 0
      max_tokens: 512
    output_parser:
      type: role_assigner

  - #solver_agent:
    agent_type: solver
    name: Planner
    max_retry: 1000
    prepend_prompt_template: *solver_prepend_prompt
    append_prompt_template: *solver_append_prompt
    memory:
      memory_type: chat_history
    llm:
      llm_type: gpt-4
      model: "gpt-4"
      temperature: 0
      max_tokens: 1024
    output_parser:
      type: humaneval-solver
      # stop:
      #   - "\ndef "
      #   - "\nclass "
      #   - "\nif "
      #   - "\n\n#"

  - #critic_agents:
    agent_type: critic
    name: Critic 1
    max_retry: 1000
    role_description: |-
      Waiting to be assigned.
    prepend_prompt_template: *critic_prepend_prompt
    append_prompt_template: *critic_append_prompt
    memory:
      memory_type: chat_history
    llm:
      llm_type: gpt-4
      model: "gpt-4"
      temperature: 0
      max_tokens: 1024
    output_parser:
      type: mgsm-critic-agree

  - #executor_agent:
    agent_type: executor
    name: Executor
    max_retry: 1000
    prepend_prompt_template: *executor_prepend_prompt
    append_prompt_template: *executor_append_prompt
    memory:
      memory_type: chat_history
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0
      max_tokens: 1024
    output_parser:
      type: humaneval-executor

  - #evaluator_agent:
    agent_type: evaluator
    name: Evaluator
    max_retry: 1000
    role_description: |-
      Evaluator
    prepend_prompt_template: *evaluator_prepend_prompt
    append_prompt_template: *evaluator_append_prompt
    memory:
      memory_type: chat_history
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.3
      max_tokens: 1024
    output_parser:
      type: mgsm-evaluator
      dimensions:
        - Score
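      # An assumption about the parser: `dimensions` names the labeled fields
      # to extract from the evaluator's reply, matching the "Score:" line that
      # evaluator_append_prompt requires.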

  - #manager_agent:
    agent_type: manager
    name: Manager
    max_retry: 1000
    prompt_template: *manager_prompt
    memory:
      memory_type: chat_history
    llm:
      llm_type: gpt-4
      model: "gpt-4"
      temperature: 0
      max_tokens: 1024
    output_parser:
      type: humaneval-manager
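
# Usage sketch (an assumption: this file follows the AgentVerse tasksolving
# config layout; the CLI name, task path, and Python API below may differ in
# your setup):
#
#   agentverse-tasksolving --task tasksolving/humaneval/gpt-4
#
# or, from Python (module path and signature are likewise assumptions):
#
#   from agentverse.tasksolving import TaskSolving
#   pipeline = TaskSolving.from_task("tasksolving/humaneval/gpt-4", "agentverse/tasks")
#   pipeline.run()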