SpencerCPurdy committed
Commit: df5964c (verified) · Parent: 0b06edf

Create app.py

Files changed (1):
  1. app.py +1720 -0
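The file below wires five specialized agents (Researcher, Analyst, Critic, Synthesizer, Coordinator) into a workflow in which the coordinator runs subtasks concurrently in dependency order. A minimal, self-contained sketch of that scheduling idea follows; `MiniTask` and `run_wavefront` are illustrative names invented here, not part of the file, and the sleep stands in for the real LLM call:

```python
# Sketch of dependency-aware "wavefront" scheduling: every task whose
# dependencies are complete runs concurrently via asyncio.gather.
import asyncio
from dataclasses import dataclass, field
from typing import List

@dataclass
class MiniTask:  # simplified stand-in for the Task dataclass in app.py
    id: str
    dependencies: List[str] = field(default_factory=list)

async def run_wavefront(tasks: List[MiniTask]) -> List[List[str]]:
    """Return task ids grouped by the wave in which they executed."""
    completed, pending, waves = set(), tasks.copy(), []

    async def work(task: MiniTask) -> str:
        await asyncio.sleep(0)  # placeholder for the real async agent call
        return task.id

    while pending:
        # Tasks whose dependencies are all satisfied can run now.
        ready = [t for t in pending if all(d in completed for d in t.dependencies)]
        if not ready:  # deadlock prevention: force the first pending task
            ready = [pending[0]]
        done = await asyncio.gather(*(work(t) for t in ready))
        waves.append(list(done))
        for t in ready:
            completed.add(t.id)
            pending.remove(t)
    return waves

# Researcher output feeds the Analyst, which feeds the Synthesizer:
tasks = [MiniTask("research"), MiniTask("analyze", ["research"]),
         MiniTask("synthesize", ["analyze"])]
print(asyncio.run(run_wavefront(tasks)))
# → [['research'], ['analyze'], ['synthesize']]
```

Independent tasks in the same wave (e.g. two research subtasks) would appear in one inner list, which is the parallelism the real coordinator exploits.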
app.py ADDED
@@ -0,0 +1,1720 @@
# Multi-Agent AI Collaboration System
# Author: Spencer Purdy
# Description: Enterprise-grade multi-agent system with specialized AI agents collaborating
# to solve complex problems through task decomposition and parallel processing.

# Installation (uncomment for Google Colab)
# !pip install gradio langchain langchain-openai openai networkx matplotlib aiohttp pandas numpy plotly python-dotenv pydantic

import os
import json
import time
import asyncio
import hashlib
import logging
from datetime import datetime
from typing import Dict, List, Tuple, Optional, Any, Union, Set
from dataclasses import dataclass, field
from enum import Enum
import warnings
warnings.filterwarnings('ignore')

# Core libraries
import gradio as gr
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from matplotlib.patches import FancyBboxPatch
import plotly.graph_objects as go
from plotly.subplots import make_subplots

# LangChain and AI libraries
from langchain.schema import BaseMessage, HumanMessage, AIMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langchain.callbacks.base import BaseCallbackHandler
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.memory import ConversationBufferMemory
from pydantic import BaseModel, Field

# Async libraries
import aiohttp
from concurrent.futures import ThreadPoolExecutor

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

class Config:
    """Configuration settings for the multi-agent system."""

    # Model settings
    DEFAULT_MODEL = "gpt-4"
    TEMPERATURE = 0.7
    MAX_TOKENS = 1500

    # Agent settings
    MAX_ITERATIONS = 10
    COLLABORATION_TIMEOUT = 300  # seconds

    # Visualization settings
    GRAPH_UPDATE_INTERVAL = 0.5  # seconds
    NODE_COLORS = {
        'Researcher': '#3498db',
        'Analyst': '#e74c3c',
        'Critic': '#f39c12',
        'Synthesizer': '#2ecc71',
        'Coordinator': '#9b59b6'
    }

    # Report settings
    CONFIDENCE_THRESHOLD = 0.7
    MAX_REPORT_SECTIONS = 10

class AgentRole(Enum):
    """Enumeration of agent roles in the system."""
    RESEARCHER = "Researcher"
    ANALYST = "Analyst"
    CRITIC = "Critic"
    SYNTHESIZER = "Synthesizer"
    COORDINATOR = "Coordinator"

class TaskStatus(Enum):
    """Task execution status."""
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    FAILED = "failed"

@dataclass
class Task:
    """Represents a task to be executed by agents."""
    id: str
    description: str
    assigned_to: Optional[str] = None
    status: TaskStatus = TaskStatus.PENDING
    dependencies: List[str] = field(default_factory=list)
    result: Optional[Any] = None
    confidence: float = 0.0
    created_at: datetime = field(default_factory=datetime.now)
    completed_at: Optional[datetime] = None
    metadata: Dict[str, Any] = field(default_factory=dict)

@dataclass
class AgentMessage:
    """Message passed between agents."""
    sender: str
    recipient: str
    content: str
    message_type: str = "task"
    metadata: Dict[str, Any] = field(default_factory=dict)
    timestamp: datetime = field(default_factory=datetime.now)

class AgentMemory:
    """Manages agent conversation history and context."""

    def __init__(self, max_messages: int = 50):
        self.messages: List[AgentMessage] = []
        self.max_messages = max_messages
        self.context: Dict[str, Any] = {}

    def add_message(self, message: AgentMessage):
        """Add a message to memory."""
        self.messages.append(message)
        if len(self.messages) > self.max_messages:
            self.messages.pop(0)

    def get_recent_messages(self, n: int = 10) -> List[AgentMessage]:
        """Get the n most recent messages."""
        return self.messages[-n:]

    def get_messages_by_sender(self, sender: str) -> List[AgentMessage]:
        """Get all messages from a specific sender."""
        return [msg for msg in self.messages if msg.sender == sender]

    def update_context(self, key: str, value: Any):
        """Update context information."""
        self.context[key] = value

    def get_context(self, key: str) -> Any:
        """Get context information."""
        return self.context.get(key)

class BaseAgent:
    """Base class for all AI agents in the system."""

    def __init__(self, name: str, role: AgentRole, llm: ChatOpenAI):
        self.name = name
        self.role = role
        self.llm = llm
        self.memory = AgentMemory()
        self.active = True
        self.current_task: Optional[Task] = None
        self.completed_tasks: List[Task] = []

    async def process_task(self, task: Task) -> Task:
        """Process a task and return the result."""
        self.current_task = task
        task.status = TaskStatus.IN_PROGRESS
        task.assigned_to = self.name

        try:
            # Execute task based on agent role
            result = await self._execute_task(task)

            task.result = result
            task.status = TaskStatus.COMPLETED
            task.completed_at = datetime.now()
            task.confidence = self._calculate_confidence(result)

            self.completed_tasks.append(task)

        except Exception as e:
            logger.error(f"Agent {self.name} failed to process task {task.id}: {str(e)}")
            task.status = TaskStatus.FAILED
            task.result = f"Error: {str(e)}"
            task.confidence = 0.0

        finally:
            self.current_task = None

        return task

    async def _execute_task(self, task: Task) -> Any:
        """Execute the task - to be implemented by subclasses."""
        raise NotImplementedError("Subclasses must implement _execute_task")

    def _calculate_confidence(self, result: Any) -> float:
        """Calculate confidence score for the result."""
        # Basic confidence calculation - can be overridden by subclasses
        if result and isinstance(result, str) and len(result) > 50:
            return min(0.9, 0.5 + len(result) / 1000)
        return 0.5

    async def collaborate(self, other_agent: 'BaseAgent', message: AgentMessage):
        """Handle collaboration with another agent."""
        self.memory.add_message(message)

        # Process collaboration request
        response = await self._process_collaboration(message)

        # Send response back
        response_message = AgentMessage(
            sender=self.name,
            recipient=other_agent.name,
            content=response,
            message_type="response"
        )

        other_agent.memory.add_message(response_message)

        return response_message

    async def _process_collaboration(self, message: AgentMessage) -> str:
        """Process collaboration message - to be implemented by subclasses."""
        return f"Acknowledged message from {message.sender}"

class ResearcherAgent(BaseAgent):
    """Agent specialized in researching and gathering information."""

    def __init__(self, name: str, llm: ChatOpenAI):
        super().__init__(name, AgentRole.RESEARCHER, llm)
        self.research_sources: List[str] = []

    async def _execute_task(self, task: Task) -> Any:
        """Execute research task."""
        prompt = ChatPromptTemplate.from_messages([
            SystemMessage(content="""You are a Research Agent specializing in gathering comprehensive information.
Your role is to:
1. Break down complex topics into research questions
2. Identify key information sources and data points
3. Provide detailed, factual information with citations where possible
4. Flag areas requiring further investigation"""),
            HumanMessage(content=f"Research the following: {task.description}")
        ])

        response = await self.llm.ainvoke(prompt.format_messages())

        # Extract research findings
        research_result = {
            "findings": response.content,
            "sources": self._extract_sources(response.content),
            "key_points": self._extract_key_points(response.content),
            "areas_for_investigation": self._identify_gaps(response.content)
        }

        return research_result

    def _extract_sources(self, content: str) -> List[str]:
        """Extract potential sources from research content."""
        # Simplified source extraction
        sources = []
        lines = content.split('\n')
        for line in lines:
            if any(keyword in line.lower() for keyword in ['source:', 'reference:', 'based on:', 'according to']):
                sources.append(line.strip())
        return sources[:5]  # Limit to top 5 sources

    def _extract_key_points(self, content: str) -> List[str]:
        """Extract key points from research."""
        key_points = []
        lines = content.split('\n')
        for line in lines:
            if line.strip() and (line.strip()[0].isdigit() or line.strip().startswith('-')):
                key_points.append(line.strip())
        return key_points[:10]  # Limit to top 10 points

    def _identify_gaps(self, content: str) -> List[str]:
        """Identify areas needing more research."""
        gaps = []
        keywords = ['unclear', 'requires further', 'need more', 'investigate', 'unknown']
        lines = content.split('\n')
        for line in lines:
            if any(keyword in line.lower() for keyword in keywords):
                gaps.append(line.strip())
        return gaps[:5]

class AnalystAgent(BaseAgent):
    """Agent specialized in analyzing data and identifying patterns."""

    def __init__(self, name: str, llm: ChatOpenAI):
        super().__init__(name, AgentRole.ANALYST, llm)
        self.analysis_methods: List[str] = ["statistical", "comparative", "trend", "causal"]

    async def _execute_task(self, task: Task) -> Any:
        """Execute analysis task."""
        prompt = ChatPromptTemplate.from_messages([
            SystemMessage(content="""You are an Analyst Agent specializing in data analysis and pattern recognition.
Your role is to:
1. Analyze information systematically and objectively
2. Identify patterns, trends, and correlations
3. Provide quantitative insights where possible
4. Draw logical conclusions based on evidence"""),
            HumanMessage(content=f"Analyze the following: {task.description}")
        ])

        response = await self.llm.ainvoke(prompt.format_messages())

        # Structure analysis results
        analysis_result = {
            "analysis": response.content,
            "patterns": self._identify_patterns(response.content),
            "insights": self._extract_insights(response.content),
            "recommendations": self._generate_recommendations(response.content),
            "confidence_metrics": self._calculate_analysis_confidence(response.content)
        }

        return analysis_result

    def _identify_patterns(self, content: str) -> List[Dict[str, str]]:
        """Identify patterns in the analysis."""
        patterns = []
        pattern_keywords = ['pattern', 'trend', 'correlation', 'relationship', 'consistent']

        lines = content.split('\n')
        for line in lines:
            if any(keyword in line.lower() for keyword in pattern_keywords):
                patterns.append({
                    "description": line.strip(),
                    "type": self._classify_pattern(line)
                })

        return patterns[:5]

    def _classify_pattern(self, description: str) -> str:
        """Classify the type of pattern."""
        description_lower = description.lower()
        if 'trend' in description_lower:
            return 'trend'
        elif 'correlation' in description_lower:
            return 'correlation'
        elif 'cycle' in description_lower or 'periodic' in description_lower:
            return 'cyclical'
        else:
            return 'general'

    def _extract_insights(self, content: str) -> List[str]:
        """Extract key insights from analysis."""
        insights = []
        insight_keywords = ['shows', 'indicates', 'suggests', 'reveals', 'demonstrates']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in insight_keywords):
                insights.append(sentence.strip() + '.')

        return insights[:7]

    def _generate_recommendations(self, content: str) -> List[str]:
        """Generate recommendations based on analysis."""
        recommendations = []
        rec_keywords = ['recommend', 'suggest', 'should', 'consider', 'advise']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in rec_keywords):
                recommendations.append(sentence.strip() + '.')

        return recommendations[:5]

    def _calculate_analysis_confidence(self, content: str) -> Dict[str, float]:
        """Calculate confidence metrics for the analysis."""
        word_count = len(content.split())
        evidence_count = content.lower().count('evidence') + content.lower().count('data') + content.lower().count('shows')
        uncertainty_count = content.lower().count('may') + content.lower().count('might') + content.lower().count('possibly')

        confidence = min(0.95, 0.5 + (evidence_count * 0.1) - (uncertainty_count * 0.05) + (word_count / 1000))

        return {
            "overall_confidence": confidence,
            "evidence_strength": min(1.0, evidence_count / 10),
            "certainty_level": max(0.0, 1.0 - (uncertainty_count / 10))
        }

class CriticAgent(BaseAgent):
    """Agent specialized in critical evaluation and quality assurance."""

    def __init__(self, name: str, llm: ChatOpenAI):
        super().__init__(name, AgentRole.CRITIC, llm)
        self.evaluation_criteria = [
            "accuracy", "completeness", "logic", "evidence",
            "clarity", "relevance", "consistency"
        ]

    async def _execute_task(self, task: Task) -> Any:
        """Execute critical evaluation task."""
        prompt = ChatPromptTemplate.from_messages([
            SystemMessage(content="""You are a Critic Agent specializing in rigorous evaluation and quality assurance.
Your role is to:
1. Critically evaluate arguments and conclusions
2. Identify weaknesses, gaps, and potential biases
3. Verify logical consistency and evidence quality
4. Suggest improvements and alternative perspectives
5. Ensure high standards of analysis"""),
            HumanMessage(content=f"Critically evaluate the following: {task.description}")
        ])

        response = await self.llm.ainvoke(prompt.format_messages())

        # Structure critique results
        critique_result = {
            "evaluation": response.content,
            "strengths": self._identify_strengths(response.content),
            "weaknesses": self._identify_weaknesses(response.content),
            "gaps": self._identify_gaps(response.content),
            "improvements": self._suggest_improvements(response.content),
            "quality_score": self._calculate_quality_score(response.content)
        }

        return critique_result

    def _identify_strengths(self, content: str) -> List[str]:
        """Identify strengths in the evaluated content."""
        strengths = []
        strength_keywords = ['strong', 'excellent', 'well', 'good', 'effective', 'solid']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in strength_keywords):
                strengths.append(sentence.strip() + '.')

        return strengths[:5]

    def _identify_weaknesses(self, content: str) -> List[str]:
        """Identify weaknesses in the evaluated content."""
        weaknesses = []
        weakness_keywords = ['weak', 'lack', 'insufficient', 'poor', 'inadequate', 'missing']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in weakness_keywords):
                weaknesses.append(sentence.strip() + '.')

        return weaknesses[:5]

    def _identify_gaps(self, content: str) -> List[str]:
        """Identify gaps in the analysis."""
        gaps = []
        gap_keywords = ['gap', 'missing', 'overlook', 'fail to', 'does not address', 'ignores']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in gap_keywords):
                gaps.append(sentence.strip() + '.')

        return gaps[:5]

    def _suggest_improvements(self, content: str) -> List[str]:
        """Suggest improvements based on critique."""
        improvements = []
        improvement_keywords = ['could', 'should', 'improve', 'enhance', 'strengthen', 'add']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in improvement_keywords):
                improvements.append(sentence.strip() + '.')

        return improvements[:5]

    def _calculate_quality_score(self, content: str) -> Dict[str, float]:
        """Calculate quality scores for different criteria."""
        scores = {}

        for criterion in self.evaluation_criteria:
            # Simplified scoring based on keyword presence
            positive_count = content.lower().count(criterion) + content.lower().count('good') + content.lower().count('strong')
            negative_count = content.lower().count('poor') + content.lower().count('weak') + content.lower().count('lacking')

            score = min(1.0, max(0.0, 0.5 + (positive_count * 0.1) - (negative_count * 0.15)))
            scores[criterion] = score

        scores['overall'] = sum(scores.values()) / len(scores)

        return scores

class SynthesizerAgent(BaseAgent):
    """Agent specialized in synthesizing information and creating coherent narratives."""

    def __init__(self, name: str, llm: ChatOpenAI):
        super().__init__(name, AgentRole.SYNTHESIZER, llm)
        self.synthesis_strategies = ["integrate", "summarize", "reconcile", "consolidate"]

    async def _execute_task(self, task: Task) -> Any:
        """Execute synthesis task."""
        prompt = ChatPromptTemplate.from_messages([
            SystemMessage(content="""You are a Synthesizer Agent specializing in integrating diverse information.
Your role is to:
1. Combine multiple perspectives into coherent narratives
2. Resolve contradictions and find common ground
3. Create comprehensive summaries that capture key insights
4. Generate actionable conclusions and recommendations
5. Ensure clarity and accessibility of complex information"""),
            HumanMessage(content=f"Synthesize the following information: {task.description}")
        ])

        response = await self.llm.ainvoke(prompt.format_messages())

        # Structure synthesis results
        synthesis_result = {
            "synthesis": response.content,
            "key_themes": self._extract_themes(response.content),
            "consensus_points": self._identify_consensus(response.content),
            "contradictions": self._identify_contradictions(response.content),
            "final_recommendations": self._generate_final_recommendations(response.content),
            "executive_summary": self._create_executive_summary(response.content)
        }

        return synthesis_result

    def _extract_themes(self, content: str) -> List[Dict[str, str]]:
        """Extract major themes from synthesis."""
        themes = []
        theme_keywords = ['theme', 'pattern', 'trend', 'common', 'recurring', 'central']

        paragraphs = content.split('\n\n')
        for i, paragraph in enumerate(paragraphs[:5]):  # Limit to first 5 paragraphs
            if any(keyword in paragraph.lower() for keyword in theme_keywords):
                themes.append({
                    "theme": f"Theme {i+1}",
                    "description": paragraph.strip()[:200] + "..." if len(paragraph) > 200 else paragraph.strip()
                })

        return themes

    def _identify_consensus(self, content: str) -> List[str]:
        """Identify points of consensus."""
        consensus = []
        consensus_keywords = ['agree', 'consensus', 'common', 'shared', 'unanimous', 'consistent']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in consensus_keywords):
                consensus.append(sentence.strip() + '.')

        return consensus[:5]

    def _identify_contradictions(self, content: str) -> List[str]:
        """Identify contradictions or conflicts."""
        contradictions = []
        conflict_keywords = ['however', 'contrary', 'conflict', 'disagree', 'opposing', 'contradicts']

        sentences = content.split('.')
        for sentence in sentences:
            if any(keyword in sentence.lower() for keyword in conflict_keywords):
                contradictions.append(sentence.strip() + '.')

        return contradictions[:3]

    def _generate_final_recommendations(self, content: str) -> List[str]:
        """Generate final synthesized recommendations."""
        recommendations = []

        # Look for recommendation sections
        lines = content.split('\n')
        in_recommendations = False

        for line in lines:
            if 'recommend' in line.lower() or 'conclusion' in line.lower():
                in_recommendations = True
            elif in_recommendations and line.strip():
                recommendations.append(line.strip())
                if len(recommendations) >= 5:
                    break

        return recommendations

    def _create_executive_summary(self, content: str) -> str:
        """Create an executive summary of the synthesis."""
        # Take the first paragraph or create a summary
        paragraphs = content.split('\n\n')
        if paragraphs:
            summary = paragraphs[0][:300]
            if len(paragraphs[0]) > 300:
                summary += "..."
            return summary
        return "Summary generation in progress..."

580
+ class CoordinatorAgent(BaseAgent):
581
+ """Agent responsible for coordinating other agents and managing workflow."""
582
+
583
+ def __init__(self, name: str, llm: ChatOpenAI):
584
+ super().__init__(name, AgentRole.COORDINATOR, llm)
585
+ self.agents: Dict[str, BaseAgent] = {}
586
+ self.task_queue: List[Task] = []
587
+ self.completed_tasks: List[Task] = []
588
+ self.workflow_graph = nx.DiGraph()
589
+ self.execution_history: List[Dict[str, Any]] = []
590
+
591
+ def register_agent(self, agent: BaseAgent):
592
+ """Register an agent with the coordinator."""
593
+ self.agents[agent.name] = agent
594
+ self.workflow_graph.add_node(agent.name, role=agent.role.value)
595
+
596
+ async def decompose_problem(self, problem: str) -> List[Task]:
597
+ """Decompose a complex problem into subtasks."""
598
+ prompt = ChatPromptTemplate.from_messages([
599
+ SystemMessage(content="""You are a Coordinator Agent responsible for breaking down complex problems.
600
+ Decompose the problem into specific subtasks that can be assigned to specialized agents:
601
+ - Researcher: For gathering information and facts
602
+ - Analyst: For analyzing data and identifying patterns
603
+ - Critic: For evaluating quality and identifying issues
604
+ - Synthesizer: For combining insights and creating summaries
605
+
606
+ Create clear, actionable subtasks with dependencies."""),
607
+ HumanMessage(content=f"Decompose this problem into subtasks: {problem}")
608
+ ])
609
+
610
+ response = await self.llm.ainvoke(prompt.format_messages())
611
+
612
+ # Parse response into tasks
613
+ tasks = self._parse_tasks(response.content, problem)
614
+
615
+ return tasks
616
+
617
+ def _parse_tasks(self, content: str, original_problem: str) -> List[Task]:
618
+ """Parse LLM response into Task objects."""
619
+ tasks = []
620
+ lines = content.split('\n')
621
+
622
+ task_id = 1
623
+ current_role = None
624
+
625
+ for line in lines:
626
+ line = line.strip()
627
+ if not line:
628
+ continue
629
+
630
+ # Check if line indicates a role
631
+ for role in AgentRole:
632
+ if role.value in line:
633
+ current_role = role.value
634
+ break
635
+
636
+ # If line starts with number or dash, it's likely a task
637
+ if (line[0].isdigit() or line.startswith('-')) and current_role:
638
+ # Extract task description
639
+ task_desc = line.lstrip('0123456789.-').strip()
640
+
641
+ task = Task(
642
+ id=f"task_{task_id}",
643
+ description=task_desc,
644
+ metadata={
645
+ "original_problem": original_problem,
646
+ "suggested_role": current_role
647
+ }
648
+ )
649
+
650
+ tasks.append(task)
651
+ task_id += 1
652
+
653
+ # If no tasks were parsed, create default tasks
654
+ if not tasks:
655
+ tasks = [
656
+ Task(id="task_1", description=f"Research background information on: {original_problem}",
657
+ metadata={"suggested_role": "Researcher"}),
658
+ Task(id="task_2", description=f"Analyze key aspects of: {original_problem}",
659
+ metadata={"suggested_role": "Analyst"}),
660
+ Task(id="task_3", description="Critically evaluate the research and analysis",
661
+ metadata={"suggested_role": "Critic"}),
662
+ Task(id="task_4", description="Synthesize all findings into actionable insights",
663
+ metadata={"suggested_role": "Synthesizer"})
664
+ ]
665
+
666
+ return tasks
667
+
668
+ async def execute_workflow(self, tasks: List[Task], parallel: bool = True) -> Dict[str, Any]:
669
+ """Execute the workflow with given tasks."""
670
+ start_time = datetime.now()
671
+
672
+ # Build task dependency graph
673
+ self._build_dependency_graph(tasks)
674
+
675
+ # Execute tasks
676
+ if parallel:
677
+ results = await self._execute_parallel(tasks)
678
+ else:
679
+ results = await self._execute_sequential(tasks)
680
+
681
+ # Compile final results
682
+ end_time = datetime.now()
683
+ execution_time = (end_time - start_time).total_seconds()
684
+
685
+ workflow_result = {
686
+ "tasks": tasks,
687
+ "execution_time": execution_time,
688
+ "success_rate": self._calculate_success_rate(tasks),
689
+ "agent_contributions": self._compile_agent_contributions(tasks),
690
+ "workflow_graph": self.workflow_graph,
691
+ "timestamp": datetime.now()
692
+ }
693
+
694
+ self.execution_history.append(workflow_result)
695
+
696
+ return workflow_result
697
+
698
+ def _build_dependency_graph(self, tasks: List[Task]):
699
+ """Build a dependency graph for tasks."""
700
+ # For simplicity, create a linear dependency chain based on suggested roles
701
+ role_order = ["Researcher", "Analyst", "Critic", "Synthesizer"]
702
+
703
+ # Group tasks by role
704
+ tasks_by_role = {}
705
+ for task in tasks:
706
+ role = task.metadata.get("suggested_role", "Researcher")
707
+ if role not in tasks_by_role:
708
+ tasks_by_role[role] = []
709
+ tasks_by_role[role].append(task)
710
+
711
+ # Create dependencies
712
+ for i in range(len(role_order) - 1):
713
+ current_role = role_order[i]
714
+ next_role = role_order[i + 1]
715
+
716
+ if current_role in tasks_by_role and next_role in tasks_by_role:
717
+ for current_task in tasks_by_role[current_role]:
718
+ for next_task in tasks_by_role[next_role]:
719
+ next_task.dependencies.append(current_task.id)
720
+
721
+    async def _execute_parallel(self, tasks: List[Task]) -> List[Task]:
+        """Execute tasks in parallel where possible."""
+        completed = set()
+        pending = tasks.copy()
+
+        while pending:
+            # Find tasks that can be executed (no pending dependencies)
+            ready_tasks = [
+                task for task in pending
+                if all(dep in completed for dep in task.dependencies)
+            ]
+
+            if not ready_tasks:
+                # Deadlock prevention - execute the first pending task
+                ready_tasks = [pending[0]]
+
+            # Execute ready tasks in parallel
+            task_futures = []
+            for task in ready_tasks:
+                agent_name = self._select_agent_for_task(task)
+                if agent_name and agent_name in self.agents:
+                    agent = self.agents[agent_name]
+                    task_futures.append(agent.process_task(task))
+
+                    # Update workflow graph
+                    self.workflow_graph.add_edge(
+                        self.name, agent_name,
+                        task_id=task.id,
+                        timestamp=datetime.now()
+                    )
+                else:
+                    # No agent available: fail the task and drop it from the
+                    # queue so the while loop cannot spin forever
+                    task.status = TaskStatus.FAILED
+                    completed.add(task.id)
+                    pending.remove(task)
+
+            # Wait for tasks to complete
+            if task_futures:
+                completed_tasks = await asyncio.gather(*task_futures)
+
+                for task in completed_tasks:
+                    completed.add(task.id)
+                    pending.remove(task)
+                    self.completed_tasks.append(task)
+
+        return tasks
+
+    async def _execute_sequential(self, tasks: List[Task]) -> List[Task]:
+        """Execute tasks sequentially."""
+        for task in tasks:
+            agent_name = self._select_agent_for_task(task)
+            if agent_name and agent_name in self.agents:
+                agent = self.agents[agent_name]
+                await agent.process_task(task)
+
+                # Update workflow graph
+                self.workflow_graph.add_edge(
+                    self.name, agent_name,
+                    task_id=task.id,
+                    timestamp=datetime.now()
+                )
+
+                self.completed_tasks.append(task)
+
+        return tasks
+
+    def _select_agent_for_task(self, task: Task) -> Optional[str]:
+        """Select the best agent for a given task."""
+        suggested_role = task.metadata.get("suggested_role")
+
+        # Find an agent with a matching role
+        for agent_name, agent in self.agents.items():
+            if agent.role.value == suggested_role:
+                return agent_name
+
+        # Fall back to the first available agent
+        return next(iter(self.agents), None)
+
+    def _calculate_success_rate(self, tasks: List[Task]) -> float:
+        """Calculate the success rate of task execution."""
+        if not tasks:
+            return 0.0
+
+        successful = sum(1 for task in tasks if task.status == TaskStatus.COMPLETED)
+        return successful / len(tasks)
+
+    def _compile_agent_contributions(self, tasks: List[Task]) -> Dict[str, Any]:
+        """Compile contributions from each agent."""
+        contributions = {}
+
+        for agent_name, agent in self.agents.items():
+            agent_tasks = [task for task in tasks if task.assigned_to == agent_name]
+
+            contributions[agent_name] = {
+                "role": agent.role.value,
+                "tasks_completed": len(agent_tasks),
+                "average_confidence": np.mean([task.confidence for task in agent_tasks]) if agent_tasks else 0.0,
+                "total_execution_time": sum(
+                    (task.completed_at - task.created_at).total_seconds()
+                    for task in agent_tasks
+                    if task.completed_at
+                )
+            }
+
+        return contributions
+
+class WorkflowVisualizer:
+    """Handles visualization of agent interactions and workflow."""
+
+    def __init__(self):
+        self.color_map = Config.NODE_COLORS
+        self.layout_cache = {}
+
+    def create_workflow_graph(self, workflow_graph: nx.DiGraph,
+                              active_agents: List[str] = None) -> go.Figure:
+        """Create an interactive workflow visualization."""
+
+        # Use hierarchical layout
+        pos = self._hierarchical_layout(workflow_graph)
+
+        # Create edge trace
+        edge_trace = self._create_edge_trace(workflow_graph, pos)
+
+        # Create node trace
+        node_trace = self._create_node_trace(workflow_graph, pos, active_agents)
+
+        # Create figure ('titlefont_size' was removed in Plotly 5; use title.font instead)
+        fig = go.Figure(data=[edge_trace, node_trace],
+                        layout=go.Layout(
+                            title=dict(text='Agent Collaboration Network', font=dict(size=16)),
+                            showlegend=False,
+                            hovermode='closest',
+                            margin=dict(b=20, l=5, r=5, t=40),
+                            xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
+                            yaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
+                            plot_bgcolor='white'
+                        ))
+
+        return fig
+
+    def _hierarchical_layout(self, G: nx.DiGraph) -> Dict:
+        """Create a hierarchical layout for the graph."""
+        if len(G) == 0:
+            return {}
+
+        # Reuse the cached layout if the graph structure hasn't changed
+        graph_hash = hash(tuple(G.edges()))
+        if graph_hash in self.layout_cache:
+            return self.layout_cache[graph_hash]
+
+        # Define role hierarchy
+        hierarchy = {
+            'Coordinator': 0,
+            'Researcher': 1,
+            'Analyst': 2,
+            'Critic': 3,
+            'Synthesizer': 4
+        }
+
+        # Count nodes per level once so each row can be centered horizontally
+        level_totals = {}
+        for node in G.nodes():
+            level = hierarchy.get(G.nodes[node].get('role'), 5)
+            level_totals[level] = level_totals.get(level, 0) + 1
+
+        # Position nodes based on hierarchy
+        pos = {}
+        role_counts = {}
+
+        for node in G.nodes():
+            role = G.nodes[node].get('role', 'Unknown')
+            level = hierarchy.get(role, 5)
+
+            if level not in role_counts:
+                role_counts[level] = 0
+
+            x = role_counts[level] * 2 - (level_totals[level] - 1)
+            y = -level * 2
+
+            pos[node] = (x, y)
+            role_counts[level] += 1
+
+        # Cache the layout
+        self.layout_cache[graph_hash] = pos
+
+        return pos
+
+    def _create_edge_trace(self, G: nx.DiGraph, pos: Dict) -> go.Scatter:
+        """Create the edge trace for the graph."""
+        edge_x = []
+        edge_y = []
+
+        for edge in G.edges():
+            x0, y0 = pos.get(edge[0], (0, 0))
+            x1, y1 = pos.get(edge[1], (0, 0))
+            edge_x.extend([x0, x1, None])
+            edge_y.extend([y0, y1, None])
+
+        edge_trace = go.Scatter(
+            x=edge_x, y=edge_y,
+            line=dict(width=2, color='#888'),
+            hoverinfo='none',
+            mode='lines'
+        )
+
+        return edge_trace
+
+    def _create_node_trace(self, G: nx.DiGraph, pos: Dict,
+                           active_agents: List[str] = None) -> go.Scatter:
+        """Create the node trace for the graph."""
+        node_x = []
+        node_y = []
+        node_colors = []
+        node_sizes = []
+        node_text = []
+
+        for node in G.nodes():
+            x, y = pos.get(node, (0, 0))
+            node_x.append(x)
+            node_y.append(y)
+
+            # Get node attributes
+            role = G.nodes[node].get('role', 'Unknown')
+            color = self.color_map.get(role, '#666')
+
+            # Highlight active agents
+            if active_agents and node in active_agents:
+                size = 30
+                color = self._brighten_color(color)
+            else:
+                size = 20
+
+            node_colors.append(color)
+            node_sizes.append(size)
+
+            # Create hover text
+            degree = G.degree(node)
+            hover_text = f"{node}<br>Role: {role}<br>Connections: {degree}"
+            node_text.append(hover_text)
+
+        node_trace = go.Scatter(
+            x=node_x, y=node_y,
+            mode='markers+text',
+            hoverinfo='text',
+            text=list(G.nodes()),
+            textposition="top center",
+            hovertext=node_text,
+            marker=dict(
+                showscale=False,
+                color=node_colors,
+                size=node_sizes,
+                line=dict(color='white', width=2)
+            )
+        )
+
+        return node_trace
+
+    def _brighten_color(self, color: str) -> str:
+        """Make a color brighter for highlighting."""
+        # Simple brightening by mixing each channel 30% toward white
+        if color.startswith('#'):
+            r = int(color[1:3], 16)
+            g = int(color[3:5], 16)
+            b = int(color[5:7], 16)
+
+            r = int(r + (255 - r) * 0.3)
+            g = int(g + (255 - g) * 0.3)
+            b = int(b + (255 - b) * 0.3)
+
+            return f"#{r:02x}{g:02x}{b:02x}"
+
+        return color
+
+    def create_task_timeline(self, tasks: List[Task]) -> go.Figure:
+        """Create a timeline visualization of task execution."""
+
+        # Prepare data for timeline
+        timeline_data = []
+
+        for task in tasks:
+            if task.created_at and task.completed_at:
+                timeline_data.append({
+                    'Task': task.id,
+                    'Agent': task.assigned_to or 'Unassigned',
+                    'Start': task.created_at,
+                    'Finish': task.completed_at,
+                    'Status': task.status.value,
+                    'Confidence': task.confidence
+                })
+
+        if not timeline_data:
+            # Return an empty figure if there are no completed tasks
+            fig = go.Figure()
+            fig.add_annotation(
+                text="No completed tasks to display",
+                xref="paper", yref="paper",
+                x=0.5, y=0.5,
+                showarrow=False
+            )
+            return fig
+
+        df = pd.DataFrame(timeline_data)
+
+        # Create Gantt chart
+        fig = px.timeline(
+            df,
+            x_start="Start",
+            x_end="Finish",
+            y="Agent",
+            color="Confidence",
+            hover_data=["Task", "Status", "Confidence"],
+            title="Task Execution Timeline",
+            color_continuous_scale="Viridis"
+        )
+
+        # Figure has no update_yaxis method; the plural update_yaxes is correct
+        fig.update_yaxes(categoryorder="total ascending")
+        fig.update_layout(height=400)
+
+        return fig
+
+    def create_confidence_heatmap(self, agent_contributions: Dict[str, Any]) -> go.Figure:
+        """Create a heatmap of per-agent performance metrics."""
+
+        agents = list(agent_contributions.keys())
+        metrics = ['tasks_completed', 'average_confidence', 'total_execution_time']
+
+        # Assemble one row per metric for the heatmap
+        data = []
+        for metric in metrics:
+            row = []
+            for agent in agents:
+                value = agent_contributions[agent].get(metric, 0)
+                if metric == 'total_execution_time':
+                    value = value / 60  # Convert to minutes
+                row.append(value)
+            data.append(row)
+
+        # Create heatmap
+        fig = go.Figure(data=go.Heatmap(
+            z=data,
+            x=agents,
+            y=['Tasks Completed', 'Avg Confidence', 'Time (min)'],
+            colorscale='Blues',
+            text=np.round(data, 2),
+            texttemplate='%{text}',
+            textfont={"size": 10}
+        ))
+
+        fig.update_layout(
+            title="Agent Performance Metrics",
+            xaxis_title="Agents",
+            yaxis_title="Metrics",
+            height=300
+        )
+
+        return fig
+
+class ReportGenerator:
+    """Generates comprehensive reports from multi-agent collaboration."""
+
+    def __init__(self):
+        self.section_generators = {
+            'executive_summary': self._generate_executive_summary,
+            'task_analysis': self._generate_task_analysis,
+            'agent_contributions': self._generate_agent_contributions,
+            'key_findings': self._generate_key_findings,
+            'recommendations': self._generate_recommendations,
+            'confidence_analysis': self._generate_confidence_analysis
+        }
+
+    def generate_report(self,
+                        workflow_result: Dict[str, Any],
+                        problem_statement: str,
+                        include_sections: List[str] = None) -> str:
+        """Generate a comprehensive report from workflow results."""
+
+        if include_sections is None:
+            include_sections = list(self.section_generators.keys())
+
+        report_sections = []
+
+        # Header
+        report_sections.append(self._generate_header(problem_statement))
+
+        # Generate requested sections
+        for section in include_sections:
+            if section in self.section_generators:
+                section_content = self.section_generators[section](workflow_result, problem_statement)
+                report_sections.append(section_content)
+
+        # Footer
+        report_sections.append(self._generate_footer(workflow_result))
+
+        return "\n\n".join(report_sections)
+
+    def _generate_header(self, problem_statement: str) -> str:
+        """Generate the report header."""
+        return f"""# Multi-Agent Analysis Report
+
+**Generated:** {datetime.now().strftime('%B %d, %Y at %I:%M %p')}
+
+**Problem Statement:** {problem_statement}
+
+---"""
+
+    def _generate_executive_summary(self, workflow_result: Dict[str, Any],
+                                    problem_statement: str) -> str:
+        """Generate the executive summary section."""
+        tasks = workflow_result.get('tasks', [])
+        success_rate = workflow_result.get('success_rate', 0)
+        execution_time = workflow_result.get('execution_time', 0)
+
+        # Find synthesis results
+        synthesis_task = None
+        for task in tasks:
+            if task.assigned_to and 'Synthesizer' in task.assigned_to:
+                synthesis_task = task
+                break
+
+        summary = f"""## Executive Summary
+
+The multi-agent system analyzed the problem with a **{success_rate:.0%} task completion rate** in **{execution_time:.1f} seconds**.
+
+"""
+
+        if synthesis_task and synthesis_task.result:
+            if isinstance(synthesis_task.result, dict) and 'executive_summary' in synthesis_task.result:
+                summary += synthesis_task.result['executive_summary']
+            else:
+                summary += "The analysis revealed key insights across multiple dimensions of the problem."
+
+        return summary
+
+    def _generate_task_analysis(self, workflow_result: Dict[str, Any],
+                                problem_statement: str) -> str:
+        """Generate the task analysis section."""
+        tasks = workflow_result.get('tasks', [])
+
+        content = "## Task Analysis\n\n"
+
+        # Group tasks by status
+        completed_tasks = [t for t in tasks if t.status == TaskStatus.COMPLETED]
+        failed_tasks = [t for t in tasks if t.status == TaskStatus.FAILED]
+
+        content += f"**Total Tasks:** {len(tasks)}\n"
+        content += f"**Completed:** {len(completed_tasks)}\n"
+        content += f"**Failed:** {len(failed_tasks)}\n\n"
+
+        # List tasks by agent
+        content += "### Tasks by Agent\n\n"
+
+        agents_tasks = {}
+        for task in tasks:
+            agent = task.assigned_to or "Unassigned"
+            agents_tasks.setdefault(agent, []).append(task)
+
+        for agent, agent_tasks in agents_tasks.items():
+            content += f"**{agent}:**\n"
+            for task in agent_tasks:
+                status_emoji = "✓" if task.status == TaskStatus.COMPLETED else "✗"
+                confidence = f"({task.confidence:.0%} confidence)" if task.confidence > 0 else ""
+                content += f"- {status_emoji} {task.description} {confidence}\n"
+            content += "\n"
+
+        return content
+
+    def _generate_agent_contributions(self, workflow_result: Dict[str, Any],
+                                      problem_statement: str) -> str:
+        """Generate the agent contributions section."""
+        contributions = workflow_result.get('agent_contributions', {})
+
+        content = "## Agent Contributions\n\n"
+
+        for agent, stats in contributions.items():
+            role = stats.get('role', 'Unknown')
+            tasks_completed = stats.get('tasks_completed', 0)
+            avg_confidence = stats.get('average_confidence', 0)
+            exec_time = stats.get('total_execution_time', 0)
+
+            content += f"### {agent} ({role})\n"
+            content += f"- Tasks Completed: {tasks_completed}\n"
+            content += f"- Average Confidence: {avg_confidence:.0%}\n"
+            content += f"- Total Execution Time: {exec_time:.1f}s\n\n"
+
+        return content
+
+    def _generate_key_findings(self, workflow_result: Dict[str, Any],
+                               problem_statement: str) -> str:
+        """Generate the key findings section."""
+        tasks = workflow_result.get('tasks', [])
+
+        content = "## Key Findings\n\n"
+
+        # Extract findings from each agent type
+        for task in tasks:
+            if task.status == TaskStatus.COMPLETED and task.result:
+                agent_role = task.metadata.get('suggested_role', '')
+
+                if 'Researcher' in agent_role and isinstance(task.result, dict):
+                    if 'key_points' in task.result:
+                        content += "### Research Findings\n"
+                        for point in task.result['key_points'][:5]:
+                            content += f"- {point}\n"
+                        content += "\n"
+
+                elif 'Analyst' in agent_role and isinstance(task.result, dict):
+                    if 'insights' in task.result:
+                        content += "### Analytical Insights\n"
+                        for insight in task.result['insights'][:5]:
+                            content += f"- {insight}\n"
+                        content += "\n"
+
+                elif 'Critic' in agent_role and isinstance(task.result, dict):
+                    if 'strengths' in task.result:
+                        content += "### Identified Strengths\n"
+                        for strength in task.result['strengths'][:3]:
+                            content += f"- {strength}\n"
+                        content += "\n"
+
+                    if 'gaps' in task.result:
+                        content += "### Identified Gaps\n"
+                        for gap in task.result['gaps'][:3]:
+                            content += f"- {gap}\n"
+                        content += "\n"
+
+        return content
+
+    def _generate_recommendations(self, workflow_result: Dict[str, Any],
+                                  problem_statement: str) -> str:
+        """Generate the recommendations section."""
+        tasks = workflow_result.get('tasks', [])
+
+        content = "## Recommendations\n\n"
+
+        all_recommendations = []
+
+        # Collect recommendations from all agents
+        for task in tasks:
+            if task.status == TaskStatus.COMPLETED and task.result:
+                if isinstance(task.result, dict):
+                    if 'recommendations' in task.result:
+                        all_recommendations.extend(task.result['recommendations'])
+                    elif 'final_recommendations' in task.result:
+                        all_recommendations.extend(task.result['final_recommendations'])
+                    elif 'improvements' in task.result:
+                        all_recommendations.extend(task.result['improvements'])
+
+        # Remove duplicates (case-insensitively) while preserving order
+        unique_recommendations = []
+        seen = set()
+        for rec in all_recommendations:
+            rec_lower = rec.lower()
+            if rec_lower not in seen:
+                seen.add(rec_lower)
+                unique_recommendations.append(rec)
+
+        # Categorize recommendations
+        high_priority = []
+        medium_priority = []
+        low_priority = []
+
+        for rec in unique_recommendations[:10]:
+            # Simple keyword-based prioritization
+            if any(word in rec.lower() for word in ['critical', 'must', 'essential', 'immediately']):
+                high_priority.append(rec)
+            elif any(word in rec.lower() for word in ['should', 'recommend', 'important']):
+                medium_priority.append(rec)
+            else:
+                low_priority.append(rec)
+
+        if high_priority:
+            content += "### High Priority\n"
+            for rec in high_priority:
+                content += f"- {rec}\n"
+            content += "\n"
+
+        if medium_priority:
+            content += "### Medium Priority\n"
+            for rec in medium_priority:
+                content += f"- {rec}\n"
+            content += "\n"
+
+        if low_priority:
+            content += "### Low Priority\n"
+            for rec in low_priority:
+                content += f"- {rec}\n"
+            content += "\n"
+
+        return content
+
+    def _generate_confidence_analysis(self, workflow_result: Dict[str, Any],
+                                      problem_statement: str) -> str:
+        """Generate the confidence analysis section."""
+        tasks = workflow_result.get('tasks', [])
+        contributions = workflow_result.get('agent_contributions', {})
+
+        content = "## Confidence Analysis\n\n"
+
+        # Overall confidence (guard against an empty list, which would yield NaN)
+        confidences = [t.confidence for t in tasks if t.confidence > 0]
+        overall_confidence = np.mean(confidences) if confidences else 0.0
+        content += f"**Overall Confidence Score:** {overall_confidence:.0%}\n\n"
+
+        # Confidence by agent type
+        content += "### Confidence by Agent Role\n"
+
+        for agent, stats in contributions.items():
+            avg_conf = stats.get('average_confidence', 0)
+            role = stats.get('role', 'Unknown')
+            content += f"- **{role}**: {avg_conf:.0%}\n"
+
+        content += "\n### Confidence Distribution\n"
+
+        # Categorize confidence levels
+        high_conf = [t for t in tasks if t.confidence >= 0.8]
+        medium_conf = [t for t in tasks if 0.5 <= t.confidence < 0.8]
+        low_conf = [t for t in tasks if t.confidence < 0.5]
+
+        content += f"- High Confidence (≥80%): {len(high_conf)} tasks\n"
+        content += f"- Medium Confidence (50-79%): {len(medium_conf)} tasks\n"
+        content += f"- Low Confidence (<50%): {len(low_conf)} tasks\n"
+
+        return content
+
+    def _generate_footer(self, workflow_result: Dict[str, Any]) -> str:
+        """Generate the report footer."""
+        execution_time = workflow_result.get('execution_time', 0)
+        timestamp = workflow_result.get('timestamp', datetime.now())
+
+        return f"""---
+
+**Report Generation Details:**
+- Analysis completed in {execution_time:.1f} seconds
+- Report generated at {timestamp.strftime('%B %d, %Y at %I:%M %p')}
+- Powered by Multi-Agent AI Collaboration System"""
+
+# Gradio Interface Functions
+def create_gradio_interface():
+    """Create the main Gradio interface for the multi-agent system."""
+
+    # Initialize components
+    coordinator = None
+    visualizer = WorkflowVisualizer()
+    report_generator = ReportGenerator()
+
+    # State variables
+    current_workflow = None
+    current_problem = ""
+
+    def initialize_agents(api_key: str, model: str = "gpt-4") -> str:
+        """Initialize the multi-agent system."""
+        nonlocal coordinator
+
+        if not api_key:
+            return "Please provide an OpenAI API key to initialize the agents."
+
+        try:
+            # Initialize LLM
+            llm = ChatOpenAI(
+                api_key=api_key,
+                model=model,
+                temperature=Config.TEMPERATURE,
+                max_tokens=Config.MAX_TOKENS
+            )
+
+            # Create coordinator
+            coordinator = CoordinatorAgent("Coordinator", llm)
+
+            # Create specialized agents
+            researcher = ResearcherAgent("Researcher-1", llm)
+            analyst = AnalystAgent("Analyst-1", llm)
+            critic = CriticAgent("Critic-1", llm)
+            synthesizer = SynthesizerAgent("Synthesizer-1", llm)
+
+            # Register agents with the coordinator
+            coordinator.register_agent(researcher)
+            coordinator.register_agent(analyst)
+            coordinator.register_agent(critic)
+            coordinator.register_agent(synthesizer)
+
+            return f"Successfully initialized multi-agent system with {len(coordinator.agents)} agents."
+
+        except Exception as e:
+            logger.error(f"Error initializing agents: {str(e)}")
+            return f"Error initializing agents: {str(e)}"
+
+    async def analyze_problem(problem: str, execution_mode: str) -> Tuple[str, Any, Any, Any]:
+        """Analyze a problem using the multi-agent system."""
+        nonlocal current_workflow, current_problem
+
+        if not coordinator:
+            return "Please initialize the agents first.", None, None, None
+
+        if not problem:
+            return "Please enter a problem to analyze.", None, None, None
+
+        current_problem = problem
+
+        try:
+            # Decompose the problem into subtasks
+            tasks = await coordinator.decompose_problem(problem)
+
+            if not tasks:
+                return "Failed to decompose problem into tasks.", None, None, None
+
+            # Execute the workflow
+            parallel = execution_mode == "Parallel"
+            current_workflow = await coordinator.execute_workflow(tasks, parallel=parallel)
+
+            # Create visualizations
+            workflow_graph = visualizer.create_workflow_graph(
+                current_workflow['workflow_graph'],
+                active_agents=list(coordinator.agents.keys())
+            )
+
+            timeline_chart = visualizer.create_task_timeline(tasks)
+
+            confidence_heatmap = visualizer.create_confidence_heatmap(
+                current_workflow['agent_contributions']
+            )
+
+            # Generate status summary
+            success_rate = current_workflow['success_rate']
+            execution_time = current_workflow['execution_time']
+
+            status = f"""Analysis completed successfully!
+
+- Tasks executed: {len(tasks)}
+- Success rate: {success_rate:.0%}
+- Execution time: {execution_time:.1f} seconds
+- Agents involved: {len(coordinator.agents)}"""
+
+            return status, workflow_graph, timeline_chart, confidence_heatmap
+
+        except Exception as e:
+            logger.error(f"Error analyzing problem: {str(e)}")
+            return f"Error during analysis: {str(e)}", None, None, None
+
+    def generate_report(selected_sections: List[str]) -> str:
+        """Generate a report from the current workflow results."""
+
+        if not current_workflow:
+            return "No analysis results available. Please run an analysis first."
+
+        try:
+            report = report_generator.generate_report(
+                current_workflow,
+                current_problem,
+                include_sections=selected_sections
+            )
+
+            return report
+
+        except Exception as e:
+            logger.error(f"Error generating report: {str(e)}")
+            return f"Error generating report: {str(e)}"
+
+    def get_agent_details(agent_name: str) -> str:
+        """Get detailed information about a specific agent."""
+
+        if not coordinator or agent_name not in coordinator.agents:
+            return "Agent not found or system not initialized."
+
+        agent = coordinator.agents[agent_name]
+
+        details = f"""## Agent: {agent.name}
+
+**Role:** {agent.role.value}
+
+**Status:** {'Active' if agent.active else 'Inactive'}
+
+**Completed Tasks:** {len(agent.completed_tasks)}
+
+**Current Task:** {agent.current_task.description if agent.current_task else 'None'}
+
+### Recent Tasks:
+"""
+
+        for task in agent.completed_tasks[-5:]:
+            details += f"\n- **{task.id}**: {task.description}"
+            details += f"\n  - Status: {task.status.value}"
+            details += f"\n  - Confidence: {task.confidence:.0%}"
+            # Guard against tasks that never recorded a completion time
+            if task.completed_at:
+                details += f"\n  - Execution Time: {(task.completed_at - task.created_at).total_seconds():.1f}s"
+            details += "\n"
+
+        return details
+
+    # Create custom CSS for professional styling
+    custom_css = """
+    .gradio-container {
+        font-family: 'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', sans-serif;
+    }
+
+    .gr-button-primary {
+        background-color: #2563eb !important;
+        border-color: #2563eb !important;
+    }
+
+    .gr-button-primary:hover {
+        background-color: #1d4ed8 !important;
+        border-color: #1d4ed8 !important;
+    }
+
+    .container {
+        max-width: 1200px;
+        margin: 0 auto;
+    }
+
+    h1 {
+        background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+        -webkit-background-clip: text;
+        -webkit-text-fill-color: transparent;
+        text-align: center;
+        font-size: 2.5rem;
+        font-weight: 800;
+        margin-bottom: 1rem;
+    }
+
+    h2 {
+        color: #1f2937;
+        font-size: 1.5rem;
+        font-weight: 600;
+        margin-top: 2rem;
+        margin-bottom: 1rem;
+    }
+
+    .status-box {
+        background-color: #f3f4f6;
+        border-radius: 8px;
+        padding: 1rem;
+        margin: 1rem 0;
+        border-left: 4px solid #3b82f6;
+    }
+    """
+
+    # Create Gradio interface
+    with gr.Blocks(title="Multi-Agent AI Collaboration System",
+                   theme=gr.themes.Base(),
+                   css=custom_css) as interface:
+
+        gr.Markdown("""
+        # Multi-Agent AI Collaboration System
+
+        Advanced AI system with specialized agents working together to solve complex problems through intelligent task decomposition and parallel processing.
+        """)
+
+        # System Configuration
+        with gr.Row():
+            with gr.Column(scale=3):
+                api_key_input = gr.Textbox(
+                    label="OpenAI API Key",
+                    placeholder="sk-...",
+                    type="password"
+                )
+            with gr.Column(scale=1):
+                model_select = gr.Dropdown(
+                    choices=["gpt-4", "gpt-3.5-turbo"],
+                    value="gpt-4",
+                    label="Model"
+                )
+            with gr.Column(scale=1):
+                init_button = gr.Button("Initialize Agents", variant="primary")
+
+        init_status = gr.Textbox(label="Initialization Status", interactive=False)
+
+        # Main tabs
+        with gr.Tabs():
+            # Problem Analysis Tab
+            with gr.TabItem("Problem Analysis"):
+                gr.Markdown("### Enter a complex problem for multi-agent analysis")
+
+                problem_input = gr.Textbox(
+                    label="Problem Statement",
+                    placeholder="Example: Analyze the potential impact of AI on healthcare delivery in the next 5 years",
+                    lines=3
+                )
+
+                with gr.Row():
+                    execution_mode = gr.Radio(
+                        choices=["Sequential", "Parallel"],
+                        value="Parallel",
+                        label="Execution Mode"
+                    )
+                    analyze_button = gr.Button("Analyze Problem", variant="primary")
+
+                analysis_status = gr.Textbox(
+                    label="Analysis Status",
+                    interactive=False,
+                    lines=5
+                )
+
+                # Visualization outputs
+                with gr.Row():
+                    workflow_graph = gr.Plot(label="Agent Collaboration Network")
+
+                with gr.Row():
+                    timeline_chart = gr.Plot(label="Task Execution Timeline")
+                    confidence_heatmap = gr.Plot(label="Agent Performance Metrics")
+
+            # Agent Details Tab
+            with gr.TabItem("Agent Details"):
+                gr.Markdown("### View detailed information about each agent")
+
+                agent_selector = gr.Dropdown(
+                    choices=["Researcher-1", "Analyst-1", "Critic-1", "Synthesizer-1"],
+                    label="Select Agent"
+                )
+
+                agent_details_button = gr.Button("Get Agent Details")
+
+                agent_details_output = gr.Markdown()
+
+            # Report Generation Tab
+            with gr.TabItem("Report Generation"):
+                gr.Markdown("### Generate a comprehensive analysis report")
+
+                section_selector = gr.CheckboxGroup(
+                    choices=[
+                        "executive_summary",
+                        "task_analysis",
+                        "agent_contributions",
+                        "key_findings",
+                        "recommendations",
+                        "confidence_analysis"
+                    ],
+                    value=[
+                        "executive_summary",
+                        "key_findings",
+                        "recommendations"
+                    ],
+                    label="Select Report Sections"
+                )
+
+                generate_report_button = gr.Button("Generate Report", variant="primary")
+
+                report_output = gr.Markdown()
+
+            # Example Problems Tab
+            with gr.TabItem("Example Problems"):
+                gr.Markdown("""
+                ### Example Problems for Analysis
+
+                Click a "Load Example" button below to load that problem into the analysis tab:
+
+                1. **Business Strategy**: "Develop a comprehensive strategy for a traditional retail company to transition to e-commerce while maintaining customer loyalty"
+
+                2. **Technology Assessment**: "Evaluate the potential risks and benefits of implementing blockchain technology in supply chain management"
+
+                3. **Market Analysis**: "Analyze the competitive landscape for electric vehicles and identify key success factors for new entrants"
+
+                4. **Policy Evaluation**: "Assess the implications of remote work policies on organizational culture and productivity"
+
+                5. **Innovation Planning**: "Design an innovation framework for a healthcare organization to integrate AI-powered diagnostic tools"
+                """)
+
+                example_problems = [
+                    "Develop a comprehensive strategy for a traditional retail company to transition to e-commerce while maintaining customer loyalty",
+                    "Evaluate the potential risks and benefits of implementing blockchain technology in supply chain management",
+                    "Analyze the competitive landscape for electric vehicles and identify key success factors for new entrants",
+                    "Assess the implications of remote work policies on organizational culture and productivity",
+                    "Design an innovation framework for a healthcare organization to integrate AI-powered diagnostic tools"
+                ]
+
+                example_buttons = []
+                for i, problem in enumerate(example_problems):
+                    btn = gr.Button(f"Load Example {i+1}", size="sm")
+                    example_buttons.append(btn)
+
+        # Event handlers
+        init_button.click(
+            fn=initialize_agents,
+            inputs=[api_key_input, model_select],
+            outputs=init_status
+        )
+
+        # Gradio runs async handlers on its own event loop, so the coroutine
+        # function can be passed directly instead of wrapping it in asyncio.run()
+        analyze_button.click(
+            fn=analyze_problem,
+            inputs=[problem_input, execution_mode],
+            outputs=[analysis_status, workflow_graph, timeline_chart, confidence_heatmap]
+        )
+
+        agent_details_button.click(
+            fn=get_agent_details,
+            inputs=agent_selector,
+            outputs=agent_details_output
+        )
+
+        generate_report_button.click(
+            fn=generate_report,
+            inputs=section_selector,
+            outputs=report_output
+        )
+
+        # Example button handlers (bind i via a default argument so each
+        # lambda captures its own index)
+        for i, btn in enumerate(example_buttons):
+            btn.click(
+                fn=lambda idx=i: example_problems[idx],
+                outputs=problem_input
+            )
+
+    return interface
+
+# Main execution
+if __name__ == "__main__":
+    interface = create_gradio_interface()
+    interface.launch(share=True)