Commit 918bdb4
Parent(s): 956cc4b

refactor!: add mcp solver, refactor all systems

feat!: completed mcp tool solver system
refactor!: streamlined services
feat!: add logging system
- README.md +95 -37
- src/agents/task_composer_agent.py +5 -3
- src/agents/task_processing.py +5 -3
- src/app.py +280 -211
- src/constraint_solvers/timetable/constraints.py +10 -6
- src/constraint_solvers/timetable/domain.py +4 -0
- src/constraint_solvers/timetable/solver.py +1 -1
- src/domain.py +3 -5
- src/factory/data_generators.py +61 -18
- src/factory/data_provider.py +63 -40
- src/handlers.py +58 -29
- src/helpers.py +9 -1
- src/mcp_handlers.py +202 -94
- src/services/__init__.py +2 -0
- src/services/data_service.py +43 -32
- src/services/logging_service.py +26 -7
- src/services/schedule_service.py +75 -58
- src/services/state_service.py +96 -0
- src/utils/extract_calendar.py +6 -0
- src/utils/load_secrets.py +6 -3
- src/utils/logging_config.py +84 -0
README.md
CHANGED
@@ -15,9 +15,7 @@ tags: ["agent-demo-track"]
 
 Yuga Planner is a neuro-symbolic system prototype: it provides an agent-powered team scheduling and task allocation platform built on [Gradio](https://gradio.app/).
 
-It takes a project description file such as a README.md file, breaks it down int…
-
-**Demo Video:** [pCloud]()
+It takes a project description, breaks it down into actionable tasks through a [LLamaIndex](https://www.llamaindex.ai/) agent, then uses [Timefold](http://www.timefold.ai) to generate optimal employee schedules for complex projects.
 
 ## 🚀 Try It Now
 **Live Demo:**
@@ -26,25 +24,49 @@ It takes a project description file such as a README.md file, breaks it down int…
 
 **Source Code on GitHub:**
 [https://github.com/blackopsrepl/yuga-planner](https://github.com/blackopsrepl/yuga-planner)
 
-### Usage
+### Gradio Web Demo Usage
 
-1. Go to [the live demo](https://huggingface.co/spaces/…
+1. Go to [the live demo](https://huggingface.co/spaces/blackopsrepl/yuga-planner) or [http://localhost:7860](http://localhost:7860)
 
-2. Upload…
+2. **Upload project files** or **use mock projects:**
+   - Upload one or more Markdown project file(s), then click "Load Data"
+   - OR select from pre-configured mock projects for quick testing
    - Each file will be taken as a separate project
-   - The app will parse, decompose, and estimate tasks
-…
+   - The app will parse, decompose, and estimate tasks using LLM agents
+
+3. **Generate schedule:**
+   - Click "Solve" to generate an optimal schedule against a randomly generated team
+   - View results interactively with real-time solver progress
+   - Task order is preserved within each project
 
-…
+### MCP Tool Usage
+
+1. **In any MCP-compatible chatbot or agent platform:**
+   ```
+   use yuga-planner mcp tool
+   Task Description: [Your task description]
+   ```
 
+2. **Attach your calendar file (.ics)** to provide existing commitments
+
+3. **Receive optimized schedule** that integrates your new task with existing calendar events
 
 ## Architecture
 
-- **…
-…
-…
-- **…
-- **…
+Yuga Planner follows a **service-oriented architecture** with clear separation of concerns:
+
+### Core Services Layer
+- **DataService:** Handles data loading, processing, and format conversion from various sources (Markdown, calendars)
+- **ScheduleService:** Orchestrates schedule generation, solver management, and solution polling
+- **StateService:** Centralized state management for job tracking and schedule storage
+- **LoggingService:** Real-time log streaming for UI feedback and debugging
+- **MockProjectService:** Provides sample project data for testing and demos
+
+### System Components
+- **Gradio UI:** Modern web interface with real-time updates and interactive schedule visualization
+- **Task Composer Agent:** Uses [LLamaIndex](https://www.llamaindex.ai/) + [Nebius AI](https://nebius.ai/) for intelligent task decomposition and estimation
+- **Constraint Solver:** [Timefold](http://www.timefold.ai) optimization engine for optimal task-to-employee assignments
+- **MCP Integration:** Model Context Protocol endpoint for agent workflow integration
 
 ---
 
@@ -63,30 +85,78 @@ It takes a project description file such as a README.md file, breaks it down int…
 
 | **Calendar Parsing** | Extracts tasks from uploaded calendar files (.ics) | ✅ |
 | **MCP Endpoint** | API endpoint for MCP tool integration | ✅ |
 
-## …
-…
-…
+## 🎯 Two Usage Modes
+
+Yuga Planner operates as **two separate systems** serving different use cases:
+
+### 1. 🖥️ Gradio Web Demo
+**Purpose:** Interactive team scheduling for project management
+- **Access:** [Live demo](https://huggingface.co/spaces/blackopsrepl/yuga-planner) or local web interface
+- **Input:** Upload Markdown project files or use pre-configured mock projects
+- **Team:** Schedules against a **randomly generated team** with diverse skills and availability
+- **Use Case:** Project managers scheduling real teams for complex multi-project workloads
+
+### 2. 🤖 MCP Personal Tool
+**Purpose:** Individual task scheduling integrated with personal calendars
+- **Access:** Through MCP-compatible chatbots and agent platforms
+- **Input:** Attach `.ics` calendar files + natural language task descriptions
+- **Team:** Schedules against your **personal calendar** and existing commitments
+- **Use Case:** Personal productivity and task planning around existing appointments
+
+**Example MCP Usage:**
+```
+User: use yuga-planner mcp tool
+Task Description: Create a new EC2 instance on AWS
+[Attaches calendar.ics file]
+
+Tool Response: Optimized schedule created - EC2 setup task assigned to
+available time slots around your existing meetings
+```
+
+## 🧩 MCP Tool Integration Details
 
 **Features:**
-- Accepts calendar files and user…
-- Parses events…
-…
-…
+- Accepts calendar files and user task descriptions via chat interface
+- Parses existing calendar events and new task requirements
+- **Full schedule solving support** - generates optimized task assignments
+- Returns complete solved schedules integrated with personal calendar
+- Designed for seamless chatbot and agent workflow integration
+
+**Current Limitations:**
+- **Weekend constraints:** Tasks can be scheduled on weekends (should respect work-week boundaries)
+- **Working hours:** No enforcement of standard business hours (8 AM - 6 PM)
+- **Calendar pinning:** Tasks from uploaded calendars are solved alongside other tasks but should remain pinned to their original time slots
 
 See the [CHANGELOG.md](CHANGELOG.md) for details on recent MCP-related changes.
 
+### Recent Improvements ✅
+
+- **Service Architecture Refactoring:** Complete service-oriented architecture with proper encapsulation and clean boundaries
+- **State Management:** Centralized state handling through dedicated StateService
+- **Handler Compliance:** Clean separation between UI handlers and business logic services
+- **Method Encapsulation:** Fixed all private method violations for better code maintainability
+
 ### Work in Progress
 
-- **…
-…
-…
+- **Constraint Enhancements:**
+  - Weekend respect (prevent scheduling on weekends)
+  - Working hours enforcement (8 AM - 6 PM business hours)
+  - Calendar task pinning (preserve original time slots for imported calendar events)
+- **Gradio UI overhaul:** Enhanced user experience and visual improvements
+- **Migration to Pydantic models:** Type-safe data validation and serialization
 - **Migrate from violation_analyzer to Timefold dedicated libraries**
 - **Include tests for all constraints using ConstraintVerifier**
 
 ### Future Work
 
+#### System Integration Roadmap
+Currently, the **Gradio web demo** and **MCP personal tool** operate as separate systems. As the project evolves, these will become **more integrated**, enabling:
+- **Unified scheduling engine** that can handle both team management and personal productivity in one interface
+- **Hybrid workflows** where personal tasks can be coordinated with team projects
+- **Cross-system data sharing** between web demo projects and personal MCP calendars
+- **Seamless switching** between team management and individual task planning modes
+
+#### Core Feature Enhancements
 - **RAG:** validation of task decomposition and estimation against industry relevant literature
 - **More granular task dependency:** representation of tasks in a tree instead of a list to allow overlap within projects, where feasible/convenient
 - **Input from GitHub issues:** instead of processing markdown directly, it creates a list by parsing issue
@@ -138,18 +208,6 @@ See the [CHANGELOG.md](CHANGELOG.md) for details on recent MCP-related changes.
 
 ---
 
-## Testing
-
-- **Run tests:**
-  ```bash
-  make test
-  ```
-
-- **Test files:**
-  Located in the `tests/` directory.
-
----
-
 ## Python Dependencies
 
 See `requirements.txt` for full list.
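The Architecture section above names a StateService for centralized job tracking and schedule storage. The actual `src/services/state_service.py` is listed in this commit but its body is not shown, so the following is only an illustrative sketch of the pattern, with all names and fields assumed:

```python
# Illustrative sketch of a centralized state service like the StateService the
# README describes; the real src/services/state_service.py may differ.
import threading
from typing import Any, Optional


class StateService:
    """Thread-safe store for job status and solved schedules."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._jobs: dict[str, dict[str, Any]] = {}

    def set_job(self, job_id: str, status: str, schedule: Optional[Any] = None) -> None:
        # Overwrite the job entry atomically so UI pollers never see a partial update
        with self._lock:
            self._jobs[job_id] = {"status": status, "schedule": schedule}

    def get_job(self, job_id: str) -> Optional[dict[str, Any]]:
        with self._lock:
            return self._jobs.get(job_id)


# Example: a solver job transitioning from "solving" to "solved"
state = StateService()
state.set_job("job-1", "solving")
state.set_job("job-1", "solved", schedule=["task-a", "task-b"])
```

A store like this is what lets the handlers stay stateless, matching the "Handler Compliance" item in the Recent Improvements list.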
src/agents/task_composer_agent.py
CHANGED
@@ -1,4 +1,4 @@
-import os, asyncio
+import os, asyncio
 from typing import Optional, List
 
 from llama_index.llms.nebius import NebiusLLM
@@ -19,9 +19,11 @@ from agents.task_processing import (
     log_task_duration_breakdown,
     log_total_time,
 )
+from utils.logging_config import setup_logging, get_logger
 
-logging…
-…
+# Initialize logging
+setup_logging()
+logger = get_logger(__name__)
 
 
 from domain import AgentsConfig, AGENTS_CONFIG
src/agents/task_processing.py
CHANGED
@@ -1,9 +1,11 @@
-import re
+import re
 
 from utils.markdown_analyzer import MarkdownAnalyzer
+from utils.logging_config import setup_logging, get_logger
 
-logging…
-…
+# Initialize logging
+setup_logging()
+logger = get_logger(__name__)
 
 
 ### MARKDOWN UTILS ###
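Both agent modules (and `src/app.py` below) adopt the same `setup_logging()` / `get_logger(__name__)` pattern from the new `src/utils/logging_config.py`. That module's body is not included in this view, so the following is only a sketch of what such a helper typically contains, under the assumption that it wraps the standard `logging` package:

```python
# Minimal sketch of a logging_config module matching the calls in the diff
# (setup_logging, get_logger); the real src/utils/logging_config.py may differ.
import logging
import sys


def setup_logging(level: str = "INFO") -> None:
    """Configure the root logger once; safe to call from several modules."""
    root = logging.getLogger()
    if not root.handlers:  # avoid adding duplicate handlers on repeated calls
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
        )
        root.addHandler(handler)
    root.setLevel(getattr(logging, level.upper(), logging.INFO))


def get_logger(name: str) -> logging.Logger:
    """Per-module logger, namespaced by __name__."""
    return logging.getLogger(name)


# Usage mirroring the module preambles added in this commit
setup_logging("DEBUG")
logger = get_logger(__name__)
logger.debug("logging configured")
```

Guarding handler creation is what makes it safe for every module to call `setup_logging()` at import time, as the diffs do.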
src/app.py
CHANGED
@@ -1,8 +1,11 @@
-import os, argparse
+import os, argparse
 import gradio as gr
 
-…
+from utils.logging_config import setup_logging, get_logger
 
+# Initialize logging early - will be reconfigured based on debug mode
+setup_logging()
+logger = get_logger(__name__)
 
 from utils.load_secrets import load_secrets
 
@@ -33,6 +36,18 @@ last_attached_file = None
 
 
 def app(debug: bool = False):
+    """Main application function with optional debug mode"""
+
+    # Configure logging based on debug mode
+    if debug:
+        os.environ["YUGA_DEBUG"] = "true"
+        setup_logging("DEBUG")
+        logger.info("Application started in DEBUG mode")
+    else:
+        os.environ["YUGA_DEBUG"] = "false"
+        setup_logging("INFO")
+        logger.info("Application started in normal mode")
+
     with gr.Blocks() as demo:
         gr.Markdown(
             """
@@ -41,254 +56,308 @@ def app(debug: bool = False):
         """
     )
 
-…
-        return gr.get_state().server_url + "/gradio_api/mcp/sse"
-    except:
-        return "http://localhost:7860/gradio_api/mcp/sse"
 
-…
-    f"""
-    This is a demo of the Yuga Planner system.
-
-    To use as an MCP server:
-    1. Register the MCP server with your client using the URL:
-    ```
-    {get_server_url()}
-    ```
-    2. Call the tool from your client. Example:
-    ```
-    use yuga planner tool @tests/data/calendar.ics
-    Task Description: Create a new AWS VPC
-    ```
-
-    """
-    )
 
-    with gr.Tab("Task Scheduling"):
-        gr.Markdown("### SWE Team Task Scheduling Demo")
 
-…
-        ## Instructions
-        1. Choose a project source - either upload your own project file(s) or select from our mock projects
-        2. Click 'Load Data' to parse, decompose, and estimate tasks
-        3. Click 'Solve' to generate an optimal schedule based on employee skills and availability
-        4. Review the results in the tables below
-        """
-        )
 
-…
-            maximum=365,
-            step=1,
-            precision=0,
-        )
 
-…
-        file_upload = gr.File(
-            label="Upload Project Files (Markdown)",
-            file_types=[".md"],
-            file_count="multiple",
-        )
 
-        # Mock projects dropdown (initially hidden)
-        with gr.Group(visible=False) as mock_projects_group:
-            # Get mock project names from ProjectService
-            available_projects = MockProjectService.get_available_project_names()
-            mock_project_dropdown = gr.Dropdown(
-                choices=available_projects,
-                label="Select Mock Projects (multiple selection allowed)",
-                value=[available_projects[0]] if available_projects else [],
-                multiselect=True,
-            )
 
-…
-            label="…
-…
         )
 
-        project_source.change(
-            toggle_visibility,
-            inputs=[project_source],
-            outputs=[file_upload_group, mock_projects_group],
         )
 
-…
         )
 
-…
-            job_id_state,
-            status_text,
-            llm_output_state,
-            log_terminal,
-        ]
-
-        # Outputs for load_data that also enables solve button
-        load_outputs = outputs + [solve_btn]
-
-        # Create wrapper function to pass debug flag to auto_poll
-        async def auto_poll_with_debug(job_id, llm_output):
-            return await auto_poll(job_id, llm_output, debug=debug)
-
-        # Timer for polling (not related to state)
-        timer = gr.Timer(2, active=False)
-        timer.tick(
-            auto_poll_with_debug,
-            inputs=[job_id_state, llm_output_state],
-            outputs=outputs,
         )
 
-…
             project_source,
             file_obj,
             mock_projects,
             employee_count,
             days_in_schedule,
             llm_output,
-…
         ):
-            project_source,
-            file_obj,
-            mock_projects,
-            employee_count,
-            days_in_schedule,
-            llm_output,
-            debug=debug,
-            progress=progress,
-        ):
-            yield result
-
-        # Use state as both input and output
-        load_btn.click(
-            load_data_with_debug,
-            inputs=[
-                project_source,
-                file_upload,
-                mock_project_dropdown,
-                employee_count,
-                days_in_schedule,
-                llm_output_state,
-            ],
-            outputs=load_outputs,
-            api_name="load_data",
-        )
 
-…
-            outputs=outputs,
-        ).then(start_timer, inputs=[job_id_state, llm_output_state], outputs=timer)
 
-…
         debug_set_btn = gr.Button("Debug Set State")
         debug_show_btn = gr.Button("Debug Show State")
 
-…
-    # Register the MCP tool as an API endpoint
-    gr.api(process_message_and_attached_file)
 
-…
+        _draw_info_page(debug)
+        # _draw_hackathon_page(debug)
 
+        # Register the MCP tool as an API endpoint
+        gr.api(process_message_and_attached_file)
 
+    return demo
 
 
+def _draw_info_page(debug: bool = False):
+    with gr.Tab("Information"):
 
+        def get_server_url():
+            try:
+                return gr.get_state().server_url + "/gradio_api/mcp/sse"
+            except:
+                return "http://localhost:7860/gradio_api/mcp/sse"
 
+        gr.Markdown(
+            f"""
+            This is a demo of the Yuga Planner system.
+
+            To use as an MCP server:
+            1. Register the MCP server with your client using the URL:
+            ```
+            {get_server_url()}
+            ```
+            2. Call the tool from your client. Example:
+            ```
+            use yuga planner tool @tests/data/calendar.ics
+            Task Description: Create a new AWS VPC
+            ```
+            """
+        )
 
 
+def _draw_hackathon_page(debug: bool = False):
+    with gr.Tab("Hackathon Agent Demo"):
+        gr.Markdown("### SWE Team Task Scheduling Demo")
+
+        gr.Markdown(
+            """
+            ## Instructions
+            1. Choose a project source - either upload your own project file(s) or select from our mock projects
+            2. Click 'Load Data' to parse, decompose, and estimate tasks
+            3. Click 'Solve' to generate an optimal schedule based on employee skills and availability
+            4. Review the results in the tables below
+            """
+        )
+
+        # Project source selector
+        project_source = gr.Radio(
+            choices=["Upload Project Files", "Use Mock Projects"],
+            value="Upload Project Files",
+            label="Project Source",
+        )
 
+        # Configuration parameters
+        with gr.Row():
+            employee_count = gr.Number(
+                label="Number of Employees",
+                value=12,
+                minimum=1,
+                maximum=100,
+                step=1,
+                precision=0,
+            )
+            days_in_schedule = gr.Number(
+                label="Days in Schedule",
+                value=365,
+                minimum=1,
+                maximum=365,
+                step=1,
+                precision=0,
            )
 
+        # File upload component (initially visible)
+        with gr.Group(visible=True) as file_upload_group:
+            file_upload = gr.File(
+                label="Upload Project Files (Markdown)",
+                file_types=[".md"],
+                file_count="multiple",
            )
 
+        # Mock projects dropdown (initially hidden)
+        with gr.Group(visible=False) as mock_projects_group:
+            # Get mock project names from ProjectService
+            available_projects = MockProjectService.get_available_project_names()
+            mock_project_dropdown = gr.Dropdown(
+                choices=available_projects,
+                label="Select Mock Projects (multiple selection allowed)",
+                value=[available_projects[0]] if available_projects else [],
+                multiselect=True,
            )
 
+            # Accordion for viewing mock project content
+            with gr.Accordion("📋 Project Content Preview", open=False):
+                mock_project_content_accordion = gr.Textbox(
+                    label="Project Content",
+                    interactive=False,
+                    lines=15,
+                    max_lines=20,
+                    show_copy_button=True,
+                    placeholder="Select projects above and expand this section to view content...",
+                )
 
+            # Auto-update content when projects change
+            mock_project_dropdown.change(
+                show_mock_project_content,
+                inputs=[mock_project_dropdown],
+                outputs=[mock_project_content_accordion],
+            )
 
+        # Log Terminal - Always visible for streaming logs
+        gr.Markdown("## Live Log Terminal")
 
+        # Show debug status
+        if debug:
+            gr.Markdown(
+                "🐛 **Debug Mode Enabled** - Showing detailed logs including DEBUG messages"
            )
+        else:
+            gr.Markdown(
+                "ℹ️ **Normal Mode** - Showing INFO, WARNING, and ERROR messages"
+            )
+
+        log_terminal = gr.Textbox(
+            label="Processing Logs",
+            interactive=False,
+            lines=8,
+            max_lines=15,
+            show_copy_button=True,
+            placeholder="Logs will appear here during data loading and solving...",
+        )
+
+        # Toggle visibility based on project source selection
+        def toggle_visibility(choice):
+            if choice == "Upload Project Files":
+                return gr.update(visible=True), gr.update(visible=False)
+            else:
+                return gr.update(visible=False), gr.update(visible=True)
+
+        project_source.change(
+            toggle_visibility,
+            inputs=[project_source],
+            outputs=[file_upload_group, mock_projects_group],
+        )
+
+        # State for LLM output, persists per session
+        llm_output_state = gr.State(value=None)
+        job_id_state = gr.State(value=None)
+        status_text = gr.Textbox(
+            label="Solver Status",
+            interactive=False,
+            lines=8,
+            max_lines=20,
+            show_copy_button=True,
+        )
 
+        with gr.Row():
+            load_btn = gr.Button("Load Data")
+            solve_btn = gr.Button("Solve", interactive=False)  # Initially disabled
+
+        gr.Markdown("## Employees")
+        employees_table = gr.Dataframe(label="Employees", interactive=False)
+
+        gr.Markdown("## Tasks")
+        schedule_table = gr.Dataframe(label="Tasks Table", interactive=False)
+
+        # Outputs: always keep state as last output
+        outputs = [
+            employees_table,
+            schedule_table,
+            job_id_state,
+            status_text,
+            llm_output_state,
+            log_terminal,
+        ]
+
+        # Outputs for load_data that also enables solve button
+        load_outputs = outputs + [solve_btn]
+
+        # Create wrapper function to pass debug flag to auto_poll
+        async def auto_poll_with_debug(job_id, llm_output):
+            result = await auto_poll(job_id, llm_output, debug=debug)
+            # auto_poll now returns 6 values including logs
+            return result
+
+        # Timer for polling (not related to state)
+        timer = gr.Timer(2, active=False)
+        timer.tick(
+            auto_poll_with_debug,
+            inputs=[job_id_state, llm_output_state],
+            outputs=outputs,  # This now includes log_terminal updates
+        )
+
+        # Create wrapper function to pass debug flag to load_data
+        async def load_data_with_debug(
+            project_source,
+            file_obj,
+            mock_projects,
+            employee_count,
+            days_in_schedule,
+            llm_output,
+            progress=gr.Progress(),
+        ):
+            async for result in load_data(
                 project_source,
                 file_obj,
                 mock_projects,
                 employee_count,
                 days_in_schedule,
                 llm_output,
+                debug=debug,
+                progress=progress,
             ):
+                yield result
 
+        # Use state as both input and output
+        load_btn.click(
+            load_data_with_debug,
+            inputs=[
+                project_source,
+                file_upload,
+                mock_project_dropdown,
+                employee_count,
+                days_in_schedule,
+                llm_output_state,
+            ],
+            outputs=load_outputs,
+            api_name="load_data",
+        )
 
+        # Create wrapper function to pass debug flag to show_solved
+        async def show_solved_with_debug(state_data, job_id):
+            return await show_solved(state_data, job_id, debug=debug)
 
+        solve_btn.click(
+            show_solved_with_debug,
+            inputs=[llm_output_state, job_id_state],
+            outputs=outputs,
+        ).then(start_timer, inputs=[job_id_state, llm_output_state], outputs=timer)
 
+        if debug:
+            gr.Markdown("### 🐛 Debug Controls")
+            gr.Markdown(
+                "These controls help test the centralized logging system and state management."
+            )
+
+            def debug_set_state(state):
+                logger.info("DEBUG: Setting state to test_value")
+                logger.debug("DEBUG: Detailed state operation in progress")
+                return "Debug: State set!", "test_value"
+
+            def debug_show_state(state):
+                logger.info("DEBUG: Current state is %s", state)
+                logger.debug("DEBUG: State retrieval operation completed")
+                return f"Debug: Current state: {state}", gr.update()
 
+            def debug_test_logging():
+                """Test all logging levels for UI demonstration"""
+                logger.debug("🐛 DEBUG: This is a debug message")
+                logger.info("ℹ️ INFO: This is an info message")
+                logger.warning("⚠️ WARNING: This is a warning message")
+                logger.error("❌ ERROR: This is an error message")
+                return "Generated test log messages at all levels"
 
+            debug_out = gr.Textbox(label="Debug Output")
+
+            with gr.Row():
                debug_set_btn = gr.Button("Debug Set State")
                debug_show_btn = gr.Button("Debug Show State")
+                debug_log_btn = gr.Button("Test Log Levels")
 
+            debug_set_btn.click(
+                debug_set_state,
+                inputs=[llm_output_state],
+                outputs=[debug_out, llm_output_state],
+            )
+            debug_show_btn.click(
+                debug_show_state,
+                inputs=[llm_output_state],
+                outputs=[debug_out, gr.State()],
+            )
+            debug_log_btn.click(
+                debug_test_logging,
+                inputs=[],
+                outputs=[debug_out],
+            )
 
+def set_test_state():
+    logger.debug("Setting state to test_value")
+    app_state.set("test_key", "test_value")
+    return "State set to test_value"
+
+
+def get_test_state():
+    state = app_state.get("test_key", "No state found")
+    logger.debug("Current state is %s", state)
+    return f"Current state: {state}"
 
 
 if __name__ == "__main__":
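The `gr.Timer(2, active=False)` plus `auto_poll_with_debug` wiring above polls the solver job every two seconds until a result arrives, then writes it into the output components. Stripped of Gradio, the control flow reduces to a plain polling loop; the job store and status strings below are hypothetical stand-ins for the real `auto_poll` backend:

```python
# Plain-Python sketch of the 2-second polling pattern that gr.Timer + auto_poll
# implement in the diff; the job store and statuses here are hypothetical.
import time

# A fake backend: each poll of "job-1" yields the next solver status
jobs = {"job-1": iter(["solving", "solving", "solved"])}


def poll_job(job_id: str) -> str:
    """One timer tick: fetch the job's current status."""
    return next(jobs[job_id])


def wait_until_solved(job_id: str, interval: float = 0.0) -> str:
    """Keep polling until the solver reports a final schedule."""
    status = poll_job(job_id)
    while status != "solved":
        time.sleep(interval)  # the UI uses a 2-second gr.Timer tick instead
        status = poll_job(job_id)
    return status
```

In the app the loop is inverted: the timer fires the poll and the UI components are the loop state, which is why `outputs=outputs` includes `job_id_state` and `log_terminal`.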
src/constraint_solvers/timetable/constraints.py
CHANGED
```diff
@@ -118,14 +118,17 @@ def no_overlapping_tasks(constraint_factory: ConstraintFactory):
     return (
         constraint_factory.for_each_unique_pair(
             Task,
-            Joiners.equal(lambda task: task.employee
+            Joiners.equal(lambda task: task.employee),  # Same employee
             Joiners.overlapping(
                 lambda task: task.start_slot,
                 lambda task: task.start_slot + task.duration_slots,
             ),
         )
+        .filter(
+            lambda task1, task2: task1.employee is not None
+        )  # Only check assigned tasks
         .penalize(HardSoftDecimalScore.ONE_HARD, get_slot_overlap)
-        .as_constraint("No overlapping tasks")
+        .as_constraint("No overlapping tasks for same employee")
     )


@@ -194,10 +197,11 @@ def maintain_project_task_order(constraint_factory: ConstraintFactory):
         .join(Task)
         .filter(tasks_violate_sequence_order)
         .penalize(
-            HardSoftDecimalScore.
-            lambda task1, task2:
+            HardSoftDecimalScore.ONE_HARD,  # Make this a HARD constraint
+            lambda task1, task2: task1.start_slot
+            + task1.duration_slots
+            - task2.start_slot,
+        )  # Penalty proportional to overlap
         .as_constraint("Project task sequence order")
     )
```
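The overlap constraint above penalizes pairs of tasks for the same employee whose slot intervals intersect. The helper `get_slot_overlap` is referenced but not shown in this diff, so the following is only a plain-Python sketch of the interval arithmetic such a helper would plausibly implement, with each task occupying `[start_slot, start_slot + duration_slots)` in 30-minute slots:

```python
# Hypothetical stand-in for the repo's get_slot_overlap helper (not shown in
# the diff): how many 30-minute slots two half-open intervals share.

def slot_overlap(start_a: int, dur_a: int, start_b: int, dur_b: int) -> int:
    """Number of slots in which [start_a, start_a+dur_a) and [start_b, start_b+dur_b) overlap."""
    overlap = min(start_a + dur_a, start_b + dur_b) - max(start_a, start_b)
    return max(0, overlap)

# Slots [0, 4) and [2, 6) share slots 2 and 3.
print(slot_overlap(0, 4, 2, 4))  # → 2
# Disjoint intervals produce zero penalty.
print(slot_overlap(0, 2, 4, 2))  # → 0
```

Using the overlap size (rather than a flat penalty of 1) makes the hard score proportional to how badly two assignments collide, which gives the solver a gradient toward less-overlapping moves.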
src/constraint_solvers/timetable/domain.py
CHANGED
```diff
@@ -56,6 +56,8 @@ class Task:
     project_id: str = ""
     # Sequence number within the project to maintain original task order
     sequence_number: int = 0
+    # Whether this task is pinned to its current assignment (for calendar events)
+    pinned: Annotated[bool, PlanningPin] = False
     employee: Annotated[
         Employee | None, PlanningVariable(value_range_provider_refs=["employeeRange"])
     ] = None
@@ -69,6 +71,7 @@ class Task:
             "required_skill": self.required_skill,
             "project_id": self.project_id,
             "sequence_number": self.sequence_number,
+            "pinned": self.pinned,
             "employee": self.employee.to_dict() if self.employee else None,
         }

@@ -82,6 +85,7 @@ class Task:
             required_skill=d["required_skill"],
             project_id=d.get("project_id", ""),
             sequence_number=d.get("sequence_number", 0),
+            pinned=d.get("pinned", False),
            employee=Employee.from_dict(d["employee"]) if d["employee"] else None,
        )
```
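The new `pinned` field is read back with `d.get("pinned", False)`, so payloads serialized before the field existed still deserialize cleanly. A stripped-down sketch of that round-trip (the real `Task` carries Timefold planning annotations such as `PlanningPin`, which are omitted here; class and field names below are illustrative):

```python
# Minimal dataclass sketch of the to_dict/from_dict round-trip with the
# backward-compatible `pinned` default. Not the repo's actual Task class.
from dataclasses import dataclass


@dataclass
class TaskSketch:
    id: str
    sequence_number: int = 0
    pinned: bool = False

    def to_dict(self) -> dict:
        return {
            "id": self.id,
            "sequence_number": self.sequence_number,
            "pinned": self.pinned,
        }

    @classmethod
    def from_dict(cls, d: dict) -> "TaskSketch":
        return cls(
            id=d["id"],
            sequence_number=d.get("sequence_number", 0),
            pinned=d.get("pinned", False),  # old payloads lack the key
        )


legacy = {"id": "T1"}  # serialized before `pinned` existed
print(TaskSketch.from_dict(legacy).pinned)  # → False
```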
src/constraint_solvers/timetable/solver.py
CHANGED
```diff
@@ -16,7 +16,7 @@ solver_config: SolverConfig = SolverConfig(
     score_director_factory_config=ScoreDirectorFactoryConfig(
         constraint_provider_function=define_constraints
     ),
-    termination_config=TerminationConfig(spent_limit=Duration(seconds=30)),
+    # termination_config=TerminationConfig(spent_limit=Duration(seconds=30)),  # Commented out to allow unlimited solving time
 )

 solver_manager: SolverManager = SolverManager.create(
```
src/domain.py
CHANGED
```diff
@@ -1,4 +1,4 @@
-import os
+import os, warnings
 from dataclasses import dataclass

 # =========================
@@ -78,7 +78,7 @@ class AgentsConfig:
     nebius_model: str

     # Prompt templates
-    task_splitter_prompt: str = "Split the following task into an accurate and concise tree of required subtasks:\n{{query}}\n\nYour output must be a markdown bullet list, with no additional comments.\n\n"
+    task_splitter_prompt: str = "Split the following task into an accurate and concise tree of required subtasks:\n{{query}}\n\nAim for 3 to 15 subtasks.\n\nYour output must be a markdown bullet list, with no additional comments.\n\n"
     task_evaluator_prompt: str = "Evaluate the elapsed time, in 30 minute units, for a competent human to complete the following task:\n{{query}}\n\nYour output must be a one integer, with no additional comments.\n\n"
     task_deps_matcher_prompt: str = "Given the following task:\n{{task}}\n\nAnd these available skills:\n{{skills}}\n\nIn this context:\n{{context}}\n\nSelect the most appropriate skill to complete this task. Return only the skill name as a string, with no additional comments or formatting.\n\n"

@@ -95,12 +95,10 @@ class AgentsConfig:
         """Validate required configuration"""
         if not self.nebius_model or not self.nebius_api_key:
             if self.nebius_model == "dev-model" and self.nebius_api_key == "dev-key":
-                # Development mode - just warn
-                import warnings
-
                 warnings.warn(
                     "Using development defaults for NEBIUS_MODEL and NEBIUS_API_KEY"
                 )
+
             else:
                 raise ValueError(
                     "NEBIUS_MODEL and NEBIUS_API_KEY environment variables must be set"
```
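The prompt templates above use `{{query}}`-style placeholders. The diff does not show how the repo substitutes them, so the helper below is only an assumed sketch of the usual fill-in step (a plain string replacement, one pass per named placeholder):

```python
# Hypothetical placeholder renderer for the {{name}}-style prompt templates;
# the repo's actual substitution mechanism is not shown in this diff.

def render_prompt(template: str, **values: str) -> str:
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template


template = "Split the following task into subtasks:\n{{query}}\n"
print(render_prompt(template, query="Build a REST API"))
```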
src/factory/data_generators.py
CHANGED
```diff
@@ -23,10 +23,13 @@ LAST_NAMES = (


 def generate_employees(
-    parameters: TimeTableDataParameters,
+    parameters: TimeTableDataParameters,
+    random: Random,
+    required_skills_needed: set[str] = None,
 ) -> list[Employee]:
     """
     Generates a list of Employee objects with random names and skills.
+    Ensures that collectively the employees have all required_skills_needed.
     """
     name_permutations = [
         f"{first_name} {last_name}"
@@ -36,19 +39,60 @@ def generate_employees(
     random.shuffle(name_permutations)

     employees = []
-    for i in range(parameters.employee_count):
-        (count,) = random.choices(
-            population=counts(parameters.optional_skill_distribution),
-            weights=weights(parameters.optional_skill_distribution),
-        )
-
-
+
+    # If specific skills are needed, ensure they're covered
+    if required_skills_needed:
+        skills_needed = set(required_skills_needed)
+
+        # For single employee (MCP case), give them all needed skills plus some random ones
+        if parameters.employee_count == 1:
+            all_available_skills = list(parameters.skill_set.required_skills) + list(
+                parameters.skill_set.optional_skills
+            )
+            # Give all available skills to the single employee to handle any task
+            employees.append(
+                Employee(name=name_permutations[0], skills=set(all_available_skills))
+            )
+            return employees
+
+        # For multiple employees, distribute needed skills and add random skills
+        for i in range(parameters.employee_count):
+            (count,) = random.choices(
+                population=counts(parameters.optional_skill_distribution),
+                weights=weights(parameters.optional_skill_distribution),
+            )
+            count = min(count, len(parameters.skill_set.optional_skills))
+
+            skills = []
+
+            # Ensure each employee gets at least one required skill
+            skills += random.sample(parameters.skill_set.required_skills, 1)
+
+            # Add random optional skills
+            skills += random.sample(parameters.skill_set.optional_skills, count)

+            # If there are still skills needed and this is one of the first employees,
+            # ensure they get some of the needed skills
+            if skills_needed and i < len(skills_needed):
+                needed_skill = skills_needed.pop()
+                if needed_skill not in skills:
+                    skills.append(needed_skill)
+
+            employees.append(Employee(name=name_permutations[i], skills=set(skills)))
+
+    else:
+        # Original random generation when no specific skills are needed
+        for i in range(parameters.employee_count):
+            (count,) = random.choices(
+                population=counts(parameters.optional_skill_distribution),
+                weights=weights(parameters.optional_skill_distribution),
+            )
+            count = min(count, len(parameters.skill_set.optional_skills))
+
+            skills = []
+            skills += random.sample(parameters.skill_set.optional_skills, count)
+            skills += random.sample(parameters.skill_set.required_skills, 1)
+            employees.append(Employee(name=name_permutations[i], skills=set(skills)))

     return employees

@@ -185,11 +229,14 @@ def generate_tasks_from_calendar(
             duration_slots = max(1, duration_minutes // 30)
         else:
             duration_slots = 2  # Default 1 hour
+
         # Randomize required_skill as in generate_tasks
         if random.random() >= 0.5:
             required_skill = random.choice(parameters.skill_set.required_skills)
+
         else:
             required_skill = random.choice(parameters.skill_set.optional_skills)
+
         tasks.append(
             Task(
                 id=next(ids),
@@ -199,8 +246,10 @@ def generate_tasks_from_calendar(
                 required_skill=required_skill,
             )
         )
+
         except Exception:
             continue
+
     return tasks


@@ -292,9 +341,3 @@ def tasks_from_agent_output(agent_output, parameters, project_id: str = ""):
         )
     )
     return tasks
-
-
-def skills_from_parameters(parameters: TimeTableDataParameters) -> list[str]:
-    return list(parameters.skill_set.required_skills) + list(
-        parameters.skill_set.optional_skills
-    )
```
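The docstring in the refactored `generate_employees` promises that, collectively, the generated team covers every skill in `required_skills_needed`. A small stdlib sketch of that coverage invariant, useful as a mental model or a property test (names here are illustrative, not from the repo):

```python
# Coverage check corresponding to generate_employees' guarantee: the union of
# all employees' skill sets must contain every required skill.

def team_covers(employee_skills: list[set], needed: set) -> bool:
    """True if the union of the employees' skills includes all needed skills."""
    covered = set().union(*employee_skills) if employee_skills else set()
    return needed <= covered


print(team_covers([{"Python", "SQL"}, {"DevOps"}], {"SQL", "DevOps"}))  # → True
print(team_covers([{"Python"}], {"SQL"}))  # → False
```

In the single-employee MCP case the function satisfies this trivially by granting the one employee every available skill; in the multi-employee case it pops one needed skill per early employee.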
src/factory/data_provider.py
CHANGED
```diff
@@ -16,9 +16,11 @@ from agents.task_composer_agent import TaskComposerAgent

 from constraint_solvers.timetable.domain import *

-import logging
+from utils.logging_config import setup_logging, get_logger

+# Initialize logging
+setup_logging()
+logger = get_logger(__name__)

 # =========================
 # CONSTANTS
@@ -102,8 +104,7 @@ async def generate_agent_data(
     employees: list[Employee] = generate_employees(parameters, randomizer)
     total_slots: int = parameters.days_in_schedule * SLOTS_PER_DAY

-
-    logging.info("FILE OBJECT: %s %s", file, type(file))
+    logger.debug("Processing file object: %s (type: %s)", file, type(file))

     match file:
         case file if hasattr(file, "read"):
@@ -158,12 +159,44 @@ async def generate_mcp_data(

     start_date: date = earliest_monday_on_or_after(date.today())
     randomizer: Random = Random(parameters.random_seed)
-    employees: list[Employee] = generate_employees(parameters, randomizer)
     total_slots: int = parameters.days_in_schedule * SLOTS_PER_DAY

+    # --- CALENDAR TASKS ---
+    calendar_tasks = generate_tasks_from_calendar(
+        parameters, randomizer, calendar_entries
+    )
+    # Assign project_id 'EXISTING' to all calendar tasks
+    for t in calendar_tasks:
+        t.sequence_number = 0  # will be overwritten later
+        t.project_id = "EXISTING"
+
+    # --- LLM TASKS ---
+    llm_tasks = []
+    if user_message:
+        from factory.data_provider import run_task_composer_agent
+
+        agent_output = await run_task_composer_agent(user_message, parameters)
+        llm_tasks = tasks_from_agent_output(agent_output, parameters, "PROJECT")
+        for t in llm_tasks:
+            t.sequence_number = 0  # will be overwritten later
+            t.project_id = "PROJECT"
+
+    # --- ANALYZE REQUIRED SKILLS ---
+    all_tasks = calendar_tasks + llm_tasks
+    required_skills_needed = set()
+    for task in all_tasks:
+        if hasattr(task, "required_skill") and task.required_skill:
+            required_skills_needed.add(task.required_skill)
+
+    # --- GENERATE EMPLOYEES WITH REQUIRED SKILLS ---
+    employees: list[Employee] = generate_employees(
+        parameters, randomizer, required_skills_needed
+    )
+
     # Set the single employee's name to 'Chatbot User'
     if len(employees) == 1:
         employees[0].name = "Chatbot User"
+
     else:
         raise ValueError("MCP data provider only supports one employee")

@@ -173,16 +206,11 @@ async def generate_mcp_data(
         emp.undesired_dates.clear()
         emp.desired_dates.clear()

-    # ---
-        parameters, randomizer, calendar_entries
-    )
-    # Assign project_id 'EXISTING' to all calendar tasks
-    for t in calendar_tasks:
-        t.sequence_number = 0  # will be overwritten later
+    # --- ASSIGN EMPLOYEES TO TASKS ---
+    for t in all_tasks:
         t.employee = employees[0]
-
-    # Create
+
+    # Create DataFrames for debugging
     calendar_df = pd.DataFrame(
         [
             {
@@ -199,20 +227,8 @@ async def generate_mcp_data(
         ]
     )

-
-    print(calendar_df)
+    logger.debug("Generated calendar tasks DataFrame:\n%s", calendar_df)

-    # --- LLM TASKS ---
-    llm_tasks = []
-    if user_message:
-        from factory.data_provider import run_task_composer_agent
-
-        agent_output = await run_task_composer_agent(user_message, parameters)
-        llm_tasks = tasks_from_agent_output(agent_output, parameters, "PROJECT")
-        for t in llm_tasks:
-            t.sequence_number = 0  # will be overwritten later
-            t.employee = employees[0]
-            t.project_id = "PROJECT"
     llm_df = pd.DataFrame(
         [
             {
@@ -229,12 +245,9 @@ async def generate_mcp_data(
         ]
     )

-
-    print(llm_df)
+    logger.debug("Generated LLM tasks DataFrame:\n%s", llm_df)

-    # ---
-    all_tasks = calendar_tasks + llm_tasks
-    # Assign sequence_number per project group
+    # --- ASSIGN SEQUENCE NUMBERS ---
     existing_seq = 0
     project_seq = 0
     for t in all_tasks:
@@ -250,9 +263,11 @@ async def generate_mcp_data(
         tasks=all_tasks,
         schedule_info=ScheduleInfo(total_slots=total_slots),
     )
+
     final_df = schedule_to_dataframe(schedule)
-
-
+
+    logger.debug("Final schedule DataFrame (MCP-aligned):\n%s", final_df)
+
     return final_df


@@ -265,18 +280,26 @@ async def run_task_composer_agent(
     )
     context = f"Project scheduling for {parameters.employee_count} employees over {parameters.days_in_schedule} days"

-
-
-
+    logger.info(
+        "Starting task composer workflow - timeout: %ds, input length: %d chars",
+        AGENTS_CONFIG.workflow_timeout,
+        len(input_str),
+    )
+
+    logger.debug("Available skills: %s", available_skills)

     try:
         agent_output = await agent.run_workflow(
             query=input_str, skills=available_skills, context=context
         )
-
-
+
+        logger.info(
+            "Task composer workflow completed successfully - generated %d tasks",
+            len(agent_output),
         )
+
         return agent_output
+
     except Exception as e:
-
+        logger.error("Task composer workflow failed: %s", e)
         raise
```
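The "ASSIGN SEQUENCE NUMBERS" step keeps two independent counters, one per project group (`EXISTING` calendar tasks and `PROJECT` LLM tasks), preserving the original order within each group. The loop body is truncated in this diff view, so the sketch below assumes the two-counter branching that the variable names `existing_seq` and `project_seq` suggest:

```python
# Assumed reconstruction of the per-project-group sequence numbering step;
# tasks are represented as plain dicts here instead of the repo's Task class.

def assign_sequence_numbers(tasks: list[dict]) -> None:
    existing_seq = 0
    project_seq = 0
    for t in tasks:
        if t["project_id"] == "EXISTING":
            t["sequence_number"] = existing_seq
            existing_seq += 1
        else:
            t["sequence_number"] = project_seq
            project_seq += 1


tasks = [
    {"project_id": "EXISTING"},
    {"project_id": "PROJECT"},
    {"project_id": "EXISTING"},
    {"project_id": "PROJECT"},
]
assign_sequence_numbers(tasks)
print([t["sequence_number"] for t in tasks])  # → [0, 0, 1, 1]
```

Keeping the counters separate lets the `maintain_project_task_order` constraint enforce ordering within the generated project without being distorted by interleaved calendar events.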
src/handlers.py
CHANGED
```diff
@@ -1,16 +1,20 @@
-import
-from typing import Tuple, Dict, List, Optional
+from typing import Tuple

 import pandas as pd
 import gradio as gr

-from
+from utils.logging_config import setup_logging, get_logger, is_debug_enabled
+
+# Initialize logging
+setup_logging()
+logger = get_logger(__name__)

 from services import (
     LoggingService,
     ScheduleService,
     DataService,
     MockProjectService,
+    StateService,
 )

 # Global logging service instance for UI streaming
@@ -21,16 +25,18 @@ async def show_solved(
     state_data, job_id: str, debug: bool = False
 ) -> Tuple[pd.DataFrame, pd.DataFrame, str, str, object, str]:
     """Handler for solving a schedule from UI state data"""
-    #
-
+    # Ensure log streaming is set up and respects debug mode
+    _ensure_log_streaming_setup(debug)

-
+    logger.info(
+        "show_solved called with state_data type: %s, job_id: %s",
+        type(state_data),
+        job_id,
     )

     # Check if data has been loaded
     if not state_data:
+        logger.warning("No data loaded - cannot solve schedule")
         return (
             gr.update(),
             gr.update(),
@@ -40,7 +46,7 @@ async def show_solved(
             logging_service.get_streaming_logs(),
         )

-
+    logger.info("State data found, proceeding with solve...")

     try:
         # Use the schedule service to solve the schedule
@@ -54,7 +60,7 @@ async def show_solved(
             state_data, job_id, debug=debug
         )

-
+        logger.info("Solver completed successfully, returning results")

         return (
             emp_df,
@@ -65,7 +71,7 @@ async def show_solved(
             logging_service.get_streaming_logs(),
         )
     except Exception as e:
-
+        logger.error("Error in show_solved: %s", e)
         return (
             gr.update(),
             gr.update(),
@@ -95,12 +101,14 @@ async def load_data(
     Handler for data loading from either file uploads or mock projects - streaming version
     Yields intermediate updates for real-time progress
     """
-    #
-
+    # Ensure log streaming is set up and clear previous logs
+    _ensure_log_streaming_setup(debug)
     logging_service.clear_streaming_logs()

     # Initial log message
-
+    logger.info("Starting data loading process...")
+    if debug:
+        logger.debug("Debug mode enabled for data loading")

     # Yield initial state
     yield (
@@ -131,7 +139,9 @@ async def load_data(
         )

         # Store schedule for later use
-
+        StateService.store_solved_schedule(
+            job_id, None
+        )  # Will be populated when solved

         # Final yield with complete results
         yield (
@@ -145,7 +155,7 @@ async def load_data(
         )

     except Exception as e:
-
+        logger.error("Error loading data: %s", e)
         yield (
             gr.update(),
             gr.update(),
@@ -181,25 +191,25 @@ def poll_solution(
             job_id,
             status_message,
             schedule,
-
+            logging_service.get_streaming_logs(),  # Include logs in polling updates
         )

     except Exception as e:
-
+        logger.error("Error in poll_solution: %s", e)
         return (
             gr.update(),
             gr.update(),
             job_id,
             f"Error polling solution: {str(e)}",
             schedule,
-
+            logging_service.get_streaming_logs(),  # Include logs even on error
         )


 async def auto_poll(
     job_id: str, llm_output: dict, debug: bool = False
 ) -> Tuple[pd.DataFrame, pd.DataFrame, str, str, dict, str]:
-    """Handler for
+    """Handler for auto-polling a solution"""
     try:
         (
             emp_df,
@@ -210,21 +220,40 @@ async def auto_poll(
         ) = await ScheduleService.auto_poll(job_id, llm_output, debug)

         return (
-            emp_df,
-            task_df,
-            job_id,
-            status_message,
-            llm_output,
-            logging_service.get_streaming_logs(),  #
+            emp_df,
+            task_df,
+            job_id,
+            status_message,
+            llm_output,
+            logging_service.get_streaming_logs(),  # Include logs in auto-poll updates
         )

     except Exception as e:
-
+        logger.error("Error in auto_poll: %s", e)
         return (
             gr.update(),
             gr.update(),
             job_id,
-            f"Error in auto
+            f"Error in auto-polling: {str(e)}",
             llm_output,
-            logging_service.get_streaming_logs(),  #
+            logging_service.get_streaming_logs(),  # Include logs even on error
         )
+
+
+def _ensure_log_streaming_setup(debug: bool = False) -> None:
+    """
+    Ensure log streaming is properly set up with current debug settings.
+    This helps maintain consistency when debug mode changes at runtime.
+    """
+    if debug:
+        # Force debug mode setup if explicitly requested
+        import os
+
+        os.environ["YUGA_DEBUG"] = "true"
+        setup_logging("DEBUG")
+
+    # Always setup streaming (it will respect current logging level)
+    logging_service.setup_log_streaming()
+
+    if debug:
+        logger.debug("Log streaming setup completed with debug mode enabled")
```
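The handlers route logger output into the Gradio UI through a `LoggingService` whose `get_streaming_logs()` string is returned with every update. Its internals are not part of this diff, so the following is a minimal stdlib sketch of the underlying pattern: a `logging.Handler` that captures formatted records into an in-memory buffer for later display (class and method names here are assumptions, not the service's real API):

```python
# Sketch of a log-streaming handler: captured records accumulate in a buffer
# that a UI can poll, mirroring LoggingService.get_streaming_logs().
import logging


class StreamingLogBuffer(logging.Handler):
    def __init__(self) -> None:
        super().__init__()
        self.lines: list[str] = []

    def emit(self, record: logging.LogRecord) -> None:
        # Append each formatted record instead of writing to a stream.
        self.lines.append(self.format(record))

    def get_streaming_logs(self) -> str:
        return "\n".join(self.lines)


log = logging.getLogger("streaming_sketch")
log.setLevel(logging.INFO)
buffer = StreamingLogBuffer()
buffer.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log.addHandler(buffer)

log.info("Starting data loading process...")
print(buffer.get_streaming_logs())  # → INFO Starting data loading process...
```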
src/helpers.py
CHANGED
```diff
@@ -20,7 +20,15 @@ def schedule_to_dataframe(schedule) -> pd.DataFrame:
         employee: str = task.employee.name if task.employee else "Unassigned"

         # Calculate start and end times based on 30-minute slots
-
+        # Schedule starts from next Monday at 8 AM
+        from datetime import date
+        from factory.data_generators import earliest_monday_on_or_after
+
+        base_date = earliest_monday_on_or_after(date.today())
+        base_datetime = datetime.combine(
+            base_date, datetime.min.time().replace(hour=8)
+        )  # Start at 8 AM Monday
+        start_time: datetime = base_datetime + timedelta(minutes=30 * task.start_slot)
         end_time: datetime = start_time + timedelta(minutes=30 * task.duration_slots)

         # Add task data to list with availability flags
```
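The helpers hunk anchors the schedule at 8 AM on the earliest Monday on or after today and converts 30-minute slot indices into wall-clock times. A self-contained sketch of that conversion; `earliest_monday_on_or_after` is reimplemented here for illustration, since the repo's own version is not shown in this diff:

```python
# Slot-index → wall-clock conversion, anchored at Monday 08:00 in 30-minute
# slots, mirroring the logic added to schedule_to_dataframe.
from datetime import date, datetime, timedelta


def earliest_monday_on_or_after(d: date) -> date:
    # weekday() is 0 for Monday; a Monday maps to itself.
    return d + timedelta(days=(7 - d.weekday()) % 7)


def slot_to_times(
    start_slot: int, duration_slots: int, today: date
) -> tuple[datetime, datetime]:
    base = datetime.combine(
        earliest_monday_on_or_after(today), datetime.min.time().replace(hour=8)
    )
    start = base + timedelta(minutes=30 * start_slot)
    return start, start + timedelta(minutes=30 * duration_slots)


# Slot 2 on the Monday itself: 8:00 + 2×30 min = 9:00, two slots long → 10:00.
start, end = slot_to_times(2, 2, date(2024, 1, 1))  # 2024-01-01 is a Monday
print(start.isoformat(), end.isoformat())  # → 2024-01-01T09:00:00 2024-01-01T10:00:00
```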
src/mcp_handlers.py
CHANGED
@@ -1,116 +1,224 @@
|
|
1 |
-
|
2 |
-
|
3 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
4 |
import time
|
5 |
-
import asyncio
|
6 |
|
7 |
from utils.extract_calendar import extract_ical_entries
|
|
|
8 |
from factory.data_provider import generate_mcp_data
|
9 |
-
src/mcp_handlers.py — removed implementation (elided spans marked `…`):

```diff
-from services …
-…
-    user_message: str
-    file: str
-    calendar_entries: list = None
-    error: str = None
-    solved_task_df: object = None
-    status: str = None
-    score: object = None
 
 
 async def process_message_and_attached_file(file_path: str, message_body: str) -> dict:
     """
-    …
     Args:
-        file_path (str): Path to the attached file
         message_body (str): The body of the last chat message, which contains the task description
     Returns:
         dict: Contains confirmation, file info, calendar entries, error, and solved schedule info
     """
     try:
         with open(file_path, "rb") as f:
             file_bytes = f.read()
-        …
         )
-        …
-            calendar_entries
-        …
         )
-        return result.__dict__
 
-        …
-            solved_task_df,
-            new_job_id,
-            status,
-            state_data,
-        ) = await ScheduleService.solve_schedule_from_state(state_data, job_id, debug=True)
-
-        # Poll for the solution until the status string does not contain 'Solving'
-        max_wait = 30  # seconds
-        interval = 0.5
-        waited = 0
-        final_task_df = None
-        final_status = None
-        final_score = None
-        solved = False
-        while waited < max_wait:
-            (
-                _,
-                polled_task_df,
-                _,
-                polled_status,
-                solved_schedule,
-            ) = ScheduleService.poll_solution(new_job_id, None, debug=True)
-            if polled_status and "Solving" not in polled_status:
-                final_task_df = polled_task_df
-                final_status = polled_status
-                final_score = getattr(solved_schedule, "score", None)
-                solved = True
-                break
-            await asyncio.sleep(interval)
-            waited += interval
-
-        result = MCPProcessingResult(
-            user_message=f"Received your message: {message_body}",
-            file=os.path.basename(file_path),
-            calendar_entries=entries,
-            solved_task_df=final_task_df.to_dict(orient="records")
-            if final_task_df is not None
-            else None,
-            status=final_status,
-            score=final_score,
-            error=None if solved else "Solver did not finish within the timeout",
-        )
-        return result.__dict__
```
src/mcp_handlers.py — new implementation:

```python
"""
MCP (Model Context Protocol) Handlers for Yuga Planner

This module provides an MCP tool endpoint for external integrations and is separate from the Gradio UI's workflow.

Key Features:
- Centralized logging integration with debug mode support
- Performance timing for API monitoring
- Comprehensive error handling for external consumers
- Automatic debug mode detection from environment variables

Usage:
    The main endpoint is registered as a Gradio API and can be called by MCP clients:

    POST /api/process_message_and_attached_file
    {
        "file_path": "/path/to/calendar.ics",
        "message_body": "Create tasks for this week's meetings"
    }

Environment Variables:
    YUGA_DEBUG: Set to "true" to enable detailed debug logging for API requests

Logging:
    - Uses centralized logging system from utils.logging_config
    - Respects YUGA_DEBUG environment variable (from CLI flag --debug)
    - Includes performance timing and detailed error information
    - Provides different log levels for production vs development usage
"""

import time

from utils.extract_calendar import extract_ical_entries

from factory.data_provider import generate_mcp_data
from services import ScheduleService, StateService
from helpers import schedule_to_dataframe

from utils.logging_config import setup_logging, get_logger, is_debug_enabled

setup_logging()
logger = get_logger(__name__)


async def process_message_and_attached_file(file_path: str, message_body: str) -> dict:
    """
    MCP API endpoint for processing calendar files and task descriptions.

    This is a separate workflow from the main Gradio UI and handles external API requests.

    Args:
        file_path (str): Path to the attached file (typically .ics calendar file)
        message_body (str): The body of the last chat message, which contains the task description
    Returns:
        dict: Contains confirmation, file info, calendar entries, error, and solved schedule info
    """

    # Determine debug mode from environment or default to False for API calls
    debug_mode = is_debug_enabled()

    logger.info("MCP Handler: Processing message with attached file")
    logger.debug("File path: %s", file_path)
    logger.debug("Message: %s", message_body)
    logger.debug("Debug mode: %s", debug_mode)

    # Track timing for API performance
    start_time = time.time()

    try:
        # Step 1: Extract calendar entries from the attached file
        logger.info("Step 1: Extracting calendar entries...")

        with open(file_path, "rb") as f:
            file_bytes = f.read()

        calendar_entries, error = extract_ical_entries(file_bytes)

        if error:
            logger.error("Failed to extract calendar entries: %s", error)
            return {
                "error": f"Failed to extract calendar entries: {error}",
                "status": "failed",
                "timestamp": time.time(),
                "processing_time_seconds": time.time() - start_time,
            }

        logger.info("Extracted %d calendar entries", len(calendar_entries))
        if debug_mode:
            logger.debug(
                "Calendar entries details: %s",
                [e.get("summary", "No summary") for e in calendar_entries[:5]],
            )

        # Step 2: Generate MCP data (combines calendar and LLM tasks)
        logger.info("Step 2: Generating tasks using MCP data provider...")

        schedule_data = await generate_mcp_data(
            calendar_entries=calendar_entries,
            user_message=message_body,
            project_id="PROJECT",
            employee_count=1,  # MCP uses single user
            days_in_schedule=365,
        )

        logger.info("Generated schedule with %d total tasks", len(schedule_data))

        # Step 3: Convert to format needed for solving
        logger.info("Step 3: Preparing schedule for solving...")

        # Create state data format expected by ScheduleService
        state_data = {
            "task_df_json": schedule_data.to_json(orient="split"),
            "employee_count": 1,
            "days_in_schedule": 365,
        }

        # Step 4: Start solving the schedule
        logger.info("Step 4: Starting schedule solver...")

        (
            emp_df,
            task_df,
            job_id,
            status,
            state_data,
        ) = await ScheduleService.solve_schedule_from_state(
            state_data=state_data,
            job_id=None,
            debug=debug_mode,  # Respect debug mode for MCP calls
        )

        logger.info("Solver started with job_id: %s", job_id)
        logger.debug("Initial status: %s", status)

        # Step 5: Poll until the schedule is solved
        logger.info("Step 5: Polling for solution...")

        max_polls = 60  # Maximum 60 polls (about 2 minutes)
        poll_interval = 2  # Poll every 2 seconds

        for poll_count in range(max_polls):
            if StateService.has_solved_schedule(job_id):
                solved_schedule = StateService.get_solved_schedule(job_id)

                # Check if we have a valid solution
                if solved_schedule and solved_schedule.score is not None:
                    processing_time = time.time() - start_time
                    logger.info(
                        "Schedule solved after %d polls! (Total time: %.2fs)",
                        poll_count + 1,
                        processing_time,
                    )

                    # Convert to final dataframe
                    final_df = schedule_to_dataframe(solved_schedule)

                    # Generate status message
                    status_message = ScheduleService.generate_status_message(
                        solved_schedule
                    )

                    logger.info("Final Status: %s", status_message)

                    # Return comprehensive JSON response
                    return {
                        "status": "success",
                        "message": "Schedule solved successfully",
                        "file_info": {
                            "path": file_path,
                            "calendar_entries_count": len(calendar_entries),
                        },
                        "calendar_entries": calendar_entries,
                        "solution_status": status_message,
                        "schedule": final_df.to_dict(
                            orient="records"
                        ),  # Convert to list of dicts for JSON
                        "job_id": job_id,
                        "polls_required": poll_count + 1,
                        "processing_time_seconds": processing_time,
                        "timestamp": time.time(),
                        "debug_mode": debug_mode,
                    }

            if debug_mode:
                logger.debug("Poll %d/%d: Still solving...", poll_count + 1, max_polls)

            time.sleep(poll_interval)

        # If we get here, polling timed out
        processing_time = time.time() - start_time
        logger.warning(
            "Polling timed out after %.2fs - returning partial results", processing_time
        )

        return {
            "status": "timeout",
            "message": "Schedule solving timed out after maximum polls",
            "file_info": {
                "path": file_path,
                "calendar_entries_count": len(calendar_entries),
            },
            "calendar_entries": calendar_entries,
            "job_id": job_id,
            "max_polls_reached": max_polls,
            "processing_time_seconds": processing_time,
            "timestamp": time.time(),
            "debug_mode": debug_mode,
        }

    except Exception as e:
        processing_time = time.time() - start_time
        logger.error(
            "MCP handler error after %.2fs: %s", processing_time, e, exc_info=debug_mode
        )

        return {
            "error": str(e),
            "status": "failed",
            "file_path": file_path,
            "message_body": message_body,
            "processing_time_seconds": processing_time,
            "timestamp": time.time(),
            "debug_mode": debug_mode,
        }
```
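Every exit path of the handler returns a plain dict whose `status` field is `"success"`, `"timeout"`, or `"failed"`. A minimal client-side sketch of interpreting that shape (the helper name `summarize_mcp_response` is hypothetical, not part of the project; the keys mirror the handler's return dicts):

```python
# Hypothetical MCP-client helper: branches on the "status"/"schedule"/"error"
# keys that process_message_and_attached_file puts in its response dicts.
def summarize_mcp_response(response: dict) -> str:
    status = response.get("status")

    if status == "success":
        # "schedule" holds the solved tasks as a list of record dicts
        task_count = len(response.get("schedule", []))
        return f"solved: {task_count} scheduled task(s)"

    if status == "timeout":
        return "solver timed out; retry or poll the job_id later"

    # "failed" (or anything unexpected) carries an "error" message
    return f"failed: {response.get('error', 'unknown error')}"
```

A client can branch on this summary instead of inspecting raw solver state.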
src/services/__init__.py
CHANGED

```diff
@@ -8,10 +8,12 @@ from .logging_service import LoggingService
 from .schedule_service import ScheduleService
 from .data_service import DataService
 from .mock_projects_service import MockProjectService
+from .state_service import StateService
 
 __all__ = [
     "LoggingService",
     "ScheduleService",
     "DataService",
     "MockProjectService",
+    "StateService",
 ]
```
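The newly exported `StateService` is consumed by the MCP handler through `has_solved_schedule(job_id)` / `get_solved_schedule(job_id)`. A minimal in-memory sketch consistent with those call sites (the real class lives in `src/services/state_service.py`; the storage details and the `store_solved_schedule` name below are assumptions for illustration):

```python
# Illustrative sketch only: method names taken from the call sites in
# mcp_handlers.py; the dict-based storage is an assumption.
class StateService:
    _solved_schedules: dict = {}

    @classmethod
    def store_solved_schedule(cls, job_id: str, schedule) -> None:
        # Keyed by solver job id so concurrent jobs don't collide
        cls._solved_schedules[job_id] = schedule

    @classmethod
    def has_solved_schedule(cls, job_id: str) -> bool:
        return job_id in cls._solved_schedules

    @classmethod
    def get_solved_schedule(cls, job_id: str):
        return cls._solved_schedules.get(job_id)
```

Keeping this state behind a service class lets the Gradio UI and the MCP endpoint share one solver-result store.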
src/services/data_service.py
CHANGED

```diff
@@ -1,7 +1,5 @@
 import os
 import uuid
-import logging
-from datetime import datetime
 from io import StringIO
 from typing import Dict, List, Tuple, Union, Optional, Any
 
@@ -23,6 +21,11 @@ from constraint_solvers.timetable.domain import (
 
 from helpers import schedule_to_dataframe, employees_to_dataframe
 from .mock_projects_service import MockProjectService
+from utils.logging_config import setup_logging, get_logger
+
+# Initialize logging
+setup_logging()
+logger = get_logger(__name__)
 
 
 class DataService:
@@ -52,24 +55,25 @@
         Tuple of (emp_df, task_df, job_id, status_message, state_data)
         """
         if project_source == "Upload Project Files":
-            files, project_source_info = DataService.…
+            files, project_source_info = DataService.process_uploaded_files(file_obj)
+
         else:
-            files, project_source_info = DataService.…
+            files, project_source_info = DataService.process_mock_projects(
                 mock_projects
             )
 
-        …
+        logger.info(f"🔄 Processing {len(files)} project(s)...")
 
         combined_tasks: List[Task] = []
         combined_employees: Dict[str, Employee] = {}
 
         # Process each file/project
         for idx, single_file in enumerate(files):
-            project_id = DataService.…
+            project_id = DataService.derive_project_id(
                 project_source, single_file, mock_projects, idx
             )
 
-            …
+            logger.info(f"⚙️ Processing project {idx+1}/{len(files)}: '{project_id}'")
 
             schedule_part: EmployeeSchedule = await generate_agent_data(
                 single_file,
@@ -77,7 +81,8 @@
                 employee_count=employee_count,
                 days_in_schedule=days_in_schedule,
             )
-            …
+
+            logger.info(f"✅ Completed processing project '{project_id}'")
 
             # Merge employees (unique by name)
             for emp in schedule_part.employees:
@@ -87,17 +92,17 @@
             # Append tasks with project id already set
             combined_tasks.extend(schedule_part.tasks)
 
-        …
+        logger.info(
             f"👥 Merging data: {len(combined_employees)} unique employees, {len(combined_tasks)} total tasks"
         )
 
         # Build final schedule
-        final_schedule = DataService.…
+        final_schedule = DataService.build_final_schedule(
            combined_employees, combined_tasks, employee_count, days_in_schedule
        )
 
         # Convert to DataFrames
-        emp_df, task_df = DataService.…
+        emp_df, task_df = DataService.convert_to_dataframes(final_schedule, debug)
 
         # Generate job ID and state data
         job_id = str(uuid.uuid4())
@@ -108,12 +113,12 @@
         }
 
         status_message = f"Data loaded successfully from {project_source_info}"
-        …
+        logger.info("🎉 Data loading completed successfully!")
 
         return emp_df, task_df, job_id, status_message, state_data
 
     @staticmethod
-    def …
+    def process_uploaded_files(file_obj: Any) -> Tuple[List[Any], str]:
         """Process uploaded files and return file list and description"""
         if file_obj is None:
             raise ValueError("No file uploaded. Please upload a file.")
@@ -121,12 +126,12 @@
         # Support multiple files. Gradio returns a list when multiple files are selected.
         files = file_obj if isinstance(file_obj, list) else [file_obj]
         project_source_info = f"{len(files)} file(s)"
-        …
+        logger.info(f"📄 Found {len(files)} file(s) to process")
 
         return files, project_source_info
 
     @staticmethod
-    def …
+    def process_mock_projects(
         mock_projects: Union[str, List[str], None]
     ) -> Tuple[List[str], str]:
         """Process mock projects and return file contents and description"""
@@ -149,12 +154,12 @@
         project_source_info = (
             f"{len(mock_projects)} mock project(s): {', '.join(mock_projects)}"
         )
-        …
+        logger.info(f"📋 Selected mock projects: {', '.join(mock_projects)}")
 
         return files, project_source_info
 
     @staticmethod
-    def …
+    def derive_project_id(
         project_source: str,
         single_file: Any,
         mock_projects: Union[str, List[str], None],
@@ -164,16 +169,19 @@
         if project_source == "Upload Project Files":
             try:
                 return os.path.splitext(os.path.basename(single_file.name))[0]
+
             except AttributeError:
                 return f"project_{idx+1}"
+
         else:
             # For mock projects, use the mock project name as the project ID
             if isinstance(mock_projects, list):
                 return mock_projects[idx]
+
             return mock_projects or f"project_{idx+1}"
 
     @staticmethod
-    def …
+    def build_final_schedule(
         combined_employees: Dict[str, Employee],
         combined_tasks: List[Task],
         employee_count: Optional[int],
@@ -184,7 +192,7 @@
 
         # Override with custom parameters if provided
         if employee_count is not None or days_in_schedule is not None:
-            …
+            logger.info(
                 f"⚙️ Customizing parameters: {employee_count} employees, {days_in_schedule} days"
             )
             parameters = TimeTableDataParameters(
@@ -200,7 +208,8 @@
             random_seed=parameters.random_seed,
         )
 
-        …
+        logger.info("🏗️ Building final schedule structure...")
+
         return EmployeeSchedule(
             employees=list(combined_employees.values()),
             tasks=combined_tasks,
@@ -210,11 +219,11 @@
         )
 
     @staticmethod
-    def …
+    def convert_to_dataframes(
         schedule: EmployeeSchedule, debug: bool = False
     ) -> Tuple[pd.DataFrame, pd.DataFrame]:
         """Convert schedule to DataFrames for display"""
-        …
+        logger.info("📊 Converting to data tables...")
         emp_df: pd.DataFrame = employees_to_dataframe(schedule)
         task_df: pd.DataFrame = schedule_to_dataframe(schedule)
 
@@ -234,12 +243,12 @@
 
         if debug:
             # Log sequence numbers for debugging
-            …
+            logger.info("Task sequence numbers after load_data:")
             for _, row in task_df.iterrows():
-                …
+                logger.info(
                     f"Project: {row['Project']}, Sequence: {row['Sequence']}, Task: {row['Task']}"
                 )
-            …
+            logger.info("Task DataFrame being set in load_data: %s", task_df.head())
 
         return emp_df, task_df
 
@@ -261,20 +270,22 @@
             raise ValueError("No task_df_json provided")
 
         try:
+            logger.info("📋 Parsing task data from JSON...")
             task_df: pd.DataFrame = pd.read_json(StringIO(task_df_json), orient="split")
-            …
+            logger.info(f"📊 Found {len(task_df)} tasks to schedule")
 
             if debug:
-                …
+                logger.info("Task sequence numbers from JSON:")
+
                 for _, row in task_df.iterrows():
-                    …
+                    logger.info(
                         f"Project: {row.get('Project', 'N/A')}, Sequence: {row.get('Sequence', 'N/A')}, Task: {row['Task']}"
                     )
 
             return task_df
+
         except Exception as e:
-            …
+            logger.error(f"❌ Error parsing task_df_json: {e}")
             raise ValueError(f"Error parsing task data: {str(e)}")
 
     @staticmethod
@@ -288,7 +299,7 @@
         Returns:
             List of Task objects
         """
+        logger.info("🆔 Generating task IDs and converting to solver format...")
         ids = (str(i) for i in range(len(task_df)))
 
         tasks = []
@@ -305,5 +316,5 @@
             )
         )
 
-        …
+        logger.info(f"✅ Converted {len(tasks)} tasks for solver")
         return tasks
```
src/services/logging_service.py
CHANGED

```diff
@@ -3,6 +3,12 @@ import threading
 from datetime import datetime
 from typing import List
 
+from utils.logging_config import setup_logging, get_logger, is_debug_enabled
+
+# Initialize logging
+setup_logging()
+logger = get_logger(__name__)
+
 
 class LogCapture:
     """Helper class to capture logs for streaming to UI"""
@@ -52,27 +58,40 @@
 
     def setup_log_streaming(self) -> None:
         """Set up log streaming to capture logs for UI"""
-        logger…
+        # Use the root logger which is configured by our centralized system
+        root_logger = logging.getLogger()
 
-        # Remove existing handlers to avoid …
-        for handler in …
+        # Remove existing streaming handlers to avoid duplicates
+        for handler in root_logger.handlers[:]:
             if isinstance(handler, StreamingLogHandler):
-                …
+                root_logger.removeHandler(handler)
 
         # Add our streaming handler
         stream_handler = StreamingLogHandler(self.log_capture)
+
+        # Respect the debug flag when setting the handler level
+        if is_debug_enabled():
+            stream_handler.setLevel(logging.DEBUG)
+            logger.debug("UI log streaming configured for DEBUG level")
+        else:
+            stream_handler.setLevel(logging.INFO)
+            logger.debug("UI log streaming configured for INFO level")
+
+        # Use a more detailed formatter for UI streaming
+        formatter = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
         stream_handler.setFormatter(formatter)
-        …
+        root_logger.addHandler(stream_handler)
         self._handler_added = True
 
+        logger.debug("UI log streaming handler added to root logger")
+
     def get_streaming_logs(self) -> str:
         """Get accumulated logs for streaming to UI"""
         return self.log_capture.get_logs()
 
     def clear_streaming_logs(self) -> None:
         """Clear accumulated logs"""
+        logger.debug("Clearing UI streaming logs")
         self.log_capture.clear()
 
     def is_setup(self) -> bool:
```
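The dedupe-then-attach pattern used by `setup_log_streaming` can be exercised in isolation. This sketch uses a stand-in `ListCaptureHandler` (not the project's `StreamingLogHandler`) and a hypothetical `setup_capture` helper to show the same remove-duplicates / set-level-by-debug-flag / add-to-root sequence:

```python
import logging


class ListCaptureHandler(logging.Handler):
    """Stand-in for StreamingLogHandler: collects formatted records in a list."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))


def setup_capture(debug: bool) -> ListCaptureHandler:
    root = logging.getLogger()

    # Remove existing capture handlers so repeated setup never duplicates output
    for handler in root.handlers[:]:
        if isinstance(handler, ListCaptureHandler):
            root.removeHandler(handler)

    handler = ListCaptureHandler()
    handler.setLevel(logging.DEBUG if debug else logging.INFO)
    handler.setFormatter(logging.Formatter("%(levelname)s - %(name)s - %(message)s"))
    root.addHandler(handler)
    return handler
```

Calling `setup_capture` twice leaves exactly one capture handler on the root logger, which is the property the diff above guarantees for the UI stream.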
src/services/schedule_service.py
CHANGED

```diff
@@ -1,13 +1,11 @@
-import uuid
-import logging
-import random
+import os, uuid, random
 from datetime import datetime
 from typing import Tuple, Dict, Any, Optional
 
 import pandas as pd
 import gradio as gr
 
-from …
+from .state_service import StateService
 from constraint_solvers.timetable.solver import solver_manager
 from factory.data_provider import (
     generate_employees,
@@ -17,15 +15,16 @@ from factory.data_provider import (
     SLOTS_PER_DAY,
 )
 
-from constraint_solvers.timetable.domain import (
-    EmployeeSchedule,
-    ScheduleInfo,
-    Task,
-)
+from constraint_solvers.timetable.domain import EmployeeSchedule, ScheduleInfo
 
 from helpers import schedule_to_dataframe, employees_to_dataframe
 from .data_service import DataService
 from constraint_solvers.timetable.analysis import ConstraintViolationAnalyzer
@@ -46,33 +45,25 @@
         Returns:
             Tuple of (emp_df, task_df, new_job_id, status_message, state_data)
         """
-        …
-        # Set debug environment variable for constraint system
-        import os
 
         if debug:
             os.environ["YUGA_DEBUG"] = "true"
         else:
             os.environ["YUGA_DEBUG"] = "false"
 
-        # …
-            days_in_schedule = None
-        elif isinstance(state_data, dict):
-            task_df_json = state_data.get("task_df_json")
-            employee_count = state_data.get("employee_count")
-            days_in_schedule = state_data.get("days_in_schedule")
-        else:
-            task_df_json = None
-            employee_count = None
-            days_in_schedule = None
 
         if not task_df_json:
-            …
             return (
                 gr.update(),
                 gr.update(),
@@ -90,15 +81,16 @@
         # Debug: Log task information if debug is enabled
         if debug:
-            …
             for task in tasks:
-                …
                     f"  Task ID: {task.id}, Project: '{task.project_id}', "
                     f"Sequence: {task.sequence_number}, Description: '{task.description[:30]}...'"
                 )
 
         # Generate schedule
-        schedule = ScheduleService.…
             tasks, employee_count, days_in_schedule
         )
@@ -108,13 +100,14 @@
             solved_task_df,
             new_job_id,
             status,
-        ) = await ScheduleService.…
 
-        …
         return emp_df, solved_task_df, new_job_id, status, state_data
 
     except Exception as e:
-        …
         return (
             gr.update(),
             gr.update(),
@@ -124,7 +117,7 @@
         )
 
     @staticmethod
-    def …
         tasks: list, employee_count: Optional[int], days_in_schedule: Optional[int]
     ) -> EmployeeSchedule:
         """Generate a complete schedule ready for solving"""
@@ -145,16 +138,38 @@
             random_seed=parameters.random_seed,
         )
 
-        …
         start_date = datetime.now().date()
         randomizer = random.Random(parameters.random_seed)
-        employees = generate_employees(parameters, randomizer)
-        logging.info(f"✅ Generated {len(employees)} employees")
 
-        # …
 
         return EmployeeSchedule(
             employees=employees,
@@ -165,7 +180,7 @@
         )
 
     @staticmethod
-    async def …
         schedule: EmployeeSchedule, debug: bool = False
     ) -> Tuple[pd.DataFrame, pd.DataFrame, str, str]:
         """
@@ -185,7 +200,7 @@
 
         # Start solving asynchronously
         def listener(solution):
-            …
 
         solver_manager.solve_and_listen(job_id, schedule, listener)
@@ -222,17 +237,18 @@
         Returns:
             Tuple of (emp_df, task_df, job_id, status_message, schedule)
         """
-        if job_id and …
-            solved_schedule: EmployeeSchedule = …
 
         emp_df: pd.DataFrame = employees_to_dataframe(solved_schedule)
         task_df: pd.DataFrame = schedule_to_dataframe(solved_schedule)
 
         if debug:
             # Log solved task order for debugging
-            …
             for _, row in task_df.iterrows():
-                …
                     f"Project: {row['Project']}, Sequence: {row['Sequence']}, Task: {row['Task'][:30]}, Start: {row['Start']}"
                 )
@@ -250,7 +266,7 @@
         ].sort_values(["Start"])
 
         # Check if hard constraints are violated (infeasible solution)
-        status_message = ScheduleService.…
 
         return emp_df, task_df, job_id, status_message, solved_schedule
@@ -272,8 +288,8 @@
             Tuple of (emp_df, task_df, job_id, status_message, llm_output)
         """
         try:
-            if job_id and …
-                schedule = …
             emp_df = employees_to_dataframe(schedule)
             task_df = schedule_to_dataframe(schedule)
@@ -281,16 +297,16 @@
             task_df = task_df.sort_values("Start")
 
             if debug:
-                …
 
             # Generate status message based on constraint satisfaction
-            status_message = ScheduleService.…
 
             return emp_df, task_df, job_id, status_message, llm_output
 
         except Exception as e:
-            …
             return (
                 gr.update(),
                 gr.update(),
@@ -308,7 +324,7 @@
         )
 
     @staticmethod
-    def …
         """Generate status message based on schedule score and constraint violations"""
         status_message = "Solution updated"
@@ -327,13 +343,14 @@
             f"⚠️ CONSTRAINTS VIOLATED: {violation_count} hard constraint(s) could not be satisfied. "
             f"The schedule is not feasible.\n\n{violation_details}\n\nSuggestions:\n{suggestion_text}"
         )
-        …
             f"Infeasible solution detected. Hard score: {hard_score}"
         )
         else:
             soft_score = schedule.score.soft_score
             status_message = f"✅ Solved successfully! Score: {hard_score}/{soft_score} (hard/soft)"
-        …
             f"Feasible solution found. Score: {hard_score}/{soft_score}"
         )
```
21 |
from .data_service import DataService
|
22 |
from constraint_solvers.timetable.analysis import ConstraintViolationAnalyzer
|
23 |
+
from utils.logging_config import setup_logging, get_logger
|
24 |
+
|
25 |
+
# Initialize logging
|
26 |
+
setup_logging()
|
27 |
+
logger = get_logger(__name__)
|
28 |
|
29 |
|
30 |
class ScheduleService:
|
|
|
45 |
Returns:
|
46 |
Tuple of (emp_df, task_df, new_job_id, status_message, state_data)
|
47 |
"""
|
48 |
+
logger.info(f"🔧 solve_schedule_from_state called with job_id: {job_id}")
|
49 |
+
logger.info("🚀 Starting solve process...")
|
|
|
|
|
|
|
50 |
|
51 |
if debug:
|
52 |
os.environ["YUGA_DEBUG"] = "true"
|
53 |
+
# Reconfigure logging for debug mode
|
54 |
+
setup_logging("DEBUG")
|
55 |
+
|
56 |
else:
|
57 |
os.environ["YUGA_DEBUG"] = "false"
|
58 |
|
59 |
+
# Extract parameters from state data dict
|
60 |
+
task_df_json = state_data.get("task_df_json")
|
61 |
+
employee_count = state_data.get("employee_count")
|
62 |
+
days_in_schedule = state_data.get("days_in_schedule")
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
63 |
|
64 |
if not task_df_json:
|
65 |
+
logger.warning("❌ No task_df_json provided to solve_schedule_from_state")
|
66 |
+
|
67 |
return (
|
68 |
gr.update(),
|
69 |
gr.update(),
|
|
|
81 |
|
82 |
# Debug: Log task information if debug is enabled
|
83 |
if debug:
|
84 |
+
logger.info("🔍 DEBUG: Task information for constraint checking:")
|
85 |
+
|
86 |
for task in tasks:
|
87 |
+
logger.info(
|
88 |
f" Task ID: {task.id}, Project: '{task.project_id}', "
|
89 |
f"Sequence: {task.sequence_number}, Description: '{task.description[:30]}...'"
|
90 |
)
|
91 |
|
92 |
# Generate schedule
|
93 |
+
schedule = ScheduleService.generate_schedule_for_solving(
|
94 |
tasks, employee_count, days_in_schedule
|
95 |
)
|
96 |
|
|
|
100 |
solved_task_df,
|
101 |
new_job_id,
|
102 |
status,
|
103 |
+
) = await ScheduleService.solve_schedule(schedule, debug)
|
104 |
|
105 |
+
logger.info("📈 Solver process initiated successfully")
|
106 |
return emp_df, solved_task_df, new_job_id, status, state_data
|
107 |
|
108 |
except Exception as e:
|
109 |
+
logger.error(f"Error in solve_schedule_from_state: {e}")
|
110 |
+
|
111 |
return (
|
112 |
gr.update(),
|
113 |
gr.update(),
|
|
|
117 |
)
|
118 |
|
119 |
@staticmethod
|
120 |
+
def generate_schedule_for_solving(
|
121 |
tasks: list, employee_count: Optional[int], days_in_schedule: Optional[int]
|
122 |
) -> EmployeeSchedule:
|
123 |
"""Generate a complete schedule ready for solving"""
|
|
|
138 |
random_seed=parameters.random_seed,
|
139 |
)
|
140 |
|
141 |
+
logger.info("👥 Generating employees and availability...")
|
142 |
start_date = datetime.now().date()
|
143 |
+
|
144 |
randomizer = random.Random(parameters.random_seed)
|
|
|
|
|
145 |
|
146 |
+
# Analyze tasks to determine what skills are actually needed
|
147 |
+
required_skills_needed = set()
|
148 |
+
for task in tasks:
|
149 |
+
if hasattr(task, "required_skill") and task.required_skill:
|
150 |
+
required_skills_needed.add(task.required_skill)
|
151 |
+
|
152 |
+
logger.info(f"🔍 Tasks require skills: {sorted(required_skills_needed)}")
|
153 |
+
|
154 |
+
# Generate employees with skills needed for the tasks
|
155 |
+
employees = generate_employees(parameters, randomizer, required_skills_needed)
|
156 |
+
|
157 |
+
# For single employee scenarios, set name and clear availability constraints
|
158 |
+
if parameters.employee_count == 1 and len(employees) == 1:
|
159 |
+
employees[0].name = "Chatbot User"
|
160 |
+
employees[0].unavailable_dates.clear()
|
161 |
+
employees[0].undesired_dates.clear()
|
162 |
+
employees[0].desired_dates.clear()
|
163 |
+
|
164 |
+
else:
|
165 |
+
# Generate employee availability preferences for multi-employee scenarios
|
166 |
+
logger.info("📅 Generating employee availability preferences...")
|
167 |
+
generate_employee_availability(
|
168 |
+
employees, parameters, start_date, randomizer
|
169 |
+
)
|
170 |
+
logger.info("✅ Employee availability generated")
|
171 |
+
|
172 |
+
logger.info(f"✅ Generated {len(employees)} employees")
|
173 |
|
174 |
return EmployeeSchedule(
|
175 |
employees=employees,
|
|
|
180 |
)
|
181 |
|
182 |
@staticmethod
|
183 |
+
async def solve_schedule(
|
184 |
schedule: EmployeeSchedule, debug: bool = False
|
185 |
) -> Tuple[pd.DataFrame, pd.DataFrame, str, str]:
|
186 |
"""
|
|
|
200 |
|
201 |
# Start solving asynchronously
|
202 |
def listener(solution):
|
203 |
+
StateService.store_solved_schedule(job_id, solution)
|
204 |
|
205 |
solver_manager.solve_and_listen(job_id, schedule, listener)
|
206 |
|
|
|
237 |
Returns:
|
238 |
Tuple of (emp_df, task_df, job_id, status_message, schedule)
|
239 |
"""
|
240 |
+
if job_id and StateService.has_solved_schedule(job_id):
|
241 |
+
solved_schedule: EmployeeSchedule = StateService.get_solved_schedule(job_id)
|
242 |
|
243 |
emp_df: pd.DataFrame = employees_to_dataframe(solved_schedule)
|
244 |
task_df: pd.DataFrame = schedule_to_dataframe(solved_schedule)
|
245 |
|
246 |
if debug:
|
247 |
# Log solved task order for debugging
|
248 |
+
logger.info("Solved task order:")
|
249 |
+
|
250 |
for _, row in task_df.iterrows():
|
251 |
+
logger.info(
|
252 |
f"Project: {row['Project']}, Sequence: {row['Sequence']}, Task: {row['Task'][:30]}, Start: {row['Start']}"
|
253 |
)
|
254 |
|
|
|
266 |
].sort_values(["Start"])
|
267 |
|
268 |
# Check if hard constraints are violated (infeasible solution)
|
269 |
+
status_message = ScheduleService.generate_status_message(solved_schedule)
|
270 |
|
271 |
return emp_df, task_df, job_id, status_message, solved_schedule
|
272 |
|
|
|
288 |
Tuple of (emp_df, task_df, job_id, status_message, llm_output)
|
289 |
"""
|
290 |
try:
|
291 |
+
if job_id and StateService.has_solved_schedule(job_id):
|
292 |
+
schedule = StateService.get_solved_schedule(job_id)
|
293 |
emp_df = employees_to_dataframe(schedule)
|
294 |
task_df = schedule_to_dataframe(schedule)
|
295 |
|
|
|
297 |
task_df = task_df.sort_values("Start")
|
298 |
|
299 |
if debug:
|
300 |
+
logger.info(f"Polling for job {job_id}")
|
301 |
+
logger.info(f"Current schedule state: {task_df.head()}")
|
302 |
|
303 |
# Generate status message based on constraint satisfaction
|
304 |
+
status_message = ScheduleService.generate_status_message(schedule)
|
305 |
|
306 |
return emp_df, task_df, job_id, status_message, llm_output
|
307 |
|
308 |
except Exception as e:
|
309 |
+
logger.error(f"Error polling: {e}")
|
310 |
return (
|
311 |
gr.update(),
|
312 |
gr.update(),
|
|
|
324 |
)
|
325 |
|
326 |
@staticmethod
|
327 |
+
def generate_status_message(schedule: EmployeeSchedule) -> str:
|
328 |
"""Generate status message based on schedule score and constraint violations"""
|
329 |
status_message = "Solution updated"
|
330 |
|
|
|
343 |
f"⚠️ CONSTRAINTS VIOLATED: {violation_count} hard constraint(s) could not be satisfied. "
|
344 |
f"The schedule is not feasible.\n\n{violation_details}\n\nSuggestions:\n{suggestion_text}"
|
345 |
)
|
346 |
+
logger.warning(
|
347 |
f"Infeasible solution detected. Hard score: {hard_score}"
|
348 |
)
|
349 |
+
|
350 |
else:
|
351 |
soft_score = schedule.score.soft_score
|
352 |
status_message = f"✅ Solved successfully! Score: {hard_score}/{soft_score} (hard/soft)"
|
353 |
+
logger.info(
|
354 |
f"Feasible solution found. Score: {hard_score}/{soft_score}"
|
355 |
)
|
356 |
|
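The status-message logic above keys everything off the hard score: negative means hard constraints were violated, otherwise the solution is feasible. A minimal, dependency-free sketch of that rule, using a hypothetical `Score` dataclass standing in for the solver's hard/soft score object (not the real Timefold type):

```python
from dataclasses import dataclass


@dataclass
class Score:
    # Hypothetical stand-in for the solver's hard/soft score object
    hard_score: int
    soft_score: int


def status_for(score: Score) -> str:
    # Mirrors ScheduleService.generate_status_message: a negative hard
    # score marks the solution infeasible; otherwise report both scores.
    if score.hard_score < 0:
        return f"⚠️ CONSTRAINTS VIOLATED (hard score {score.hard_score})"
    return f"✅ Solved successfully! Score: {score.hard_score}/{score.soft_score} (hard/soft)"
```

Soft-score penalties (like undesired dates) never flip the message to "violated"; only a nonzero negative hard score does.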
src/services/state_service.py
ADDED
```python
from typing import Optional
from state import app_state
from constraint_solvers.timetable.domain import EmployeeSchedule
from utils.logging_config import setup_logging, get_logger

# Initialize logging
setup_logging()
logger = get_logger(__name__)


class StateService:
    """Service for managing application state operations"""

    @staticmethod
    def store_solved_schedule(
        job_id: str, schedule: Optional[EmployeeSchedule]
    ) -> None:
        """
        Store a solved schedule in the application state.

        Args:
            job_id: Unique identifier for the job
            schedule: The schedule to store (can be None for placeholder)
        """
        logger.debug(f"Storing schedule for job_id: {job_id}")
        app_state.add_solved_schedule(job_id, schedule)

    @staticmethod
    def has_solved_schedule(job_id: str) -> bool:
        """
        Check if a solved schedule exists for the given job ID.

        Args:
            job_id: Job identifier to check

        Returns:
            True if a solved schedule exists for the job ID
        """
        return app_state.has_solved_schedule(job_id)

    @staticmethod
    def get_solved_schedule(job_id: str) -> Optional[EmployeeSchedule]:
        """
        Retrieve a solved schedule by job ID.

        Args:
            job_id: Job identifier to retrieve

        Returns:
            The solved schedule if it exists, None otherwise
        """
        if app_state.has_solved_schedule(job_id):
            return app_state.get_solved_schedule(job_id)
        return None

    @staticmethod
    def clear_schedule(job_id: str) -> None:
        """
        Clear a schedule from application state.

        Args:
            job_id: Job identifier to clear
        """
        logger.debug(f"Clearing schedule for job_id: {job_id}")
        # Note: app_state doesn't have a clear method, but we can implement if needed
        # For now, we'll log the request
        logger.warning(
            f"Clear schedule requested for {job_id} but not implemented in app_state"
        )

    @staticmethod
    def get_all_job_ids() -> list:
        """
        Get all job IDs currently in the state.

        Returns:
            List of job IDs
        """
        # Note: This would need to be implemented in app_state if needed
        logger.warning("get_all_job_ids called but not implemented in app_state")
        return []

    @staticmethod
    def get_state_info() -> dict:
        """
        Get general information about the current state.

        Returns:
            Dictionary with state information
        """
        # This is a placeholder for state introspection
        return {
            "service": "StateService",
            "status": "active",
            "note": "State information retrieval not fully implemented",
        }
```
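`StateService` is a thin static wrapper over a module-level `app_state` singleton. The `state.app_state` module itself is not part of this diff, so the interface below is an assumed, in-memory stand-in inferred from the calls `StateService` makes against it:

```python
from typing import Any, Dict, Optional


class AppState:
    """In-memory stand-in for the assumed state.app_state interface (hypothetical)."""

    def __init__(self) -> None:
        self._solved: Dict[str, Any] = {}

    def add_solved_schedule(self, job_id: str, schedule: Optional[Any]) -> None:
        # Overwrites any previous schedule stored under the same job_id
        self._solved[job_id] = schedule

    def has_solved_schedule(self, job_id: str) -> bool:
        return job_id in self._solved

    def get_solved_schedule(self, job_id: str) -> Optional[Any]:
        return self._solved.get(job_id)


app_state = AppState()
app_state.add_solved_schedule("job-1", {"tasks": []})
```

This round-trip (store in the solver listener, fetch during polling) is what decouples `solve_schedule` from the polling path.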
src/utils/extract_calendar.py
CHANGED
```diff
@@ -5,6 +5,7 @@ def extract_ical_entries(file_bytes):
     try:
         cal = Calendar.from_ical(file_bytes)
         entries = []
+
         for component in cal.walk():
             if component.name == "VEVENT":
                 summary = str(component.get("summary", ""))
@@ -14,9 +15,12 @@ def extract_ical_entries(file_bytes):
                 def to_iso(val):
                     if hasattr(val, "dt"):
                         dt = val.dt
+
                         if hasattr(dt, "isoformat"):
                             return dt.isoformat()
+
                         return str(dt)
+
                     return str(val)
 
                 entries.append(
@@ -26,6 +30,8 @@ def extract_ical_entries(file_bytes):
                         "dtend": to_iso(dtend),
                     }
                 )
+
         return entries, None
+
     except Exception as e:
         return None, str(e)
```
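The `to_iso` helper normalizes whatever the icalendar library hands back: property values wrap their payload in a `.dt` attribute, which may be a `datetime`, a `date`, or something without `isoformat()`. The same fallback chain, isolated with a minimal hypothetical stand-in for the icalendar wrapper class:

```python
from datetime import date, datetime


class ICalValue:
    """Minimal stand-in for icalendar's date/datetime wrapper (hypothetical, not the real class)."""

    def __init__(self, dt):
        self.dt = dt


def to_iso(val):
    # Same fallback chain as in extract_ical_entries:
    # unwrap .dt if present, prefer isoformat(), fall back to str()
    if hasattr(val, "dt"):
        dt = val.dt
        if hasattr(dt, "isoformat"):
            return dt.isoformat()
        return str(dt)
    return str(val)
```

All-day events (plain `date` values) and timed events (`datetime` values) both come out as ISO strings, which is what the calendar extraction downstream expects.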
src/utils/load_secrets.py
CHANGED
```diff
@@ -1,7 +1,10 @@
 import os
 
+from utils.logging_config import setup_logging, get_logger
+
+# Initialize logging
+setup_logging()
+logger = get_logger(__name__)
 
 ### SECRETS ###
 def load_secrets(secrets_file: str):
```
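Only the import header of `load_secrets.py` changes in this commit; the function body is not shown. For orientation, a hypothetical sketch of what a loader with this signature typically does (every detail below is an assumption, not the project's actual implementation):

```python
import os
import tempfile


def load_secrets(secrets_file: str) -> None:
    # Hypothetical sketch: reads KEY=VALUE lines into os.environ,
    # skipping blanks and comments; existing env vars are not overwritten.
    if not os.path.exists(secrets_file):
        return
    with open(secrets_file) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())


# Demo with a throwaway secrets file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as tmp:
    tmp.write("DEMO_KEY=demo-value\n# a comment\n")
    tmp_path = tmp.name

load_secrets(tmp_path)
```

Using `setdefault` keeps real environment variables (e.g. those injected by the Space) authoritative over file contents.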
src/utils/logging_config.py
ADDED
```python
"""
Centralized Logging Configuration for Yuga Planner

This module provides a unified logging configuration that:
1. Respects the YUGA_DEBUG environment variable for debug logging
2. Uses consistent formatting across the entire codebase
3. Eliminates the need for individual logging.basicConfig() calls

Usage:
    from utils.logging_config import setup_logging, get_logger

    # Initialize logging (typically done once per module)
    setup_logging()
    logger = get_logger(__name__)

    # Use logging methods
    logger.debug("Debug message - only shown when YUGA_DEBUG=true")
    logger.info("Info message - always shown")
    logger.warning("Warning message")
    logger.error("Error message")

Environment Variables:
    YUGA_DEBUG: Set to "true" to enable debug logging

Migration from old logging:
    Replace:
        import logging
        logging.basicConfig(level=logging.INFO)
        logger = logging.getLogger(__name__)

    With:
        from utils.logging_config import setup_logging, get_logger
        setup_logging()
        logger = get_logger(__name__)
"""

import logging
import os
from typing import Optional


def setup_logging(level: Optional[str] = None) -> None:
    """
    Set up centralized logging configuration for the application.

    Args:
        level: Override the logging level. If None, uses YUGA_DEBUG environment variable.
    """
    # Determine logging level
    if level is not None:
        log_level = getattr(logging, level.upper(), logging.INFO)

    else:
        debug_enabled = os.getenv("YUGA_DEBUG", "false").lower() == "true"
        log_level = logging.DEBUG if debug_enabled else logging.INFO

    # Configure logging
    logging.basicConfig(
        level=log_level,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )

    # Log the configuration
    logger = logging.getLogger(__name__)
    logger.debug("Debug logging enabled via YUGA_DEBUG environment variable")


def get_logger(name: str) -> logging.Logger:
    """
    Get a logger instance with the specified name.

    Args:
        name: Name for the logger, typically __name__

    Returns:
        Configured logger instance
    """
    return logging.getLogger(name)


def is_debug_enabled() -> bool:
    """Check if debug logging is enabled via environment variable."""
    return os.getenv("YUGA_DEBUG", "false").lower() == "true"
```
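The level-selection rule in `setup_logging` can be exercised in isolation: an explicit `level` argument wins, otherwise `YUGA_DEBUG` decides between DEBUG and INFO, and an unrecognized level name falls back to INFO. A small self-contained sketch of the same logic:

```python
import logging
import os


def resolve_level(level=None) -> int:
    # Same precedence as setup_logging(): explicit argument > YUGA_DEBUG env var.
    # getattr's default handles typos like "verbose" by falling back to INFO.
    if level is not None:
        return getattr(logging, level.upper(), logging.INFO)
    debug_enabled = os.getenv("YUGA_DEBUG", "false").lower() == "true"
    return logging.DEBUG if debug_enabled else logging.INFO


os.environ["YUGA_DEBUG"] = "true"
```

This is why `solve_schedule_from_state` can call `setup_logging("DEBUG")` to force debug output even when the environment variable is unset.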