da03 committed
Commit 096295a · 1 Parent(s): 7323319
Files changed (6)
  1. Dockerfile +44 -1
  2. MULTI_GPU_SETUP.md +192 -0
  3. README.md +27 -1
  4. start_remote_worker.sh +87 -0
  5. static/index.html +87 -5
  6. worker.py +2 -2
Dockerfile CHANGED
@@ -35,4 +35,47 @@ WORKDIR $HOME/app
  # Copy the current directory contents into the container at $HOME/app setting the owner to the user
  COPY --chown=user . $HOME/app

- CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860", "--workers", "1", "--timeout-keep-alive", "3000"]
+ # Create a startup script for HF Spaces
+ COPY --chown=user <<EOF $HOME/app/start_hf_spaces.sh
+ #!/bin/bash
+ set -e
+
+ echo "🚀 Starting Neural OS for HF Spaces"
+ echo "===================================="
+
+ # Start dispatcher in background
+ echo "🎯 Starting dispatcher..."
+ python dispatcher.py --port 7860 > dispatcher.log 2>&1 &
+ DISPATCHER_PID=\$!
+
+ # Wait for dispatcher to start
+ sleep 3
+
+ # Start single worker (HF Spaces typically has 1 GPU or CPU)
+ echo "🔧 Starting worker..."
+ python worker.py --worker-address localhost:8001 --dispatcher-url http://localhost:7860 > worker.log 2>&1 &
+ WORKER_PID=\$!
+
+ # Wait for worker to initialize
+ echo "⏳ Waiting for worker to initialize..."
+ sleep 30
+
+ echo "✅ System ready!"
+ echo "🌍 Web interface: http://localhost:7860"
+
+ # Function to cleanup
+ cleanup() {
+     echo "🛑 Shutting down..."
+     kill \$DISPATCHER_PID \$WORKER_PID 2>/dev/null || true
+     exit 0
+ }
+
+ trap cleanup SIGINT SIGTERM
+
+ # Wait for dispatcher (main process)
+ wait \$DISPATCHER_PID
+ EOF
+
+ RUN chmod +x $HOME/app/start_hf_spaces.sh
+
+ CMD ["bash", "start_hf_spaces.sh"]
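To exercise this HF Spaces startup path locally, the image can be built and run by hand. A minimal sketch (the `neural-os` tag is arbitrary, and `--gpus all` assumes Docker with the NVIDIA Container Toolkit; drop it for a CPU-only test):

```bash
# Build the image from the repository root (tag name is illustrative)
docker build -t neural-os .

# Run it, exposing the dispatcher port on the host
docker run --rm --gpus all -p 7860:7860 neural-os
```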
MULTI_GPU_SETUP.md ADDED
@@ -0,0 +1,192 @@
+ # Multi-GPU Setup Guide
+
+ This guide explains how to run the neural OS demo with multiple GPUs and user queue management.
+
+ ## Architecture Overview
+
+ The system has been split into two main components:
+
+ 1. **Dispatcher** (`dispatcher.py`): Handles WebSocket connections, manages user queues, and routes requests to workers
+ 2. **Worker** (`worker.py`): Runs the actual model inference on individual GPUs
+
+ ## Files Overview
+
+ - `main.py` - Original single-GPU implementation (kept as backup)
+ - `dispatcher.py` - Queue management and WebSocket handling
+ - `worker.py` - GPU worker for model inference
+ - `start_workers.py` - Helper script to start multiple workers
+ - `start_system.sh` - Shell script to start the entire system
+ - `tail_workers.py` - Script to monitor all worker logs simultaneously
+ - `requirements.txt` - Dependencies
+ - `static/index.html` - Frontend interface
+
+ ## Setup Instructions
+
+ ### 1. Install Dependencies
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### 2. Start the Dispatcher
+
+ The dispatcher runs on port 7860 and manages user connections and queues:
+
+ ```bash
+ python dispatcher.py
+ ```
+
+ ### 3. Start Workers (One per GPU)
+
+ Start one worker for each GPU you want to use. Workers automatically register with the dispatcher.
+
+ #### GPU 0:
+ ```bash
+ python worker.py --gpu-id 0
+ ```
+
+ #### GPU 1:
+ ```bash
+ python worker.py --gpu-id 1
+ ```
+
+ #### GPU 2:
+ ```bash
+ python worker.py --gpu-id 2
+ ```
+
+ And so on for additional GPUs.
+
+ Workers run on ports 8001, 8002, 8003, etc. (8001 + GPU_ID).
+
+ ### 4. Access the Application
+
+ Open your browser and go to: `http://localhost:7860`
+
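For reference, steps 2 and 3 can be condensed into one shell snippet. This is only a sketch: it relies on the documented `--gpu-id` flag and the 8001 + GPU_ID port convention, and the log file names simply mirror the Logs section further down.

```bash
# Start the dispatcher, give it a moment, then launch one worker per GPU
python dispatcher.py > dispatcher.log 2>&1 &
sleep 3

NUM_GPUS=4   # adjust to the number of GPUs on this machine
for ((i=0; i<NUM_GPUS; i++)); do
    python worker.py --gpu-id "$i" > "worker_gpu_${i}.log" 2>&1 &
done
wait
```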
+ ## System Behavior
+
+ ### Queue Management
+
+ - **No Queue**: Users get normal timeout behavior (20 seconds of inactivity)
+ - **With Queue**: Users get limited session time (60 seconds) with warnings and grace periods
+ - **Grace Period**: If queue becomes empty during grace period, time limits are removed
+
+ ### User Experience
+
+ 1. **Immediate Access**: If GPUs are available, users start immediately
+ 2. **Queue Position**: Users see their position and estimated wait time
+ 3. **Session Warnings**: Users get warnings when their time is running out
+ 4. **Grace Period**: 10-second countdown when session time expires, but if queue empties, users can continue
+ 5. **Queue Updates**: Real-time updates on queue position every 5 seconds
+
+ ### Worker Management
+
+ - Workers automatically register with the dispatcher on startup
+ - Workers send periodic pings (every 10 seconds) to maintain connection
+ - Workers handle session cleanup when users disconnect
+ - Each worker can handle one session at a time
+
+ ### Input Queue Optimization
+
+ The system implements intelligent input filtering to maintain performance:
+
+ - **Queue Management**: Each worker maintains an input queue per session
+ - **Interesting Input Detection**: The system identifies "interesting" inputs (clicks, key presses) vs. uninteresting ones (mouse movements)
+ - **Smart Processing**: When multiple inputs are queued:
+   - Processes "interesting" inputs immediately, skipping boring mouse movements
+   - If no interesting inputs are found, processes the latest mouse position
+   - This prevents the system from getting bogged down processing every mouse movement
+ - **Performance**: Maintains responsiveness even during rapid mouse movements
+
+ ## Configuration
+
+ ### Dispatcher Settings (in `dispatcher.py`)
+
+ ```python
+ self.IDLE_TIMEOUT = 20.0 # When no queue
+ self.QUEUE_WARNING_TIME = 10.0
+ self.MAX_SESSION_TIME_WITH_QUEUE = 60.0 # When there's a queue
+ self.QUEUE_SESSION_WARNING_TIME = 45.0 # 15 seconds before timeout
+ self.GRACE_PERIOD = 10.0
+ ```
+
+ ### Worker Settings (in `worker.py`)
+
+ ```python
+ self.MODEL_NAME = "yuntian-deng/computer-model-s-newnewd-freezernn-origunet-nospatial-online-x0-joint-onlineonly-222222k7-06k"
+ self.SCREEN_WIDTH = 512
+ self.SCREEN_HEIGHT = 384
+ self.NUM_SAMPLING_STEPS = 32
+ self.USE_RNN = False
+ ```
+
+ ## Monitoring
+
+ ### Health Checks
+
+ Check worker health:
+ ```bash
+ curl http://localhost:8001/health # GPU 0
+ curl http://localhost:8002/health # GPU 1
+ ```
+
+ ### Logs
+
+ The system provides detailed logging for debugging and monitoring:
+
+ **Dispatcher logs:**
+ - `dispatcher.log` - All dispatcher activity, session management, queue operations
+
+ **Worker logs:**
+ - `workers.log` - Summary output from the worker startup script
+ - `worker_gpu_0.log` - Detailed logs from GPU 0 worker
+ - `worker_gpu_1.log` - Detailed logs from GPU 1 worker
+ - `worker_gpu_N.log` - Detailed logs from GPU N worker
+
+ **Monitor all worker logs:**
+ ```bash
+ # Tail all worker logs simultaneously
+ python tail_workers.py --num-gpus 2
+
+ # Or monitor individual workers
+ tail -f worker_gpu_0.log
+ tail -f worker_gpu_1.log
+ ```
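Plain `tail` can also follow several logs at once if you prefer not to use `tail_workers.py`, for example:

```bash
# Follow the dispatcher log and every per-GPU worker log in one terminal
tail -f dispatcher.log worker_gpu_*.log
```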
+
+ ## Troubleshooting
+
+ ### Common Issues
+
+ 1. **Worker not registering**: Check that dispatcher is running first
+ 2. **GPU memory issues**: Ensure each worker is assigned to a different GPU
+ 3. **Port conflicts**: Make sure ports 7860, 8001, 8002, etc. are available
+ 4. **Model loading errors**: Check that model files and configurations are present
+
+ ### Debug Mode
+
+ Enable debug logging by setting log level in both files:
+ ```python
+ logging.basicConfig(level=logging.DEBUG)
+ ```
+
+ ## Scaling
+
+ To add more GPUs:
+ 1. Start additional workers with higher GPU IDs
+ 2. Workers automatically register with the dispatcher
+ 3. Queue processing automatically utilizes all available workers
+
+ The system scales horizontally - add as many workers as you have GPUs available.
+
+ ## API Endpoints
+
+ ### Dispatcher
+ - `GET /` - Serve the web interface
+ - `WebSocket /ws` - User connections
+ - `POST /register_worker` - Worker registration
+ - `POST /worker_ping` - Worker health pings
+
+ ### Worker
+ - `POST /process_input` - Process user input
+ - `POST /end_session` - Clean up session
+ - `GET /health` - Health check
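As a quick smoke test of the endpoints listed above, the worker health checks can be looped over the 8001 + GPU_ID ports. A minimal sketch, assuming the workers run on localhost:

```bash
NUM_GPUS=2   # adjust to your setup
for ((i=0; i<NUM_GPUS; i++)); do
    PORT=$((8001 + i))
    echo "Worker on GPU $i (port $PORT):"
    curl -s "http://localhost:${PORT}/health"
    echo
done
```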
README.md CHANGED
@@ -1,10 +1,36 @@
  ---
  title: Neural Computer
- emoji: 🐢
+ emoji: 🧠
  colorFrom: purple
  colorTo: blue
  sdk: docker
  pinned: false
  ---

+ # Neural Computer Demo
+
+ This is a demonstration of a Neural Computer system that can generate computer screen interactions in real-time. The system uses a trained diffusion model to predict what the screen should look like based on mouse movements, clicks, and keyboard inputs.
+
+ ## How to Use
+
+ 1. **Wait for the model to load** - This may take a minute or two on first startup
+ 2. **Click anywhere on the canvas** to begin interacting
+ 3. **Move your mouse** around to see the model predict screen changes
+ 4. **Click and drag** to simulate mouse interactions
+ 5. **Use keyboard inputs** while focused on the canvas
+ 6. **Use the controls** to:
+    - Reset the simulation
+    - Adjust sampling steps (lower = faster, higher = better quality)
+    - Toggle RNN mode for even faster inference
+
+ ## Settings
+
+ - **Sampling Steps**: Controls the quality vs speed tradeoff (1-50 steps)
+ - **Use RNN**: Enables faster inference mode using RNN output directly
+ - **Reset**: Clears the simulation and starts fresh
+
+ ## Technical Details
+
+ This system uses a specialized diffusion model trained on computer interaction data. The model can predict realistic screen changes based on user inputs in real-time.
+
  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
start_remote_worker.sh ADDED
@@ -0,0 +1,87 @@
+ #!/bin/bash
+
+ # Remote Worker Startup Script
+ # Usage: ./start_remote_worker.sh <dispatcher_ip> <local_ip> <num_gpus>
+
+ DISPATCHER_IP=${1:-"192.168.1.50"}
+ LOCAL_IP=${2:-$(hostname -I | awk '{print $1}')}
+ NUM_GPUS=${3:-1}
+ DISPATCHER_URL="http://${DISPATCHER_IP}:7860"
+
+ echo "🚀 Starting Remote GPU Workers"
+ echo "==============================="
+ echo "🌐 Dispatcher: $DISPATCHER_URL"
+ echo "📍 Local IP: $LOCAL_IP"
+ echo "🖥️ GPUs: $NUM_GPUS"
+ echo ""
+
+ # Check if required files exist
+ REQUIRED_FILES=("worker.py" "utils.py" "latent_stats.json")
+ for file in "${REQUIRED_FILES[@]}"; do
+     if [[ ! -f "$file" ]]; then
+         echo "❌ Error: $file not found"
+         echo "💡 Copy required files from main machine:"
+         echo " scp user@dispatcher-machine:/path/to/{worker.py,utils.py,latent_stats.json,config_*.yaml} ."
+         exit 1
+     fi
+ done
+
+ # Test GPU access
+ echo "🧪 Testing GPU access..."
+ python -c "import torch; print(f'✅ CUDA available: {torch.cuda.is_available()}'); print(f'📊 GPU count: {torch.cuda.device_count()}')"
+
+ # Test dispatcher connectivity
+ echo "🌐 Testing dispatcher connectivity..."
+ if curl -s --connect-timeout 5 "$DISPATCHER_URL" > /dev/null; then
+     echo "✅ Dispatcher reachable"
+ else
+     echo "❌ Cannot reach dispatcher at $DISPATCHER_URL"
+     echo "💡 Check network connectivity and dispatcher status"
+     exit 1
+ fi
+
+ # Start workers
+ echo "🔧 Starting $NUM_GPUS GPU workers..."
+ for ((i=0; i<NUM_GPUS; i++)); do
+     PORT=$((8001 + i))
+     WORKER_ADDRESS="${LOCAL_IP}:${PORT}"
+
+     echo "Starting worker on GPU $i: $WORKER_ADDRESS"
+
+     CUDA_VISIBLE_DEVICES=$i python worker.py \
+         --worker-address "$WORKER_ADDRESS" \
+         --dispatcher-url "$DISPATCHER_URL" \
+         > "worker_gpu_${i}.log" 2>&1 &
+
+     WORKER_PID=$!
+     echo "✅ Worker $i started (PID: $WORKER_PID)"
+
+     # Small delay between starts
+     sleep 2
+ done
+
+ echo ""
+ echo "🎉 All workers started!"
+ echo "📋 Monitor logs:"
+ for ((i=0; i<NUM_GPUS; i++)); do
+     echo " GPU $i: tail -f worker_gpu_${i}.log"
+ done
+ echo ""
+ echo "🔍 Check worker health:"
+ for ((i=0; i<NUM_GPUS; i++)); do
+     PORT=$((8001 + i))
+     echo " GPU $i: curl http://${LOCAL_IP}:${PORT}/health"
+ done
+ echo ""
+ echo "⚠️ To stop workers: pkill -f 'python.*worker.py'"
+ echo "Press Ctrl+C to continue monitoring or any key to exit..."
+
+ # Keep script running to show it's active
+ trap 'echo ""; echo "🛑 Stopping workers..."; pkill -f "python.*worker.py"; exit 0' SIGINT
+
+ # Show real-time worker status
+ while true; do
+     sleep 10
+     RUNNING=$(ps aux | grep -c "python.*worker.py" || echo "0")
+     echo "$(date): $RUNNING workers running"
+ done
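A typical invocation of this script, with placeholder addresses (the defaults apply when arguments are omitted):

```bash
chmod +x start_remote_worker.sh
# dispatcher at 192.168.1.50, this machine reachable at 192.168.1.60, 4 GPUs
./start_remote_worker.sh 192.168.1.50 192.168.1.60 4
```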
static/index.html CHANGED
@@ -290,6 +290,9 @@
  console.log(`Queue update: Position ${data.position}/${data.total_waiting}, wait: ${data.maximum_wait_seconds.toFixed(1)} seconds`);
  const waitSeconds = Math.ceil(data.maximum_wait_seconds);

+ // Disable canvas interaction while in queue
+ disableCanvasInteraction();
+
  if (waitSeconds === 0) {
  showConnectionStatus("Starting soon...");
  stopQueueCountdown();
@@ -313,6 +316,8 @@
  console.log("Session started, clearing queue display");
  // Stop queue countdown and clear the display
  stopQueueCountdown();
+ // Enable canvas interaction when session starts
+ enableCanvasInteraction();
  //ctx.clearRect(0, 0, canvas.width, canvas.height);
  } else if (data.type === "session_warning") {
  console.log(`Session time warning: ${data.time_remaining} seconds remaining`);
@@ -391,6 +396,10 @@
  let autoInputEnabled = true; // Default to enabled
  let userHasInteracted = false; // Track if user has moved mouse inside canvas

+ // Session state tracking
+ let sessionState = 'queued'; // 'queued', 'active', 'disconnected'
+ let canvasInteractionEnabled = false;
+
  // Timeout countdown mechanism - support concurrent timeouts
  let timeoutCountdownInterval = null;
  let timeoutCountdown = 10;
@@ -534,11 +543,11 @@
  }

  // Update initial display
- const countdownElement = document.getElementById('timeoutCountdown');
- if (countdownElement) {
- countdownElement.textContent = timeoutCountdown;
- }
-
+ const countdownElement = document.getElementById('timeoutCountdown');
+ if (countdownElement) {
+ countdownElement.textContent = timeoutCountdown;
+ }
+
  console.log(`Starting ${earliestTimeout.type} timeout countdown: ${timeoutCountdown} seconds`);

  // Start countdown
@@ -710,6 +719,44 @@
  queueCountdownActive = false;
  queueWaitTime = 0;
  }
+
+ function enableCanvasInteraction() {
+     canvasInteractionEnabled = true;
+     sessionState = 'active';
+
+     // Remove visual queue indicator
+     if (canvas) {
+         canvas.style.opacity = '1';
+         canvas.style.cursor = 'crosshair';
+         canvas.style.pointerEvents = 'auto';
+     }
+
+     // Update status
+     const statusElement = document.getElementById('connectionStatus');
+     if (statusElement) {
+         statusElement.textContent = 'Active';
+         statusElement.className = 'connected';
+     }
+ }
+
+ function disableCanvasInteraction() {
+     canvasInteractionEnabled = false;
+     sessionState = 'queued';
+
+     // Add visual queue indicator
+     if (canvas) {
+         canvas.style.opacity = '0.5';
+         canvas.style.cursor = 'not-allowed';
+         canvas.style.pointerEvents = 'none';
+     }
+
+     // Update status
+     const statusElement = document.getElementById('connectionStatus');
+     if (statusElement) {
+         statusElement.textContent = 'Queued';
+         statusElement.className = 'connecting';
+     }
+ }

  function updateQueueCountdownDisplay() {
  if (queueWaitTime <= 0) {
@@ -815,6 +862,13 @@
  }

  if (!isConnected || isProcessing) return;
+
+ // Check if canvas interaction is enabled (not queued)
+ if (!canvasInteractionEnabled) {
+     console.log("Canvas interaction disabled - user is queued");
+     return;
+ }
+
  let rect = canvas.getBoundingClientRect();
  let x = event.clientX - rect.left;
  let y = event.clientY - rect.top;
@@ -832,6 +886,13 @@

  canvas.addEventListener("click", function (event) {
  if (!isConnected || isProcessing) return;
+
+ // Check if canvas interaction is enabled (not queued)
+ if (!canvasInteractionEnabled) {
+     console.log("Canvas interaction disabled - user is queued");
+     return;
+ }
+
  let rect = canvas.getBoundingClientRect();
  let x = event.clientX - rect.left;
  let y = event.clientY - rect.top;
@@ -844,6 +905,12 @@
  event.preventDefault(); // Prevent default context menu
  if (!isConnected || isProcessing) return;

+ // Check if canvas interaction is enabled (not queued)
+ if (!canvasInteractionEnabled) {
+     console.log("Canvas interaction disabled - user is queued");
+     return;
+ }
+
  let rect = canvas.getBoundingClientRect();
  let x = event.clientX - rect.left;
  let y = event.clientY - rect.top;
@@ -907,6 +974,12 @@
  }
  if (!isConnected || isProcessing || !userHasInteracted) return;

+ // Check if canvas interaction is enabled (not queued)
+ if (!canvasInteractionEnabled) {
+     console.log("Canvas interaction disabled - user is queued");
+     return;
+ }
+
  // Get the current mouse position
  let rect = canvas.getBoundingClientRect();
  let x = lastSentPosition ? lastSentPosition.x : canvas.width / 2;
@@ -923,6 +996,12 @@
  }
  if (!isConnected || socket.readyState !== WebSocket.OPEN || !userHasInteracted) return;

+ // Check if canvas interaction is enabled (not queued)
+ if (!canvasInteractionEnabled) {
+     console.log("Canvas interaction disabled - user is queued");
+     return;
+ }
+
  // Get the current mouse position
  let rect = canvas.getBoundingClientRect();
  let x = lastSentPosition ? lastSentPosition.x : canvas.width / 2;
@@ -1030,6 +1109,9 @@
  }
  }
  });
+
+ // Initialize canvas in disabled state (user starts queued)
+ disableCanvasInteraction();
  </script>

  <!-- Bootstrap JS (optional) -->
worker.py CHANGED
@@ -53,7 +53,7 @@ class GPUWorker:
  self.NUM_SAMPLING_STEPS = 32
  self.USE_RNN = False

- self.MODEL_NAME = "yuntian-deng/computer-model-s-newnewd-freezernn-origunet-nospatial-online-x0-joint-onlineonly-222222k72-108k"
+ self.MODEL_NAME = "yuntian-deng/computer-model-s-newnewd-freezernn-origunet-nospatial-online-x0-joint-onlineonly-222222k7-06k"

  # Initialize model
  self._initialize_model()
@@ -810,4 +810,4 @@ if __name__ == "__main__":
  logger.error(f"❌ Failed to start worker: {e}")
  import traceback
  logger.error(f"🔍 Full traceback: {traceback.format_exc()}")
- raise
+ raise