diff --git "a/sf_log.txt" "b/sf_log.txt" new file mode 100644--- /dev/null +++ "b/sf_log.txt" @@ -0,0 +1,10841 @@ +[2024-06-10 18:04:25,644][46753] Saving configuration to /workspace/metta/train_dir/p2.metta.k.5_8/config.json... +[2024-06-10 18:04:25,661][46753] Rollout worker 0 uses device cpu +[2024-06-10 18:04:25,661][46753] Rollout worker 1 uses device cpu +[2024-06-10 18:04:25,662][46753] Rollout worker 2 uses device cpu +[2024-06-10 18:04:25,662][46753] Rollout worker 3 uses device cpu +[2024-06-10 18:04:25,662][46753] Rollout worker 4 uses device cpu +[2024-06-10 18:04:25,662][46753] Rollout worker 5 uses device cpu +[2024-06-10 18:04:25,663][46753] Rollout worker 6 uses device cpu +[2024-06-10 18:04:25,663][46753] Rollout worker 7 uses device cpu +[2024-06-10 18:04:25,663][46753] Rollout worker 8 uses device cpu +[2024-06-10 18:04:25,663][46753] Rollout worker 9 uses device cpu +[2024-06-10 18:04:25,664][46753] Rollout worker 10 uses device cpu +[2024-06-10 18:04:25,664][46753] Rollout worker 11 uses device cpu +[2024-06-10 18:04:25,664][46753] Rollout worker 12 uses device cpu +[2024-06-10 18:04:25,664][46753] Rollout worker 13 uses device cpu +[2024-06-10 18:04:25,665][46753] Rollout worker 14 uses device cpu +[2024-06-10 18:04:25,665][46753] Rollout worker 15 uses device cpu +[2024-06-10 18:04:25,665][46753] Rollout worker 16 uses device cpu +[2024-06-10 18:04:25,665][46753] Rollout worker 17 uses device cpu +[2024-06-10 18:04:25,666][46753] Rollout worker 18 uses device cpu +[2024-06-10 18:04:25,666][46753] Rollout worker 19 uses device cpu +[2024-06-10 18:04:25,666][46753] Rollout worker 20 uses device cpu +[2024-06-10 18:04:25,666][46753] Rollout worker 21 uses device cpu +[2024-06-10 18:04:25,667][46753] Rollout worker 22 uses device cpu +[2024-06-10 18:04:25,667][46753] Rollout worker 23 uses device cpu +[2024-06-10 18:04:25,667][46753] Rollout worker 24 uses device cpu +[2024-06-10 18:04:25,667][46753] Rollout worker 25 uses device cpu +[2024-06-10 18:04:25,668][46753] Rollout worker 26 uses device cpu +[2024-06-10 18:04:25,668][46753] Rollout worker 27 uses device cpu +[2024-06-10 18:04:25,668][46753] Rollout worker 28 uses device cpu +[2024-06-10 18:04:25,669][46753] Rollout worker 29 uses device cpu +[2024-06-10 18:04:25,669][46753] Rollout worker 30 uses device cpu +[2024-06-10 18:04:25,669][46753] Rollout worker 31 uses device cpu +[2024-06-10 18:04:26,283][46753] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-06-10 18:04:26,283][46753] InferenceWorker_p0-w0: min num requests: 10 +[2024-06-10 18:04:26,335][46753] Starting all processes... +[2024-06-10 18:04:26,335][46753] Starting process learner_proc0 +[2024-06-10 18:04:26,603][46753] Starting all processes... 
+[2024-06-10 18:04:26,606][46753] Starting process inference_proc0-0 +[2024-06-10 18:04:26,606][46753] Starting process rollout_proc0 +[2024-06-10 18:04:26,607][46753] Starting process rollout_proc1 +[2024-06-10 18:04:26,607][46753] Starting process rollout_proc2 +[2024-06-10 18:04:26,607][46753] Starting process rollout_proc3 +[2024-06-10 18:04:26,607][46753] Starting process rollout_proc4 +[2024-06-10 18:04:26,607][46753] Starting process rollout_proc5 +[2024-06-10 18:04:26,607][46753] Starting process rollout_proc6 +[2024-06-10 18:04:26,609][46753] Starting process rollout_proc7 +[2024-06-10 18:04:26,610][46753] Starting process rollout_proc8 +[2024-06-10 18:04:26,610][46753] Starting process rollout_proc9 +[2024-06-10 18:04:26,611][46753] Starting process rollout_proc10 +[2024-06-10 18:04:26,612][46753] Starting process rollout_proc11 +[2024-06-10 18:04:26,612][46753] Starting process rollout_proc12 +[2024-06-10 18:04:26,612][46753] Starting process rollout_proc13 +[2024-06-10 18:04:26,612][46753] Starting process rollout_proc14 +[2024-06-10 18:04:26,613][46753] Starting process rollout_proc15 +[2024-06-10 18:04:26,613][46753] Starting process rollout_proc16 +[2024-06-10 18:04:26,613][46753] Starting process rollout_proc17 +[2024-06-10 18:04:26,614][46753] Starting process rollout_proc18 +[2024-06-10 18:04:26,616][46753] Starting process rollout_proc19 +[2024-06-10 18:04:26,618][46753] Starting process rollout_proc20 +[2024-06-10 18:04:26,618][46753] Starting process rollout_proc21 +[2024-06-10 18:04:26,619][46753] Starting process rollout_proc22 +[2024-06-10 18:04:26,620][46753] Starting process rollout_proc23 +[2024-06-10 18:04:26,622][46753] Starting process rollout_proc24 +[2024-06-10 18:04:26,623][46753] Starting process rollout_proc25 +[2024-06-10 18:04:26,625][46753] Starting process rollout_proc26 +[2024-06-10 18:04:26,630][46753] Starting process rollout_proc27 +[2024-06-10 18:04:26,630][46753] Starting process rollout_proc28 +[2024-06-10 18:04:26,630][46753] Starting process rollout_proc29 +[2024-06-10 18:04:26,632][46753] Starting process rollout_proc30 +[2024-06-10 18:04:26,632][46753] Starting process rollout_proc31 +[2024-06-10 18:04:28,483][47009] Worker 17 uses CPU cores [17] +[2024-06-10 18:04:28,672][46991] Worker 0 uses CPU cores [0] +[2024-06-10 18:04:28,716][47007] Worker 16 uses CPU cores [16] +[2024-06-10 18:04:28,772][47010] Worker 19 uses CPU cores [19] +[2024-06-10 18:04:28,816][47021] Worker 27 uses CPU cores [27] +[2024-06-10 18:04:28,820][46998] Worker 8 uses CPU cores [8] +[2024-06-10 18:04:28,820][47013] Worker 23 uses CPU cores [23] +[2024-06-10 18:04:28,820][47011] Worker 20 uses CPU cores [20] +[2024-06-10 18:04:28,828][47001] Worker 10 uses CPU cores [10] +[2024-06-10 18:04:28,828][46993] Worker 2 uses CPU cores [2] +[2024-06-10 18:04:28,859][47000] Worker 9 uses CPU cores [9] +[2024-06-10 18:04:28,860][46994] Worker 3 uses CPU cores [3] +[2024-06-10 18:04:28,860][47019] Worker 28 uses CPU cores [28] +[2024-06-10 18:04:28,864][47018] Worker 30 uses CPU cores [30] +[2024-06-10 18:04:28,868][47016] Worker 26 uses CPU cores [26] +[2024-06-10 18:04:28,906][46992] Worker 1 uses CPU cores [1] +[2024-06-10 18:04:28,939][47004] Worker 12 uses CPU cores [12] +[2024-06-10 18:04:28,940][46970] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-06-10 18:04:28,940][46970] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2024-06-10 18:04:28,949][46970] Num visible devices: 1 +[2024-06-10 18:04:28,968][46970] 
Setting fixed seed 0 +[2024-06-10 18:04:28,969][46970] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-06-10 18:04:28,969][46970] Initializing actor-critic model on device cuda:0 +[2024-06-10 18:04:28,980][47020] Worker 29 uses CPU cores [29] +[2024-06-10 18:04:28,983][46997] Worker 7 uses CPU cores [7] +[2024-06-10 18:04:29,005][47014] Worker 22 uses CPU cores [22] +[2024-06-10 18:04:29,042][46990] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-06-10 18:04:29,042][46990] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2024-06-10 18:04:29,050][46990] Num visible devices: 1 +[2024-06-10 18:04:29,076][47008] Worker 18 uses CPU cores [18] +[2024-06-10 18:04:29,079][47017] Worker 25 uses CPU cores [25] +[2024-06-10 18:04:29,088][47012] Worker 21 uses CPU cores [21] +[2024-06-10 18:04:29,104][47005] Worker 14 uses CPU cores [14] +[2024-06-10 18:04:29,114][46999] Worker 6 uses CPU cores [6] +[2024-06-10 18:04:29,128][47006] Worker 15 uses CPU cores [15] +[2024-06-10 18:04:29,131][46996] Worker 5 uses CPU cores [5] +[2024-06-10 18:04:29,152][46995] Worker 4 uses CPU cores [4] +[2024-06-10 18:04:29,154][47003] Worker 13 uses CPU cores [13] +[2024-06-10 18:04:29,164][47002] Worker 11 uses CPU cores [11] +[2024-06-10 18:04:29,173][47015] Worker 24 uses CPU cores [24] +[2024-06-10 18:04:29,219][47022] Worker 31 uses CPU cores [31] +[2024-06-10 18:04:29,739][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,740][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,741][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,741][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,744][46970] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:29,744][46970] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:29,744][46970] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:29,744][46970] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:29,744][46970] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:29,784][46970] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:29,788][46970] Created Actor Critic model with architecture: +[2024-06-10 18:04:29,788][46970] SampleFactoryAgentWrapper( + (obs_normalizer): 
ObservationNormalizer() + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (agent): MettaAgent( + (_encoder): MultiFeatureSetEncoder( + (feature_set_encoders): ModuleDict( + (grid_obs): FeatureSetEncoder( + (_normalizer): FeatureListNormalizer( + (_norms_dict): ModuleDict( + (agent): RunningMeanStdInPlace() + (altar): RunningMeanStdInPlace() + (converter): RunningMeanStdInPlace() + (generator): RunningMeanStdInPlace() + (wall): RunningMeanStdInPlace() + (agent:dir): RunningMeanStdInPlace() + (agent:energy): RunningMeanStdInPlace() + (agent:frozen): RunningMeanStdInPlace() + (agent:hp): RunningMeanStdInPlace() + (agent:id): RunningMeanStdInPlace() + (agent:inv_r1): RunningMeanStdInPlace() + (agent:inv_r2): RunningMeanStdInPlace() + (agent:inv_r3): RunningMeanStdInPlace() + (agent:shield): RunningMeanStdInPlace() + (altar:hp): RunningMeanStdInPlace() + (altar:state): RunningMeanStdInPlace() + (converter:hp): RunningMeanStdInPlace() + (converter:state): RunningMeanStdInPlace() + (generator:amount): RunningMeanStdInPlace() + (generator:hp): RunningMeanStdInPlace() + (generator:state): RunningMeanStdInPlace() + (wall:hp): RunningMeanStdInPlace() + ) + ) + (embedding_net): Sequential( + (0): Linear(in_features=125, out_features=512, bias=True) + (1): ELU(alpha=1.0) + (2): Linear(in_features=512, out_features=512, bias=True) + (3): ELU(alpha=1.0) + (4): Linear(in_features=512, out_features=512, bias=True) + (5): ELU(alpha=1.0) + (6): Linear(in_features=512, out_features=512, bias=True) + (7): ELU(alpha=1.0) + ) + ) + (global_vars): FeatureSetEncoder( + (_normalizer): FeatureListNormalizer( + (_norms_dict): ModuleDict( + (_steps): RunningMeanStdInPlace() + ) + ) + (embedding_net): Sequential( + (0): Linear(in_features=5, out_features=8, bias=True) + (1): ELU(alpha=1.0) + (2): Linear(in_features=8, out_features=8, bias=True) + (3): ELU(alpha=1.0) + ) + ) + (last_action): FeatureSetEncoder( + (_normalizer): FeatureListNormalizer( + (_norms_dict): ModuleDict( + (last_action_id): RunningMeanStdInPlace() + (last_action_val): RunningMeanStdInPlace() + ) + ) + (embedding_net): Sequential( + (0): Linear(in_features=5, out_features=8, bias=True) + (1): ELU(alpha=1.0) + (2): Linear(in_features=8, out_features=8, bias=True) + (3): ELU(alpha=1.0) + ) + ) + (last_reward): FeatureSetEncoder( + (_normalizer): FeatureListNormalizer( + (_norms_dict): ModuleDict( + (last_reward): RunningMeanStdInPlace() + ) + ) + (embedding_net): Sequential( + (0): Linear(in_features=5, out_features=8, bias=True) + (1): ELU(alpha=1.0) + (2): Linear(in_features=8, out_features=8, bias=True) + (3): ELU(alpha=1.0) + ) + ) + (kinship): FeatureSetEncoder( + (_normalizer): FeatureListNormalizer( + (_norms_dict): ModuleDict( + (kinship): RunningMeanStdInPlace() + ) + ) + (embedding_net): Sequential( + (0): Linear(in_features=125, out_features=8, bias=True) + (1): ELU(alpha=1.0) + (2): Linear(in_features=8, out_features=8, bias=True) + (3): ELU(alpha=1.0) + ) + ) + ) + (merged_encoder): Sequential( + (0): Linear(in_features=544, out_features=512, bias=True) + (1): ELU(alpha=1.0) + (2): Linear(in_features=512, out_features=512, bias=True) + (3): ELU(alpha=1.0) + (4): Linear(in_features=512, out_features=512, bias=True) + (5): ELU(alpha=1.0) + ) + ) + (_core): ModelCoreRNN( + (core): GRU(512, 512) + ) + (_decoder): Decoder( + (mlp): Identity() + ) + (_critic_linear): Linear(in_features=512, out_features=1, bias=True) + (_action_parameterization): ActionParameterizationDefault( + (distribution_linear): 
Linear(in_features=512, out_features=16, bias=True) + ) + ) +) +[2024-06-10 18:04:29,861][46970] Using optimizer +[2024-06-10 18:04:30,053][46970] No checkpoints found +[2024-06-10 18:04:30,053][46970] Did not load from checkpoint, starting from scratch! +[2024-06-10 18:04:30,053][46970] Initialized policy 0 weights for model version 0 +[2024-06-10 18:04:30,055][46970] LearnerWorker_p0 finished initialization! +[2024-06-10 18:04:30,055][46970] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-06-10 18:04:30,790][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,790][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,790][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,790][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,790][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,791][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,794][46990] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:30,799][46990] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:30,799][46990] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:30,799][46990] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:30,799][46990] RunningMeanStd input shape: (11, 11) +[2024-06-10 18:04:30,839][46990] RunningMeanStd input shape: (1,) +[2024-06-10 18:04:30,861][46753] Inference worker 0-0 is ready! +[2024-06-10 18:04:30,861][46753] All inference workers are ready! Signal rollout workers to start! +[2024-06-10 18:04:33,239][46753] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2024-06-10 18:04:33,417][47013] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,419][47008] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,430][47022] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,433][47012] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,434][47010] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,437][47016] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,438][47007] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,447][47009] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,453][47011] Decorrelating experience for 0 frames... 
+[2024-06-10 18:04:33,460][47015] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,460][47018] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,469][47020] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,469][47017] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,478][47019] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,499][47014] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,507][46994] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,508][47000] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,510][46992] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,512][47003] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,513][46997] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,520][46996] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,520][47006] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,524][46993] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,527][47004] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,528][47005] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,530][46991] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,530][46995] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,532][47021] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,533][47002] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,540][46998] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,540][47001] Decorrelating experience for 0 frames... +[2024-06-10 18:04:33,541][46999] Decorrelating experience for 0 frames... +[2024-06-10 18:04:34,884][47013] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,905][47008] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,918][47022] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,929][47016] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,941][47007] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,948][47012] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,948][47010] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,968][47009] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,984][47011] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,988][47015] Decorrelating experience for 256 frames... +[2024-06-10 18:04:34,991][47018] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,017][47020] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,019][47017] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,031][46994] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,045][47000] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,058][47019] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,070][46992] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,070][46997] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,073][46996] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,075][47003] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,077][47006] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,088][46993] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,090][47005] Decorrelating experience for 256 frames... 
+[2024-06-10 18:04:35,091][47004] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,096][46995] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,097][46998] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,098][47002] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,102][46991] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,104][47001] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,106][46999] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,120][47014] Decorrelating experience for 256 frames... +[2024-06-10 18:04:35,132][47021] Decorrelating experience for 256 frames... +[2024-06-10 18:04:38,239][46753] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 5888.2. Samples: 29440. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2024-06-10 18:04:41,834][47012] Worker 21, sleep for 98.438 sec to decorrelate experience collection +[2024-06-10 18:04:41,834][47009] Worker 17, sleep for 79.688 sec to decorrelate experience collection +[2024-06-10 18:04:41,847][47008] Worker 18, sleep for 84.375 sec to decorrelate experience collection +[2024-06-10 18:04:41,848][47011] Worker 20, sleep for 93.750 sec to decorrelate experience collection +[2024-06-10 18:04:41,860][46998] Worker 8, sleep for 37.500 sec to decorrelate experience collection +[2024-06-10 18:04:41,861][47005] Worker 14, sleep for 65.625 sec to decorrelate experience collection +[2024-06-10 18:04:41,862][47022] Worker 31, sleep for 145.312 sec to decorrelate experience collection +[2024-06-10 18:04:41,862][47015] Worker 24, sleep for 112.500 sec to decorrelate experience collection +[2024-06-10 18:04:41,871][47004] Worker 12, sleep for 56.250 sec to decorrelate experience collection +[2024-06-10 18:04:41,873][47001] Worker 10, sleep for 46.875 sec to decorrelate experience collection +[2024-06-10 18:04:41,874][47017] Worker 25, sleep for 117.188 sec to decorrelate experience collection +[2024-06-10 18:04:41,874][47007] Worker 16, sleep for 75.000 sec to decorrelate experience collection +[2024-06-10 18:04:41,874][47010] Worker 19, sleep for 89.062 sec to decorrelate experience collection +[2024-06-10 18:04:41,880][47002] Worker 11, sleep for 51.562 sec to decorrelate experience collection +[2024-06-10 18:04:41,883][46993] Worker 2, sleep for 9.375 sec to decorrelate experience collection +[2024-06-10 18:04:41,883][47013] Worker 23, sleep for 107.812 sec to decorrelate experience collection +[2024-06-10 18:04:41,884][47020] Worker 29, sleep for 135.938 sec to decorrelate experience collection +[2024-06-10 18:04:41,886][47016] Worker 26, sleep for 121.875 sec to decorrelate experience collection +[2024-06-10 18:04:41,899][46994] Worker 3, sleep for 14.062 sec to decorrelate experience collection +[2024-06-10 18:04:41,902][47019] Worker 28, sleep for 131.250 sec to decorrelate experience collection +[2024-06-10 18:04:41,902][47018] Worker 30, sleep for 140.625 sec to decorrelate experience collection +[2024-06-10 18:04:41,914][47006] Worker 15, sleep for 70.312 sec to decorrelate experience collection +[2024-06-10 18:04:41,916][46992] Worker 1, sleep for 4.688 sec to decorrelate experience collection +[2024-06-10 18:04:41,931][47021] Worker 27, sleep for 126.562 sec to decorrelate experience collection +[2024-06-10 18:04:41,941][47014] Worker 22, sleep for 103.125 sec to decorrelate experience collection +[2024-06-10 18:04:41,960][46997] Worker 7, sleep for 32.812 sec to decorrelate experience 
collection +[2024-06-10 18:04:41,960][47003] Worker 13, sleep for 60.938 sec to decorrelate experience collection +[2024-06-10 18:04:41,961][47000] Worker 9, sleep for 42.188 sec to decorrelate experience collection +[2024-06-10 18:04:41,967][46970] Signal inference workers to stop experience collection... +[2024-06-10 18:04:42,003][46990] InferenceWorker_p0-w0: stopping experience collection +[2024-06-10 18:04:42,039][46999] Worker 6, sleep for 28.125 sec to decorrelate experience collection +[2024-06-10 18:04:42,488][46970] Signal inference workers to resume experience collection... +[2024-06-10 18:04:42,488][46990] InferenceWorker_p0-w0: resuming experience collection +[2024-06-10 18:04:42,561][46996] Worker 5, sleep for 23.438 sec to decorrelate experience collection +[2024-06-10 18:04:42,561][46995] Worker 4, sleep for 18.750 sec to decorrelate experience collection +[2024-06-10 18:04:43,239][46753] Fps is (10 sec: 9830.5, 60 sec: 9830.5, 300 sec: 9830.5). Total num frames: 98304. Throughput: 0: 32808.4. Samples: 328080. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2024-06-10 18:04:43,598][46990] Updated weights for policy 0, policy_version 10 (0.0014) +[2024-06-10 18:04:46,279][46753] Heartbeat connected on Batcher_0 +[2024-06-10 18:04:46,281][46753] Heartbeat connected on LearnerWorker_p0 +[2024-06-10 18:04:46,301][46753] Heartbeat connected on RolloutWorker_w0 +[2024-06-10 18:04:46,322][46753] Heartbeat connected on InferenceWorker_p0-w0 +[2024-06-10 18:04:46,627][46992] Worker 1 awakens! +[2024-06-10 18:04:46,635][46753] Heartbeat connected on RolloutWorker_w1 +[2024-06-10 18:04:48,239][46753] Fps is (10 sec: 16383.7, 60 sec: 10922.6, 300 sec: 10922.6). Total num frames: 163840. Throughput: 0: 22069.3. Samples: 331040. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2024-06-10 18:04:51,305][46993] Worker 2 awakens! +[2024-06-10 18:04:51,313][46753] Heartbeat connected on RolloutWorker_w2 +[2024-06-10 18:04:53,240][46753] Fps is (10 sec: 8191.7, 60 sec: 9011.1, 300 sec: 9011.1). Total num frames: 180224. Throughput: 0: 17276.8. Samples: 345540. Policy #0 lag: (min: 0.0, avg: 9.4, max: 10.0) +[2024-06-10 18:04:56,032][46994] Worker 3 awakens! +[2024-06-10 18:04:56,047][46753] Heartbeat connected on RolloutWorker_w3 +[2024-06-10 18:04:58,240][46753] Fps is (10 sec: 3276.8, 60 sec: 7864.3, 300 sec: 7864.3). Total num frames: 196608. Throughput: 0: 14784.7. Samples: 369620. Policy #0 lag: (min: 0.0, avg: 1.1, max: 11.0) +[2024-06-10 18:05:01,405][46995] Worker 4 awakens! +[2024-06-10 18:05:01,414][46753] Heartbeat connected on RolloutWorker_w4 +[2024-06-10 18:05:03,239][46753] Fps is (10 sec: 6553.8, 60 sec: 8192.0, 300 sec: 8192.0). Total num frames: 245760. Throughput: 0: 12778.7. Samples: 383360. Policy #0 lag: (min: 0.0, avg: 2.1, max: 14.0) +[2024-06-10 18:05:03,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:03,249][46970] Saving new best policy, reward=0.000! +[2024-06-10 18:05:06,098][46996] Worker 5 awakens! +[2024-06-10 18:05:06,105][46753] Heartbeat connected on RolloutWorker_w5 +[2024-06-10 18:05:08,239][46753] Fps is (10 sec: 9830.7, 60 sec: 8426.1, 300 sec: 8426.1). Total num frames: 294912. Throughput: 0: 12956.0. Samples: 453460. Policy #0 lag: (min: 0.0, avg: 1.2, max: 3.0) +[2024-06-10 18:05:08,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:09,645][46990] Updated weights for policy 0, policy_version 20 (0.0017) +[2024-06-10 18:05:10,264][46999] Worker 6 awakens! 
+[2024-06-10 18:05:10,268][46753] Heartbeat connected on RolloutWorker_w6 +[2024-06-10 18:05:13,239][46753] Fps is (10 sec: 14745.6, 60 sec: 9830.4, 300 sec: 9830.4). Total num frames: 393216. Throughput: 0: 13944.5. Samples: 557780. Policy #0 lag: (min: 0.0, avg: 2.4, max: 5.0) +[2024-06-10 18:05:13,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:14,872][46997] Worker 7 awakens! +[2024-06-10 18:05:14,877][46753] Heartbeat connected on RolloutWorker_w7 +[2024-06-10 18:05:17,136][46990] Updated weights for policy 0, policy_version 30 (0.0012) +[2024-06-10 18:05:18,239][46753] Fps is (10 sec: 21299.2, 60 sec: 11286.8, 300 sec: 11286.8). Total num frames: 507904. Throughput: 0: 13840.0. Samples: 622800. Policy #0 lag: (min: 0.0, avg: 3.1, max: 5.0) +[2024-06-10 18:05:18,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:19,374][46998] Worker 8 awakens! +[2024-06-10 18:05:19,379][46753] Heartbeat connected on RolloutWorker_w8 +[2024-06-10 18:05:23,239][46753] Fps is (10 sec: 21299.2, 60 sec: 12124.2, 300 sec: 12124.2). Total num frames: 606208. Throughput: 0: 16072.0. Samples: 752680. Policy #0 lag: (min: 0.0, avg: 2.4, max: 6.0) +[2024-06-10 18:05:23,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:24,248][47000] Worker 9 awakens! +[2024-06-10 18:05:24,253][46753] Heartbeat connected on RolloutWorker_w9 +[2024-06-10 18:05:24,816][46990] Updated weights for policy 0, policy_version 40 (0.0014) +[2024-06-10 18:05:28,239][46753] Fps is (10 sec: 21299.1, 60 sec: 13107.2, 300 sec: 13107.2). Total num frames: 720896. Throughput: 0: 12700.0. Samples: 899580. Policy #0 lag: (min: 0.0, avg: 4.3, max: 7.0) +[2024-06-10 18:05:28,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:28,848][47001] Worker 10 awakens! +[2024-06-10 18:05:28,853][46753] Heartbeat connected on RolloutWorker_w10 +[2024-06-10 18:05:31,484][46990] Updated weights for policy 0, policy_version 50 (0.0013) +[2024-06-10 18:05:33,239][46753] Fps is (10 sec: 27852.5, 60 sec: 14745.6, 300 sec: 14745.6). Total num frames: 884736. Throughput: 0: 14655.1. Samples: 990520. Policy #0 lag: (min: 0.0, avg: 3.3, max: 9.0) +[2024-06-10 18:05:33,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:33,540][47002] Worker 11 awakens! +[2024-06-10 18:05:33,548][46753] Heartbeat connected on RolloutWorker_w11 +[2024-06-10 18:05:36,062][46990] Updated weights for policy 0, policy_version 60 (0.0013) +[2024-06-10 18:05:38,220][47004] Worker 12 awakens! +[2024-06-10 18:05:38,225][46753] Heartbeat connected on RolloutWorker_w12 +[2024-06-10 18:05:38,239][46753] Fps is (10 sec: 32768.3, 60 sec: 17476.3, 300 sec: 16132.0). Total num frames: 1048576. Throughput: 0: 18446.4. Samples: 1175620. Policy #0 lag: (min: 0.0, avg: 4.3, max: 9.0) +[2024-06-10 18:05:38,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:40,876][46990] Updated weights for policy 0, policy_version 70 (0.0015) +[2024-06-10 18:05:42,918][47003] Worker 13 awakens! +[2024-06-10 18:05:42,926][46753] Heartbeat connected on RolloutWorker_w13 +[2024-06-10 18:05:43,240][46753] Fps is (10 sec: 31129.6, 60 sec: 18295.4, 300 sec: 17086.2). Total num frames: 1196032. Throughput: 0: 22411.6. Samples: 1378140. Policy #0 lag: (min: 0.0, avg: 5.5, max: 10.0) +[2024-06-10 18:05:43,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:45,320][46990] Updated weights for policy 0, policy_version 80 (0.0019) +[2024-06-10 18:05:47,586][47005] Worker 14 awakens! 
+[2024-06-10 18:05:47,591][46753] Heartbeat connected on RolloutWorker_w14 +[2024-06-10 18:05:48,239][46753] Fps is (10 sec: 32767.7, 60 sec: 20207.0, 300 sec: 18350.1). Total num frames: 1376256. Throughput: 0: 24363.5. Samples: 1479720. Policy #0 lag: (min: 0.0, avg: 4.5, max: 10.0) +[2024-06-10 18:05:48,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:50,729][46990] Updated weights for policy 0, policy_version 90 (0.0024) +[2024-06-10 18:05:52,324][47006] Worker 15 awakens! +[2024-06-10 18:05:52,333][46753] Heartbeat connected on RolloutWorker_w15 +[2024-06-10 18:05:53,240][46753] Fps is (10 sec: 36044.7, 60 sec: 22937.7, 300 sec: 19456.0). Total num frames: 1556480. Throughput: 0: 27368.3. Samples: 1685040. Policy #0 lag: (min: 0.0, avg: 4.7, max: 10.0) +[2024-06-10 18:05:53,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:05:55,433][46990] Updated weights for policy 0, policy_version 100 (0.0030) +[2024-06-10 18:05:56,972][47007] Worker 16 awakens! +[2024-06-10 18:05:56,983][46753] Heartbeat connected on RolloutWorker_w16 +[2024-06-10 18:05:58,239][46753] Fps is (10 sec: 32767.9, 60 sec: 25122.2, 300 sec: 20046.3). Total num frames: 1703936. Throughput: 0: 29529.3. Samples: 1886600. Policy #0 lag: (min: 0.0, avg: 6.0, max: 11.0) +[2024-06-10 18:05:58,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:00,269][46990] Updated weights for policy 0, policy_version 110 (0.0020) +[2024-06-10 18:06:01,620][47009] Worker 17 awakens! +[2024-06-10 18:06:01,630][46753] Heartbeat connected on RolloutWorker_w17 +[2024-06-10 18:06:03,240][46753] Fps is (10 sec: 34406.2, 60 sec: 27579.6, 300 sec: 21117.1). Total num frames: 1900544. Throughput: 0: 30465.2. Samples: 1993740. Policy #0 lag: (min: 0.0, avg: 5.7, max: 11.0) +[2024-06-10 18:06:03,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:04,424][46990] Updated weights for policy 0, policy_version 120 (0.0029) +[2024-06-10 18:06:06,320][47008] Worker 18 awakens! +[2024-06-10 18:06:06,331][46753] Heartbeat connected on RolloutWorker_w18 +[2024-06-10 18:06:08,239][46753] Fps is (10 sec: 39321.6, 60 sec: 30037.3, 300 sec: 22075.3). Total num frames: 2097152. Throughput: 0: 32310.6. Samples: 2206660. Policy #0 lag: (min: 0.0, avg: 19.9, max: 121.0) +[2024-06-10 18:06:08,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:09,279][46990] Updated weights for policy 0, policy_version 130 (0.0032) +[2024-06-10 18:06:11,039][47010] Worker 19 awakens! +[2024-06-10 18:06:11,051][46753] Heartbeat connected on RolloutWorker_w19 +[2024-06-10 18:06:13,239][46753] Fps is (10 sec: 36045.3, 60 sec: 31129.6, 300 sec: 22609.9). Total num frames: 2260992. Throughput: 0: 33870.2. Samples: 2423740. Policy #0 lag: (min: 0.0, avg: 20.7, max: 133.0) +[2024-06-10 18:06:13,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:13,806][46990] Updated weights for policy 0, policy_version 140 (0.0022) +[2024-06-10 18:06:15,697][47011] Worker 20 awakens! +[2024-06-10 18:06:15,708][46753] Heartbeat connected on RolloutWorker_w20 +[2024-06-10 18:06:18,240][46753] Fps is (10 sec: 34406.2, 60 sec: 32221.8, 300 sec: 23249.7). Total num frames: 2441216. Throughput: 0: 34411.5. Samples: 2539040. Policy #0 lag: (min: 0.0, avg: 49.8, max: 144.0) +[2024-06-10 18:06:18,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:18,484][46990] Updated weights for policy 0, policy_version 150 (0.0034) +[2024-06-10 18:06:20,372][47012] Worker 21 awakens! 
+[2024-06-10 18:06:20,384][46753] Heartbeat connected on RolloutWorker_w21 +[2024-06-10 18:06:22,746][46990] Updated weights for policy 0, policy_version 160 (0.0024) +[2024-06-10 18:06:23,239][46753] Fps is (10 sec: 36044.7, 60 sec: 33587.2, 300 sec: 23831.3). Total num frames: 2621440. Throughput: 0: 35355.0. Samples: 2766600. Policy #0 lag: (min: 0.0, avg: 54.3, max: 156.0) +[2024-06-10 18:06:23,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:23,252][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000000160_2621440.pth... +[2024-06-10 18:06:25,164][47014] Worker 22 awakens! +[2024-06-10 18:06:25,176][46753] Heartbeat connected on RolloutWorker_w22 +[2024-06-10 18:06:26,538][46990] Updated weights for policy 0, policy_version 170 (0.0037) +[2024-06-10 18:06:28,240][46753] Fps is (10 sec: 37682.8, 60 sec: 34952.4, 300 sec: 24504.7). Total num frames: 2818048. Throughput: 0: 36018.6. Samples: 2998980. Policy #0 lag: (min: 0.0, avg: 7.8, max: 15.0) +[2024-06-10 18:06:28,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:29,797][47013] Worker 23 awakens! +[2024-06-10 18:06:29,810][46753] Heartbeat connected on RolloutWorker_w23 +[2024-06-10 18:06:30,847][46990] Updated weights for policy 0, policy_version 180 (0.0034) +[2024-06-10 18:06:33,240][46753] Fps is (10 sec: 42597.8, 60 sec: 36044.7, 300 sec: 25395.2). Total num frames: 3047424. Throughput: 0: 36323.9. Samples: 3114300. Policy #0 lag: (min: 0.0, avg: 8.4, max: 17.0) +[2024-06-10 18:06:33,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:34,460][47015] Worker 24 awakens! +[2024-06-10 18:06:34,470][46753] Heartbeat connected on RolloutWorker_w24 +[2024-06-10 18:06:35,309][46990] Updated weights for policy 0, policy_version 190 (0.0033) +[2024-06-10 18:06:38,239][46753] Fps is (10 sec: 40960.8, 60 sec: 36317.8, 300 sec: 25821.2). Total num frames: 3227648. Throughput: 0: 37077.4. Samples: 3353520. Policy #0 lag: (min: 0.0, avg: 7.5, max: 18.0) +[2024-06-10 18:06:38,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:38,894][46990] Updated weights for policy 0, policy_version 200 (0.0030) +[2024-06-10 18:06:39,160][47017] Worker 25 awakens! +[2024-06-10 18:06:39,173][46753] Heartbeat connected on RolloutWorker_w25 +[2024-06-10 18:06:43,240][46753] Fps is (10 sec: 36044.9, 60 sec: 36864.0, 300 sec: 26214.4). Total num frames: 3407872. Throughput: 0: 38030.6. Samples: 3597980. Policy #0 lag: (min: 0.0, avg: 7.4, max: 17.0) +[2024-06-10 18:06:43,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:43,398][46990] Updated weights for policy 0, policy_version 210 (0.0030) +[2024-06-10 18:06:43,860][47016] Worker 26 awakens! +[2024-06-10 18:06:43,872][46753] Heartbeat connected on RolloutWorker_w26 +[2024-06-10 18:06:47,390][46990] Updated weights for policy 0, policy_version 220 (0.0039) +[2024-06-10 18:06:48,240][46753] Fps is (10 sec: 39321.0, 60 sec: 37410.0, 300 sec: 26821.2). Total num frames: 3620864. Throughput: 0: 38293.8. Samples: 3716960. Policy #0 lag: (min: 0.0, avg: 10.0, max: 18.0) +[2024-06-10 18:06:48,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:48,594][47021] Worker 27 awakens! +[2024-06-10 18:06:48,608][46753] Heartbeat connected on RolloutWorker_w27 +[2024-06-10 18:06:50,631][46990] Updated weights for policy 0, policy_version 230 (0.0022) +[2024-06-10 18:06:53,239][46753] Fps is (10 sec: 45876.1, 60 sec: 38502.5, 300 sec: 27618.8). Total num frames: 3866624. Throughput: 0: 39229.4. 
Samples: 3971980. Policy #0 lag: (min: 0.0, avg: 9.5, max: 18.0) +[2024-06-10 18:06:53,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:53,252][47019] Worker 28 awakens! +[2024-06-10 18:06:53,269][46753] Heartbeat connected on RolloutWorker_w28 +[2024-06-10 18:06:55,398][46990] Updated weights for policy 0, policy_version 240 (0.0031) +[2024-06-10 18:06:57,920][47020] Worker 29 awakens! +[2024-06-10 18:06:57,934][46753] Heartbeat connected on RolloutWorker_w29 +[2024-06-10 18:06:58,240][46753] Fps is (10 sec: 45875.3, 60 sec: 39594.6, 300 sec: 28135.3). Total num frames: 4079616. Throughput: 0: 39926.1. Samples: 4220420. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 18:06:58,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:06:58,861][46990] Updated weights for policy 0, policy_version 250 (0.0036) +[2024-06-10 18:07:02,612][47018] Worker 30 awakens! +[2024-06-10 18:07:02,628][46753] Heartbeat connected on RolloutWorker_w30 +[2024-06-10 18:07:02,770][46990] Updated weights for policy 0, policy_version 260 (0.0028) +[2024-06-10 18:07:03,240][46753] Fps is (10 sec: 40959.3, 60 sec: 39594.7, 300 sec: 28508.1). Total num frames: 4276224. Throughput: 0: 40430.2. Samples: 4358400. Policy #0 lag: (min: 0.0, avg: 10.2, max: 19.0) +[2024-06-10 18:07:03,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:06,259][46990] Updated weights for policy 0, policy_version 270 (0.0036) +[2024-06-10 18:07:07,272][47022] Worker 31 awakens! +[2024-06-10 18:07:07,288][46753] Heartbeat connected on RolloutWorker_w31 +[2024-06-10 18:07:08,239][46753] Fps is (10 sec: 42598.7, 60 sec: 40140.8, 300 sec: 29068.4). Total num frames: 4505600. Throughput: 0: 41104.4. Samples: 4616300. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 18:07:08,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:10,407][46990] Updated weights for policy 0, policy_version 280 (0.0020) +[2024-06-10 18:07:13,239][46753] Fps is (10 sec: 44237.6, 60 sec: 40960.1, 300 sec: 29491.2). Total num frames: 4718592. Throughput: 0: 41935.4. Samples: 4886060. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 18:07:13,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:13,373][46990] Updated weights for policy 0, policy_version 290 (0.0035) +[2024-06-10 18:07:17,787][46990] Updated weights for policy 0, policy_version 300 (0.0025) +[2024-06-10 18:07:18,239][46753] Fps is (10 sec: 42598.5, 60 sec: 41506.2, 300 sec: 29888.4). Total num frames: 4931584. Throughput: 0: 42331.7. Samples: 5019220. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 18:07:18,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:21,001][46990] Updated weights for policy 0, policy_version 310 (0.0025) +[2024-06-10 18:07:22,388][46970] Signal inference workers to stop experience collection... (50 times) +[2024-06-10 18:07:22,398][46990] InferenceWorker_p0-w0: stopping experience collection (50 times) +[2024-06-10 18:07:22,446][46970] Signal inference workers to resume experience collection... (50 times) +[2024-06-10 18:07:22,446][46990] InferenceWorker_p0-w0: resuming experience collection (50 times) +[2024-06-10 18:07:23,239][46753] Fps is (10 sec: 44236.6, 60 sec: 42325.4, 300 sec: 30358.6). Total num frames: 5160960. Throughput: 0: 42668.9. Samples: 5273620. 
Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 18:07:23,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:25,477][46990] Updated weights for policy 0, policy_version 320 (0.0029) +[2024-06-10 18:07:28,240][46753] Fps is (10 sec: 45875.0, 60 sec: 42871.5, 300 sec: 30801.9). Total num frames: 5390336. Throughput: 0: 43023.6. Samples: 5534040. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 18:07:28,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:28,605][46990] Updated weights for policy 0, policy_version 330 (0.0051) +[2024-06-10 18:07:32,757][46990] Updated weights for policy 0, policy_version 340 (0.0040) +[2024-06-10 18:07:33,239][46753] Fps is (10 sec: 44236.8, 60 sec: 42598.5, 300 sec: 31129.6). Total num frames: 5603328. Throughput: 0: 43417.5. Samples: 5670740. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 18:07:33,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:36,003][46990] Updated weights for policy 0, policy_version 350 (0.0039) +[2024-06-10 18:07:38,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43144.5, 300 sec: 31439.6). Total num frames: 5816320. Throughput: 0: 43597.7. Samples: 5933880. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 18:07:38,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:40,111][46990] Updated weights for policy 0, policy_version 360 (0.0034) +[2024-06-10 18:07:43,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43963.9, 300 sec: 31819.5). Total num frames: 6045696. Throughput: 0: 44005.1. Samples: 6200640. Policy #0 lag: (min: 0.0, avg: 9.7, max: 25.0) +[2024-06-10 18:07:43,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:07:43,468][46990] Updated weights for policy 0, policy_version 370 (0.0036) +[2024-06-10 18:07:47,713][46990] Updated weights for policy 0, policy_version 380 (0.0038) +[2024-06-10 18:07:48,244][46753] Fps is (10 sec: 42579.3, 60 sec: 43687.5, 300 sec: 32011.1). Total num frames: 6242304. Throughput: 0: 43896.2. Samples: 6333920. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 18:07:48,244][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:07:50,735][46990] Updated weights for policy 0, policy_version 390 (0.0029) +[2024-06-10 18:07:53,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 32358.4). Total num frames: 6471680. Throughput: 0: 44024.9. Samples: 6597420. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 18:07:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:07:55,223][46990] Updated weights for policy 0, policy_version 400 (0.0033) +[2024-06-10 18:07:58,108][46990] Updated weights for policy 0, policy_version 410 (0.0034) +[2024-06-10 18:07:58,239][46753] Fps is (10 sec: 47534.6, 60 sec: 43963.8, 300 sec: 32768.0). Total num frames: 6717440. Throughput: 0: 43809.2. Samples: 6857480. Policy #0 lag: (min: 0.0, avg: 10.7, max: 19.0) +[2024-06-10 18:07:58,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:02,586][46990] Updated weights for policy 0, policy_version 420 (0.0031) +[2024-06-10 18:08:03,239][46753] Fps is (10 sec: 45875.0, 60 sec: 44236.8, 300 sec: 33002.1). Total num frames: 6930432. Throughput: 0: 43838.7. Samples: 6991960. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 18:08:03,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:08:05,775][46990] Updated weights for policy 0, policy_version 430 (0.0029) +[2024-06-10 18:08:08,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.8, 300 sec: 33225.2). Total num frames: 7143424. Throughput: 0: 44102.2. Samples: 7258220. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 18:08:08,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:09,878][46990] Updated weights for policy 0, policy_version 440 (0.0040) +[2024-06-10 18:08:13,175][46990] Updated weights for policy 0, policy_version 450 (0.0035) +[2024-06-10 18:08:13,240][46753] Fps is (10 sec: 44236.4, 60 sec: 44236.7, 300 sec: 33512.7). Total num frames: 7372800. Throughput: 0: 44136.4. Samples: 7520180. Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 18:08:13,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:17,608][46990] Updated weights for policy 0, policy_version 460 (0.0029) +[2024-06-10 18:08:18,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.7, 300 sec: 33641.8). Total num frames: 7569408. Throughput: 0: 44110.2. Samples: 7655700. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 18:08:18,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:20,429][46990] Updated weights for policy 0, policy_version 470 (0.0033) +[2024-06-10 18:08:23,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43963.7, 300 sec: 33907.8). Total num frames: 7798784. Throughput: 0: 44187.0. Samples: 7922300. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 18:08:23,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:08:23,264][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000000476_7798784.pth... +[2024-06-10 18:08:24,875][46990] Updated weights for policy 0, policy_version 480 (0.0027) +[2024-06-10 18:08:28,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.8, 300 sec: 34092.7). Total num frames: 8011776. Throughput: 0: 43972.5. Samples: 8179400. Policy #0 lag: (min: 0.0, avg: 11.5, max: 25.0) +[2024-06-10 18:08:28,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:28,257][46990] Updated weights for policy 0, policy_version 490 (0.0028) +[2024-06-10 18:08:32,321][46990] Updated weights for policy 0, policy_version 500 (0.0038) +[2024-06-10 18:08:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.7, 300 sec: 34338.1). Total num frames: 8241152. Throughput: 0: 44079.9. Samples: 8317320. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 18:08:33,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:35,444][46990] Updated weights for policy 0, policy_version 510 (0.0051) +[2024-06-10 18:08:38,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43963.7, 300 sec: 34506.7). Total num frames: 8454144. Throughput: 0: 44159.1. Samples: 8584580. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:08:38,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:39,442][46990] Updated weights for policy 0, policy_version 520 (0.0031) +[2024-06-10 18:08:42,498][46990] Updated weights for policy 0, policy_version 530 (0.0030) +[2024-06-10 18:08:43,240][46753] Fps is (10 sec: 45874.9, 60 sec: 44236.7, 300 sec: 34799.6). Total num frames: 8699904. Throughput: 0: 44146.6. Samples: 8844080. 
Policy #0 lag: (min: 0.0, avg: 12.1, max: 24.0) +[2024-06-10 18:08:43,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:44,200][46970] Signal inference workers to stop experience collection... (100 times) +[2024-06-10 18:08:44,201][46970] Signal inference workers to resume experience collection... (100 times) +[2024-06-10 18:08:44,220][46990] InferenceWorker_p0-w0: stopping experience collection (100 times) +[2024-06-10 18:08:44,220][46990] InferenceWorker_p0-w0: resuming experience collection (100 times) +[2024-06-10 18:08:47,105][46990] Updated weights for policy 0, policy_version 540 (0.0040) +[2024-06-10 18:08:48,239][46753] Fps is (10 sec: 45875.6, 60 sec: 44513.3, 300 sec: 34952.6). Total num frames: 8912896. Throughput: 0: 44212.6. Samples: 8981520. Policy #0 lag: (min: 0.0, avg: 8.5, max: 22.0) +[2024-06-10 18:08:48,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:08:50,107][46990] Updated weights for policy 0, policy_version 550 (0.0030) +[2024-06-10 18:08:53,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.7, 300 sec: 35036.6). Total num frames: 9109504. Throughput: 0: 44085.8. Samples: 9242080. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 18:08:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:08:54,622][46990] Updated weights for policy 0, policy_version 560 (0.0036) +[2024-06-10 18:08:57,619][46990] Updated weights for policy 0, policy_version 570 (0.0024) +[2024-06-10 18:08:58,240][46753] Fps is (10 sec: 42597.3, 60 sec: 43690.6, 300 sec: 35241.0). Total num frames: 9338880. Throughput: 0: 43974.6. Samples: 9499040. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 18:08:58,244][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:02,227][46990] Updated weights for policy 0, policy_version 580 (0.0034) +[2024-06-10 18:09:03,244][46753] Fps is (10 sec: 44217.0, 60 sec: 43687.4, 300 sec: 35376.7). Total num frames: 9551872. Throughput: 0: 43818.3. Samples: 9627720. Policy #0 lag: (min: 1.0, avg: 7.9, max: 19.0) +[2024-06-10 18:09:03,253][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:05,214][46990] Updated weights for policy 0, policy_version 590 (0.0034) +[2024-06-10 18:09:08,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43963.7, 300 sec: 35568.2). Total num frames: 9781248. Throughput: 0: 43925.4. Samples: 9898940. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 18:09:08,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:09,400][46990] Updated weights for policy 0, policy_version 600 (0.0038) +[2024-06-10 18:09:12,399][46990] Updated weights for policy 0, policy_version 610 (0.0029) +[2024-06-10 18:09:13,239][46753] Fps is (10 sec: 45895.9, 60 sec: 43963.9, 300 sec: 35752.2). Total num frames: 10010624. Throughput: 0: 43976.8. Samples: 10158360. Policy #0 lag: (min: 0.0, avg: 10.9, max: 20.0) +[2024-06-10 18:09:13,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:17,079][46990] Updated weights for policy 0, policy_version 620 (0.0025) +[2024-06-10 18:09:18,239][46753] Fps is (10 sec: 44236.5, 60 sec: 44236.8, 300 sec: 35872.3). Total num frames: 10223616. Throughput: 0: 43868.9. Samples: 10291420. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:09:18,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:19,617][46990] Updated weights for policy 0, policy_version 630 (0.0051) +[2024-06-10 18:09:23,240][46753] Fps is (10 sec: 44236.1, 60 sec: 44236.8, 300 sec: 36044.8). Total num frames: 10452992. 
Throughput: 0: 43840.8. Samples: 10557420. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:09:23,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:24,428][46990] Updated weights for policy 0, policy_version 640 (0.0036) +[2024-06-10 18:09:27,268][46990] Updated weights for policy 0, policy_version 650 (0.0034) +[2024-06-10 18:09:28,239][46753] Fps is (10 sec: 44236.9, 60 sec: 44236.7, 300 sec: 36155.9). Total num frames: 10665984. Throughput: 0: 43937.8. Samples: 10821280. Policy #0 lag: (min: 1.0, avg: 11.2, max: 22.0) +[2024-06-10 18:09:28,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:31,940][46990] Updated weights for policy 0, policy_version 660 (0.0036) +[2024-06-10 18:09:33,239][46753] Fps is (10 sec: 42599.5, 60 sec: 43963.8, 300 sec: 36877.9). Total num frames: 10878976. Throughput: 0: 43738.7. Samples: 10949760. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 18:09:33,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:34,886][46990] Updated weights for policy 0, policy_version 670 (0.0039) +[2024-06-10 18:09:38,240][46753] Fps is (10 sec: 44236.1, 60 sec: 44236.7, 300 sec: 37322.2). Total num frames: 11108352. Throughput: 0: 43868.7. Samples: 11216180. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 18:09:38,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:39,269][46990] Updated weights for policy 0, policy_version 680 (0.0031) +[2024-06-10 18:09:42,056][46990] Updated weights for policy 0, policy_version 690 (0.0037) +[2024-06-10 18:09:43,240][46753] Fps is (10 sec: 44235.8, 60 sec: 43690.6, 300 sec: 37822.0). Total num frames: 11321344. Throughput: 0: 44065.4. Samples: 11481980. Policy #0 lag: (min: 0.0, avg: 11.0, max: 20.0) +[2024-06-10 18:09:43,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:47,087][46990] Updated weights for policy 0, policy_version 700 (0.0038) +[2024-06-10 18:09:48,240][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.5, 300 sec: 38544.1). Total num frames: 11550720. Throughput: 0: 44065.0. Samples: 11610460. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 18:09:48,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:09:49,380][46990] Updated weights for policy 0, policy_version 710 (0.0029) +[2024-06-10 18:09:53,244][46753] Fps is (10 sec: 42579.3, 60 sec: 43960.4, 300 sec: 39154.4). Total num frames: 11747328. Throughput: 0: 43811.1. Samples: 11870640. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 18:09:53,245][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:09:54,492][46990] Updated weights for policy 0, policy_version 720 (0.0041) +[2024-06-10 18:09:57,140][46990] Updated weights for policy 0, policy_version 730 (0.0038) +[2024-06-10 18:09:58,240][46753] Fps is (10 sec: 42599.0, 60 sec: 43963.8, 300 sec: 39765.9). Total num frames: 11976704. Throughput: 0: 43875.0. Samples: 12132740. Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 18:09:58,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:10:02,086][46990] Updated weights for policy 0, policy_version 740 (0.0043) +[2024-06-10 18:10:03,239][46753] Fps is (10 sec: 45896.5, 60 sec: 44240.1, 300 sec: 40376.8). Total num frames: 12206080. Throughput: 0: 43814.7. Samples: 12263080. 
Policy #0 lag: (min: 1.0, avg: 9.7, max: 22.0) +[2024-06-10 18:10:03,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:10:04,963][46990] Updated weights for policy 0, policy_version 750 (0.0031) +[2024-06-10 18:10:08,063][46970] Signal inference workers to stop experience collection... (150 times) +[2024-06-10 18:10:08,111][46990] InferenceWorker_p0-w0: stopping experience collection (150 times) +[2024-06-10 18:10:08,116][46970] Signal inference workers to resume experience collection... (150 times) +[2024-06-10 18:10:08,122][46990] InferenceWorker_p0-w0: resuming experience collection (150 times) +[2024-06-10 18:10:08,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43690.7, 300 sec: 40710.1). Total num frames: 12402688. Throughput: 0: 43717.9. Samples: 12524720. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 18:10:08,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:10:09,344][46990] Updated weights for policy 0, policy_version 760 (0.0041) +[2024-06-10 18:10:12,217][46990] Updated weights for policy 0, policy_version 770 (0.0025) +[2024-06-10 18:10:13,244][46753] Fps is (10 sec: 42579.1, 60 sec: 43687.4, 300 sec: 41098.2). Total num frames: 12632064. Throughput: 0: 43666.8. Samples: 12786480. Policy #0 lag: (min: 1.0, avg: 11.5, max: 22.0) +[2024-06-10 18:10:13,245][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:10:17,039][46990] Updated weights for policy 0, policy_version 780 (0.0037) +[2024-06-10 18:10:18,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.7, 300 sec: 41487.6). Total num frames: 12845056. Throughput: 0: 43800.4. Samples: 12920780. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 18:10:18,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:10:19,455][46990] Updated weights for policy 0, policy_version 790 (0.0039) +[2024-06-10 18:10:23,240][46753] Fps is (10 sec: 40976.1, 60 sec: 43144.2, 300 sec: 41765.2). Total num frames: 13041664. Throughput: 0: 43633.9. Samples: 13179720. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 18:10:23,241][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:10:23,248][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000000796_13041664.pth... +[2024-06-10 18:10:23,308][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000000160_2621440.pth +[2024-06-10 18:10:24,562][46990] Updated weights for policy 0, policy_version 800 (0.0047) +[2024-06-10 18:10:27,029][46990] Updated weights for policy 0, policy_version 810 (0.0023) +[2024-06-10 18:10:28,240][46753] Fps is (10 sec: 42595.2, 60 sec: 43417.1, 300 sec: 41987.4). Total num frames: 13271040. Throughput: 0: 43549.2. Samples: 13441720. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 18:10:28,241][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:10:32,158][46990] Updated weights for policy 0, policy_version 820 (0.0031) +[2024-06-10 18:10:33,240][46753] Fps is (10 sec: 47515.8, 60 sec: 43963.6, 300 sec: 42265.1). Total num frames: 13516800. Throughput: 0: 43670.4. Samples: 13575620. Policy #0 lag: (min: 0.0, avg: 7.7, max: 21.0) +[2024-06-10 18:10:33,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:10:35,114][46990] Updated weights for policy 0, policy_version 830 (0.0039) +[2024-06-10 18:10:38,239][46753] Fps is (10 sec: 44239.8, 60 sec: 43417.7, 300 sec: 42431.8). Total num frames: 13713408. Throughput: 0: 43712.0. Samples: 13837480. 
Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:10:38,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:10:39,500][46990] Updated weights for policy 0, policy_version 840 (0.0039) +[2024-06-10 18:10:42,286][46990] Updated weights for policy 0, policy_version 850 (0.0033) +[2024-06-10 18:10:43,240][46753] Fps is (10 sec: 42597.5, 60 sec: 43690.5, 300 sec: 42598.4). Total num frames: 13942784. Throughput: 0: 43630.1. Samples: 14096100. Policy #0 lag: (min: 0.0, avg: 10.1, max: 25.0) +[2024-06-10 18:10:43,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:10:46,763][46990] Updated weights for policy 0, policy_version 860 (0.0039) +[2024-06-10 18:10:48,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43417.8, 300 sec: 42709.5). Total num frames: 14155776. Throughput: 0: 43815.5. Samples: 14234780. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 18:10:48,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:10:49,574][46990] Updated weights for policy 0, policy_version 870 (0.0035) +[2024-06-10 18:10:53,240][46753] Fps is (10 sec: 42599.1, 60 sec: 43693.9, 300 sec: 42931.6). Total num frames: 14368768. Throughput: 0: 43674.5. Samples: 14490080. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 18:10:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:10:54,525][46990] Updated weights for policy 0, policy_version 880 (0.0037) +[2024-06-10 18:10:57,118][46990] Updated weights for policy 0, policy_version 890 (0.0037) +[2024-06-10 18:10:58,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43042.7). Total num frames: 14598144. Throughput: 0: 43775.0. Samples: 14756160. Policy #0 lag: (min: 0.0, avg: 9.9, max: 20.0) +[2024-06-10 18:10:58,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:11:01,997][46990] Updated weights for policy 0, policy_version 900 (0.0040) +[2024-06-10 18:11:03,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.6, 300 sec: 43153.8). Total num frames: 14827520. Throughput: 0: 43808.8. Samples: 14892180. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 18:11:03,246][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:11:04,921][46990] Updated weights for policy 0, policy_version 910 (0.0033) +[2024-06-10 18:11:08,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.6, 300 sec: 43320.4). Total num frames: 15040512. Throughput: 0: 43785.8. Samples: 15150060. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:11:08,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:11:09,263][46990] Updated weights for policy 0, policy_version 920 (0.0022) +[2024-06-10 18:11:12,031][46990] Updated weights for policy 0, policy_version 930 (0.0035) +[2024-06-10 18:11:13,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43694.0, 300 sec: 43431.5). Total num frames: 15253504. Throughput: 0: 43891.8. Samples: 15416820. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 18:11:13,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:11:16,535][46990] Updated weights for policy 0, policy_version 940 (0.0034) +[2024-06-10 18:11:18,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 15482880. Throughput: 0: 44069.5. Samples: 15558740. 
Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 18:11:18,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:11:19,760][46990] Updated weights for policy 0, policy_version 950 (0.0049) +[2024-06-10 18:11:23,244][46753] Fps is (10 sec: 42579.1, 60 sec: 43960.8, 300 sec: 43597.5). Total num frames: 15679488. Throughput: 0: 43884.0. Samples: 15812460. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:11:23,245][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:11:24,235][46990] Updated weights for policy 0, policy_version 960 (0.0026) +[2024-06-10 18:11:24,520][46970] Signal inference workers to stop experience collection... (200 times) +[2024-06-10 18:11:24,520][46970] Signal inference workers to resume experience collection... (200 times) +[2024-06-10 18:11:24,533][46990] InferenceWorker_p0-w0: stopping experience collection (200 times) +[2024-06-10 18:11:24,534][46990] InferenceWorker_p0-w0: resuming experience collection (200 times) +[2024-06-10 18:11:27,773][46990] Updated weights for policy 0, policy_version 970 (0.0038) +[2024-06-10 18:11:28,239][46753] Fps is (10 sec: 44236.1, 60 sec: 44237.3, 300 sec: 43653.7). Total num frames: 15925248. Throughput: 0: 44012.3. Samples: 16076640. Policy #0 lag: (min: 1.0, avg: 12.7, max: 25.0) +[2024-06-10 18:11:28,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:11:31,620][46990] Updated weights for policy 0, policy_version 980 (0.0040) +[2024-06-10 18:11:33,239][46753] Fps is (10 sec: 45896.1, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 16138240. Throughput: 0: 44026.7. Samples: 16215980. Policy #0 lag: (min: 0.0, avg: 8.6, max: 20.0) +[2024-06-10 18:11:33,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:11:34,961][46990] Updated weights for policy 0, policy_version 990 (0.0038) +[2024-06-10 18:11:38,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 16334848. Throughput: 0: 44074.7. Samples: 16473440. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:11:38,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:11:38,994][46990] Updated weights for policy 0, policy_version 1000 (0.0029) +[2024-06-10 18:11:42,123][46990] Updated weights for policy 0, policy_version 1010 (0.0029) +[2024-06-10 18:11:43,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.9, 300 sec: 43931.4). Total num frames: 16580608. Throughput: 0: 43899.1. Samples: 16731620. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:11:43,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:11:46,758][46990] Updated weights for policy 0, policy_version 1020 (0.0041) +[2024-06-10 18:11:48,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 16777216. Throughput: 0: 43880.1. Samples: 16866780. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 18:11:48,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:11:49,872][46990] Updated weights for policy 0, policy_version 1030 (0.0038) +[2024-06-10 18:11:53,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 16990208. Throughput: 0: 43758.8. Samples: 17119200. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:11:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:11:54,370][46990] Updated weights for policy 0, policy_version 1040 (0.0032) +[2024-06-10 18:11:57,331][46990] Updated weights for policy 0, policy_version 1050 (0.0032) +[2024-06-10 18:11:58,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 17219584. Throughput: 0: 43728.8. Samples: 17384620. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 18:11:58,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:01,749][46990] Updated weights for policy 0, policy_version 1060 (0.0033) +[2024-06-10 18:12:03,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 17448960. Throughput: 0: 43808.2. Samples: 17530120. Policy #0 lag: (min: 0.0, avg: 7.9, max: 21.0) +[2024-06-10 18:12:03,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:12:04,453][46990] Updated weights for policy 0, policy_version 1070 (0.0051) +[2024-06-10 18:12:08,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43144.6, 300 sec: 43764.7). Total num frames: 17629184. Throughput: 0: 43842.1. Samples: 17785160. Policy #0 lag: (min: 0.0, avg: 12.7, max: 24.0) +[2024-06-10 18:12:08,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:12:09,017][46990] Updated weights for policy 0, policy_version 1080 (0.0037) +[2024-06-10 18:12:11,892][46990] Updated weights for policy 0, policy_version 1090 (0.0038) +[2024-06-10 18:12:13,239][46753] Fps is (10 sec: 45875.4, 60 sec: 44236.8, 300 sec: 43986.9). Total num frames: 17907712. Throughput: 0: 43780.9. Samples: 18046780. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 18:12:13,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:16,676][46990] Updated weights for policy 0, policy_version 1100 (0.0030) +[2024-06-10 18:12:18,239][46753] Fps is (10 sec: 47514.2, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 18104320. Throughput: 0: 43783.6. Samples: 18186240. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 18:12:18,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:12:19,588][46990] Updated weights for policy 0, policy_version 1110 (0.0031) +[2024-06-10 18:12:23,244][46753] Fps is (10 sec: 39303.8, 60 sec: 43690.6, 300 sec: 43764.1). Total num frames: 18300928. Throughput: 0: 43832.5. Samples: 18446100. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:12:23,244][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:23,309][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000001118_18317312.pth... +[2024-06-10 18:12:23,360][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000000476_7798784.pth +[2024-06-10 18:12:24,228][46990] Updated weights for policy 0, policy_version 1120 (0.0033) +[2024-06-10 18:12:27,048][46990] Updated weights for policy 0, policy_version 1130 (0.0044) +[2024-06-10 18:12:28,240][46753] Fps is (10 sec: 45874.3, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 18563072. Throughput: 0: 43858.1. Samples: 18705240. Policy #0 lag: (min: 0.0, avg: 10.4, max: 20.0) +[2024-06-10 18:12:28,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:12:31,443][46990] Updated weights for policy 0, policy_version 1140 (0.0042) +[2024-06-10 18:12:33,244][46753] Fps is (10 sec: 45875.0, 60 sec: 43687.3, 300 sec: 43875.1). Total num frames: 18759680. Throughput: 0: 44091.0. Samples: 18851080. 
Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:12:33,245][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:34,591][46990] Updated weights for policy 0, policy_version 1150 (0.0033) +[2024-06-10 18:12:38,239][46753] Fps is (10 sec: 39322.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 18956288. Throughput: 0: 44196.9. Samples: 19108060. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:12:38,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:38,295][46970] Saving new best policy, reward=0.001! +[2024-06-10 18:12:38,992][46990] Updated weights for policy 0, policy_version 1160 (0.0034) +[2024-06-10 18:12:41,925][46990] Updated weights for policy 0, policy_version 1170 (0.0038) +[2024-06-10 18:12:43,239][46753] Fps is (10 sec: 45896.7, 60 sec: 43963.8, 300 sec: 43987.6). Total num frames: 19218432. Throughput: 0: 44036.6. Samples: 19366260. Policy #0 lag: (min: 0.0, avg: 10.4, max: 23.0) +[2024-06-10 18:12:43,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:46,713][46990] Updated weights for policy 0, policy_version 1180 (0.0036) +[2024-06-10 18:12:48,239][46753] Fps is (10 sec: 47513.3, 60 sec: 44236.7, 300 sec: 43931.3). Total num frames: 19431424. Throughput: 0: 43889.4. Samples: 19505140. Policy #0 lag: (min: 0.0, avg: 7.7, max: 20.0) +[2024-06-10 18:12:48,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:49,360][46970] Signal inference workers to stop experience collection... (250 times) +[2024-06-10 18:12:49,364][46970] Signal inference workers to resume experience collection... (250 times) +[2024-06-10 18:12:49,374][46990] InferenceWorker_p0-w0: stopping experience collection (250 times) +[2024-06-10 18:12:49,384][46990] InferenceWorker_p0-w0: resuming experience collection (250 times) +[2024-06-10 18:12:49,501][46990] Updated weights for policy 0, policy_version 1190 (0.0037) +[2024-06-10 18:12:53,240][46753] Fps is (10 sec: 40959.0, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 19628032. Throughput: 0: 43975.4. Samples: 19764060. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 18:12:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:12:53,982][46990] Updated weights for policy 0, policy_version 1200 (0.0031) +[2024-06-10 18:12:57,289][46990] Updated weights for policy 0, policy_version 1210 (0.0044) +[2024-06-10 18:12:58,239][46753] Fps is (10 sec: 45875.4, 60 sec: 44509.9, 300 sec: 43931.3). Total num frames: 19890176. Throughput: 0: 43821.4. Samples: 20018740. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 18:12:58,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:01,298][46990] Updated weights for policy 0, policy_version 1220 (0.0031) +[2024-06-10 18:13:03,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 20070400. Throughput: 0: 43901.3. Samples: 20161800. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 18:13:03,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:04,542][46990] Updated weights for policy 0, policy_version 1230 (0.0040) +[2024-06-10 18:13:08,243][46753] Fps is (10 sec: 39307.3, 60 sec: 44234.1, 300 sec: 43764.2). Total num frames: 20283392. Throughput: 0: 43794.7. Samples: 20416820. 
Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 18:13:08,243][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:09,163][46990] Updated weights for policy 0, policy_version 1240 (0.0042) +[2024-06-10 18:13:12,069][46990] Updated weights for policy 0, policy_version 1250 (0.0035) +[2024-06-10 18:13:13,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 20545536. Throughput: 0: 43833.8. Samples: 20677760. Policy #0 lag: (min: 1.0, avg: 10.1, max: 23.0) +[2024-06-10 18:13:13,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:16,596][46990] Updated weights for policy 0, policy_version 1260 (0.0030) +[2024-06-10 18:13:18,240][46753] Fps is (10 sec: 45889.6, 60 sec: 43963.3, 300 sec: 43875.7). Total num frames: 20742144. Throughput: 0: 43604.9. Samples: 20813120. Policy #0 lag: (min: 0.0, avg: 7.9, max: 22.0) +[2024-06-10 18:13:18,241][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:13:19,699][46990] Updated weights for policy 0, policy_version 1270 (0.0028) +[2024-06-10 18:13:23,240][46753] Fps is (10 sec: 39321.6, 60 sec: 43967.0, 300 sec: 43820.2). Total num frames: 20938752. Throughput: 0: 43610.6. Samples: 21070540. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:13:23,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:13:24,105][46990] Updated weights for policy 0, policy_version 1280 (0.0033) +[2024-06-10 18:13:27,293][46990] Updated weights for policy 0, policy_version 1290 (0.0033) +[2024-06-10 18:13:28,239][46753] Fps is (10 sec: 45877.4, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 21200896. Throughput: 0: 43530.6. Samples: 21325140. Policy #0 lag: (min: 0.0, avg: 10.7, max: 25.0) +[2024-06-10 18:13:28,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:31,271][46990] Updated weights for policy 0, policy_version 1300 (0.0032) +[2024-06-10 18:13:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43694.0, 300 sec: 43820.3). Total num frames: 21381120. Throughput: 0: 43486.3. Samples: 21462020. Policy #0 lag: (min: 0.0, avg: 8.1, max: 21.0) +[2024-06-10 18:13:33,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:34,402][46990] Updated weights for policy 0, policy_version 1310 (0.0030) +[2024-06-10 18:13:38,240][46753] Fps is (10 sec: 40958.3, 60 sec: 44236.5, 300 sec: 43764.7). Total num frames: 21610496. Throughput: 0: 43689.5. Samples: 21730100. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 18:13:38,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:38,957][46990] Updated weights for policy 0, policy_version 1320 (0.0033) +[2024-06-10 18:13:41,728][46990] Updated weights for policy 0, policy_version 1330 (0.0045) +[2024-06-10 18:13:43,239][46753] Fps is (10 sec: 47513.5, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 21856256. Throughput: 0: 43834.2. Samples: 21991280. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 18:13:43,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:46,868][46990] Updated weights for policy 0, policy_version 1340 (0.0034) +[2024-06-10 18:13:48,239][46753] Fps is (10 sec: 44238.8, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 22052864. Throughput: 0: 43668.8. Samples: 22126900. 
Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 18:13:48,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:49,373][46990] Updated weights for policy 0, policy_version 1350 (0.0032) +[2024-06-10 18:13:53,241][46753] Fps is (10 sec: 39316.5, 60 sec: 43689.8, 300 sec: 43764.5). Total num frames: 22249472. Throughput: 0: 43725.8. Samples: 22384380. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 18:13:53,241][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:13:54,072][46990] Updated weights for policy 0, policy_version 1360 (0.0028) +[2024-06-10 18:13:55,952][46970] Signal inference workers to stop experience collection... (300 times) +[2024-06-10 18:13:55,975][46990] InferenceWorker_p0-w0: stopping experience collection (300 times) +[2024-06-10 18:13:56,011][46970] Signal inference workers to resume experience collection... (300 times) +[2024-06-10 18:13:56,011][46990] InferenceWorker_p0-w0: resuming experience collection (300 times) +[2024-06-10 18:13:57,009][46990] Updated weights for policy 0, policy_version 1370 (0.0024) +[2024-06-10 18:13:58,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43932.0). Total num frames: 22511616. Throughput: 0: 43694.3. Samples: 22644000. Policy #0 lag: (min: 1.0, avg: 10.3, max: 21.0) +[2024-06-10 18:13:58,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:14:01,286][46990] Updated weights for policy 0, policy_version 1380 (0.0025) +[2024-06-10 18:14:03,239][46753] Fps is (10 sec: 45881.3, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 22708224. Throughput: 0: 43645.4. Samples: 22777140. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 18:14:03,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:14:04,380][46990] Updated weights for policy 0, policy_version 1390 (0.0047) +[2024-06-10 18:14:08,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43693.4, 300 sec: 43709.2). Total num frames: 22904832. Throughput: 0: 43766.8. Samples: 23040040. Policy #0 lag: (min: 1.0, avg: 11.9, max: 21.0) +[2024-06-10 18:14:08,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:08,985][46990] Updated weights for policy 0, policy_version 1400 (0.0039) +[2024-06-10 18:14:11,655][46990] Updated weights for policy 0, policy_version 1410 (0.0042) +[2024-06-10 18:14:13,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 23166976. Throughput: 0: 44002.8. Samples: 23305260. Policy #0 lag: (min: 0.0, avg: 11.0, max: 20.0) +[2024-06-10 18:14:13,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:16,510][46990] Updated weights for policy 0, policy_version 1420 (0.0041) +[2024-06-10 18:14:18,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43691.1, 300 sec: 43764.7). Total num frames: 23363584. Throughput: 0: 43955.6. Samples: 23440020. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:14:18,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:19,069][46990] Updated weights for policy 0, policy_version 1430 (0.0041) +[2024-06-10 18:14:23,240][46753] Fps is (10 sec: 40959.3, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 23576576. Throughput: 0: 43687.5. Samples: 23696020. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:14:23,243][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:23,248][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000001439_23576576.pth... 
+[2024-06-10 18:14:23,299][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000000796_13041664.pth +[2024-06-10 18:14:23,704][46990] Updated weights for policy 0, policy_version 1440 (0.0049) +[2024-06-10 18:14:26,824][46990] Updated weights for policy 0, policy_version 1450 (0.0037) +[2024-06-10 18:14:28,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 23822336. Throughput: 0: 43678.2. Samples: 23956800. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 18:14:28,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:30,893][46990] Updated weights for policy 0, policy_version 1460 (0.0032) +[2024-06-10 18:14:33,240][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 24018944. Throughput: 0: 43649.6. Samples: 24091140. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:14:33,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:34,395][46990] Updated weights for policy 0, policy_version 1470 (0.0029) +[2024-06-10 18:14:38,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43417.9, 300 sec: 43709.2). Total num frames: 24215552. Throughput: 0: 43844.8. Samples: 24357340. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 18:14:38,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:38,593][46990] Updated weights for policy 0, policy_version 1480 (0.0038) +[2024-06-10 18:14:41,709][46990] Updated weights for policy 0, policy_version 1490 (0.0036) +[2024-06-10 18:14:43,244][46753] Fps is (10 sec: 45855.2, 60 sec: 43687.4, 300 sec: 43819.6). Total num frames: 24477696. Throughput: 0: 43788.9. Samples: 24614700. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 18:14:43,245][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:14:43,294][46970] Saving new best policy, reward=0.003! +[2024-06-10 18:14:46,037][46990] Updated weights for policy 0, policy_version 1500 (0.0035) +[2024-06-10 18:14:48,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.7, 300 sec: 43820.9). Total num frames: 24674304. Throughput: 0: 43844.5. Samples: 24750140. Policy #0 lag: (min: 0.0, avg: 7.2, max: 20.0) +[2024-06-10 18:14:48,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:48,995][46990] Updated weights for policy 0, policy_version 1510 (0.0034) +[2024-06-10 18:14:53,178][46990] Updated weights for policy 0, policy_version 1520 (0.0039) +[2024-06-10 18:14:53,239][46753] Fps is (10 sec: 42617.6, 60 sec: 44237.8, 300 sec: 43820.3). Total num frames: 24903680. Throughput: 0: 43781.7. Samples: 25010220. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:14:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:14:56,752][46990] Updated weights for policy 0, policy_version 1530 (0.0052) +[2024-06-10 18:14:58,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 25133056. Throughput: 0: 43664.9. Samples: 25270180. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 18:14:58,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:01,037][46990] Updated weights for policy 0, policy_version 1540 (0.0034) +[2024-06-10 18:15:03,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 25329664. Throughput: 0: 43762.2. Samples: 25409320. 
Policy #0 lag: (min: 0.0, avg: 8.5, max: 22.0) +[2024-06-10 18:15:03,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:04,052][46990] Updated weights for policy 0, policy_version 1550 (0.0036) +[2024-06-10 18:15:08,239][46753] Fps is (10 sec: 40959.5, 60 sec: 43963.7, 300 sec: 43765.4). Total num frames: 25542656. Throughput: 0: 43894.7. Samples: 25671280. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 18:15:08,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:08,701][46990] Updated weights for policy 0, policy_version 1560 (0.0038) +[2024-06-10 18:15:11,402][46990] Updated weights for policy 0, policy_version 1570 (0.0033) +[2024-06-10 18:15:13,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 25788416. Throughput: 0: 43845.0. Samples: 25929820. Policy #0 lag: (min: 0.0, avg: 11.9, max: 24.0) +[2024-06-10 18:15:13,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:14,420][46970] Signal inference workers to stop experience collection... (350 times) +[2024-06-10 18:15:14,421][46970] Signal inference workers to resume experience collection... (350 times) +[2024-06-10 18:15:14,443][46990] InferenceWorker_p0-w0: stopping experience collection (350 times) +[2024-06-10 18:15:14,443][46990] InferenceWorker_p0-w0: resuming experience collection (350 times) +[2024-06-10 18:15:16,165][46990] Updated weights for policy 0, policy_version 1580 (0.0043) +[2024-06-10 18:15:18,240][46753] Fps is (10 sec: 45873.3, 60 sec: 43963.4, 300 sec: 43931.4). Total num frames: 26001408. Throughput: 0: 43875.3. Samples: 26065540. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 18:15:18,241][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:18,793][46990] Updated weights for policy 0, policy_version 1590 (0.0031) +[2024-06-10 18:15:23,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 26198016. Throughput: 0: 43729.3. Samples: 26325160. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:15:23,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:15:23,801][46990] Updated weights for policy 0, policy_version 1600 (0.0038) +[2024-06-10 18:15:26,278][46990] Updated weights for policy 0, policy_version 1610 (0.0035) +[2024-06-10 18:15:28,240][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.4, 300 sec: 43820.2). Total num frames: 26443776. Throughput: 0: 43809.3. Samples: 26585940. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 18:15:28,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:15:30,865][46990] Updated weights for policy 0, policy_version 1620 (0.0035) +[2024-06-10 18:15:33,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 26640384. Throughput: 0: 43848.9. Samples: 26723340. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 18:15:33,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:34,035][46990] Updated weights for policy 0, policy_version 1630 (0.0045) +[2024-06-10 18:15:38,239][46753] Fps is (10 sec: 40961.9, 60 sec: 43963.8, 300 sec: 43764.8). Total num frames: 26853376. Throughput: 0: 43909.4. Samples: 26986140. 
Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 18:15:38,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:38,633][46990] Updated weights for policy 0, policy_version 1640 (0.0050) +[2024-06-10 18:15:41,281][46990] Updated weights for policy 0, policy_version 1650 (0.0039) +[2024-06-10 18:15:43,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43967.0, 300 sec: 43931.3). Total num frames: 27115520. Throughput: 0: 43891.5. Samples: 27245300. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 18:15:43,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:15:46,138][46990] Updated weights for policy 0, policy_version 1660 (0.0037) +[2024-06-10 18:15:48,240][46753] Fps is (10 sec: 47511.3, 60 sec: 44236.4, 300 sec: 43931.3). Total num frames: 27328512. Throughput: 0: 43942.7. Samples: 27386760. Policy #0 lag: (min: 0.0, avg: 8.6, max: 22.0) +[2024-06-10 18:15:48,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:15:48,718][46990] Updated weights for policy 0, policy_version 1670 (0.0033) +[2024-06-10 18:15:53,239][46753] Fps is (10 sec: 37683.3, 60 sec: 43144.6, 300 sec: 43709.2). Total num frames: 27492352. Throughput: 0: 43862.3. Samples: 27645080. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:15:53,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:15:53,550][46990] Updated weights for policy 0, policy_version 1680 (0.0024) +[2024-06-10 18:15:56,323][46990] Updated weights for policy 0, policy_version 1690 (0.0026) +[2024-06-10 18:15:58,239][46753] Fps is (10 sec: 44239.0, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 27770880. Throughput: 0: 43822.2. Samples: 27901820. Policy #0 lag: (min: 0.0, avg: 9.6, max: 24.0) +[2024-06-10 18:15:58,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:00,831][46990] Updated weights for policy 0, policy_version 1700 (0.0030) +[2024-06-10 18:16:03,240][46753] Fps is (10 sec: 47513.0, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 27967488. Throughput: 0: 43864.8. Samples: 28039440. Policy #0 lag: (min: 0.0, avg: 9.4, max: 23.0) +[2024-06-10 18:16:03,244][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:03,746][46990] Updated weights for policy 0, policy_version 1710 (0.0050) +[2024-06-10 18:16:08,240][46753] Fps is (10 sec: 37681.1, 60 sec: 43417.3, 300 sec: 43709.1). Total num frames: 28147712. Throughput: 0: 43911.1. Samples: 28301180. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:16:08,241][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:08,598][46990] Updated weights for policy 0, policy_version 1720 (0.0038) +[2024-06-10 18:16:11,030][46990] Updated weights for policy 0, policy_version 1730 (0.0025) +[2024-06-10 18:16:13,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 28426240. Throughput: 0: 43732.8. Samples: 28553900. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 18:16:13,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:16:16,315][46990] Updated weights for policy 0, policy_version 1740 (0.0035) +[2024-06-10 18:16:18,239][46753] Fps is (10 sec: 49154.4, 60 sec: 43964.1, 300 sec: 43932.0). Total num frames: 28639232. Throughput: 0: 43982.1. Samples: 28702540. 
Policy #0 lag: (min: 0.0, avg: 7.4, max: 21.0) +[2024-06-10 18:16:18,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:18,613][46990] Updated weights for policy 0, policy_version 1750 (0.0028) +[2024-06-10 18:16:23,240][46753] Fps is (10 sec: 39319.5, 60 sec: 43690.3, 300 sec: 43709.1). Total num frames: 28819456. Throughput: 0: 43790.1. Samples: 28956720. Policy #0 lag: (min: 0.0, avg: 8.1, max: 21.0) +[2024-06-10 18:16:23,241][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:23,255][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000001759_28819456.pth... +[2024-06-10 18:16:23,323][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000001118_18317312.pth +[2024-06-10 18:16:23,483][46990] Updated weights for policy 0, policy_version 1760 (0.0036) +[2024-06-10 18:16:26,171][46990] Updated weights for policy 0, policy_version 1770 (0.0032) +[2024-06-10 18:16:28,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43691.1, 300 sec: 43820.3). Total num frames: 29065216. Throughput: 0: 43918.8. Samples: 29221640. Policy #0 lag: (min: 0.0, avg: 12.3, max: 22.0) +[2024-06-10 18:16:28,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:30,860][46990] Updated weights for policy 0, policy_version 1780 (0.0037) +[2024-06-10 18:16:33,240][46753] Fps is (10 sec: 47515.5, 60 sec: 44236.6, 300 sec: 43931.3). Total num frames: 29294592. Throughput: 0: 43758.9. Samples: 29355900. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 18:16:33,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:33,922][46990] Updated weights for policy 0, policy_version 1790 (0.0056) +[2024-06-10 18:16:38,239][46753] Fps is (10 sec: 39321.0, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 29458432. Throughput: 0: 43728.4. Samples: 29612860. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 18:16:38,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:38,730][46990] Updated weights for policy 0, policy_version 1800 (0.0038) +[2024-06-10 18:16:40,530][46970] Signal inference workers to stop experience collection... (400 times) +[2024-06-10 18:16:40,549][46990] InferenceWorker_p0-w0: stopping experience collection (400 times) +[2024-06-10 18:16:40,634][46970] Signal inference workers to resume experience collection... (400 times) +[2024-06-10 18:16:40,634][46990] InferenceWorker_p0-w0: resuming experience collection (400 times) +[2024-06-10 18:16:41,152][46990] Updated weights for policy 0, policy_version 1810 (0.0044) +[2024-06-10 18:16:43,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43417.6, 300 sec: 43875.8). Total num frames: 29720576. Throughput: 0: 43802.6. Samples: 29872940. Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 18:16:43,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:46,180][46990] Updated weights for policy 0, policy_version 1820 (0.0030) +[2024-06-10 18:16:48,239][46753] Fps is (10 sec: 49152.3, 60 sec: 43691.0, 300 sec: 43931.3). Total num frames: 29949952. Throughput: 0: 43926.3. Samples: 30016120. Policy #0 lag: (min: 0.0, avg: 8.2, max: 22.0) +[2024-06-10 18:16:48,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:48,545][46990] Updated weights for policy 0, policy_version 1830 (0.0045) +[2024-06-10 18:16:53,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 30130176. Throughput: 0: 43749.0. Samples: 30269860. 
Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 18:16:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:16:53,372][46990] Updated weights for policy 0, policy_version 1840 (0.0038) +[2024-06-10 18:16:56,187][46990] Updated weights for policy 0, policy_version 1850 (0.0041) +[2024-06-10 18:16:58,242][46753] Fps is (10 sec: 42586.7, 60 sec: 43415.6, 300 sec: 43819.9). Total num frames: 30375936. Throughput: 0: 43911.2. Samples: 30530020. Policy #0 lag: (min: 0.0, avg: 12.1, max: 21.0) +[2024-06-10 18:16:58,243][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:17:00,679][46990] Updated weights for policy 0, policy_version 1860 (0.0046) +[2024-06-10 18:17:03,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43417.6, 300 sec: 43875.8). Total num frames: 30572544. Throughput: 0: 43674.1. Samples: 30667880. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:17:03,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:17:03,761][46990] Updated weights for policy 0, policy_version 1870 (0.0033) +[2024-06-10 18:17:08,240][46753] Fps is (10 sec: 39331.8, 60 sec: 43690.9, 300 sec: 43598.1). Total num frames: 30769152. Throughput: 0: 43666.2. Samples: 30921680. Policy #0 lag: (min: 0.0, avg: 10.1, max: 24.0) +[2024-06-10 18:17:08,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:17:08,880][46990] Updated weights for policy 0, policy_version 1880 (0.0029) +[2024-06-10 18:17:11,047][46990] Updated weights for policy 0, policy_version 1890 (0.0027) +[2024-06-10 18:17:13,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43417.6, 300 sec: 43820.2). Total num frames: 31031296. Throughput: 0: 43541.2. Samples: 31181000. Policy #0 lag: (min: 0.0, avg: 12.3, max: 22.0) +[2024-06-10 18:17:13,240][46753] Avg episode reward: [(0, '0.000')] +[2024-06-10 18:17:16,242][46990] Updated weights for policy 0, policy_version 1900 (0.0038) +[2024-06-10 18:17:18,240][46753] Fps is (10 sec: 49151.8, 60 sec: 43690.6, 300 sec: 43932.0). Total num frames: 31260672. Throughput: 0: 43694.7. Samples: 31322160. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 18:17:18,250][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:17:18,613][46990] Updated weights for policy 0, policy_version 1910 (0.0041) +[2024-06-10 18:17:23,240][46753] Fps is (10 sec: 40958.1, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 31440896. Throughput: 0: 43728.9. Samples: 31580680. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 18:17:23,241][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:17:23,566][46990] Updated weights for policy 0, policy_version 1920 (0.0038) +[2024-06-10 18:17:26,374][46990] Updated weights for policy 0, policy_version 1930 (0.0034) +[2024-06-10 18:17:28,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43963.6, 300 sec: 43876.5). Total num frames: 31703040. Throughput: 0: 43639.9. Samples: 31836740. Policy #0 lag: (min: 1.0, avg: 10.5, max: 22.0) +[2024-06-10 18:17:28,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:17:31,153][46990] Updated weights for policy 0, policy_version 1940 (0.0034) +[2024-06-10 18:17:33,240][46753] Fps is (10 sec: 44238.0, 60 sec: 43144.5, 300 sec: 43820.2). Total num frames: 31883264. Throughput: 0: 43555.7. Samples: 31976140. 
Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 18:17:33,244][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:17:34,035][46990] Updated weights for policy 0, policy_version 1950 (0.0031) +[2024-06-10 18:17:38,244][46753] Fps is (10 sec: 37666.5, 60 sec: 43687.4, 300 sec: 43597.4). Total num frames: 32079872. Throughput: 0: 43544.9. Samples: 32229580. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 18:17:38,244][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:17:38,861][46990] Updated weights for policy 0, policy_version 1960 (0.0032) +[2024-06-10 18:17:41,434][46990] Updated weights for policy 0, policy_version 1970 (0.0032) +[2024-06-10 18:17:43,239][46753] Fps is (10 sec: 44238.1, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 32325632. Throughput: 0: 43598.2. Samples: 32491820. Policy #0 lag: (min: 1.0, avg: 7.8, max: 21.0) +[2024-06-10 18:17:43,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:17:46,174][46990] Updated weights for policy 0, policy_version 1980 (0.0034) +[2024-06-10 18:17:48,240][46753] Fps is (10 sec: 49173.5, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 32571392. Throughput: 0: 43645.8. Samples: 32631940. Policy #0 lag: (min: 0.0, avg: 9.0, max: 22.0) +[2024-06-10 18:17:48,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:17:48,990][46990] Updated weights for policy 0, policy_version 1990 (0.0041) +[2024-06-10 18:17:50,522][46970] Signal inference workers to stop experience collection... (450 times) +[2024-06-10 18:17:50,543][46990] InferenceWorker_p0-w0: stopping experience collection (450 times) +[2024-06-10 18:17:50,632][46970] Signal inference workers to resume experience collection... (450 times) +[2024-06-10 18:17:50,632][46990] InferenceWorker_p0-w0: resuming experience collection (450 times) +[2024-06-10 18:17:53,244][46753] Fps is (10 sec: 42578.9, 60 sec: 43687.3, 300 sec: 43597.4). Total num frames: 32751616. Throughput: 0: 43749.1. Samples: 32890580. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 18:17:53,245][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:17:53,423][46990] Updated weights for policy 0, policy_version 2000 (0.0038) +[2024-06-10 18:17:56,641][46990] Updated weights for policy 0, policy_version 2010 (0.0045) +[2024-06-10 18:17:58,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43965.7, 300 sec: 43875.8). Total num frames: 33013760. Throughput: 0: 43672.5. Samples: 33146260. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 18:17:58,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:18:01,334][46990] Updated weights for policy 0, policy_version 2020 (0.0031) +[2024-06-10 18:18:03,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43690.8, 300 sec: 43765.3). Total num frames: 33193984. Throughput: 0: 43602.9. Samples: 33284280. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 18:18:03,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:18:04,347][46990] Updated weights for policy 0, policy_version 2030 (0.0031) +[2024-06-10 18:18:08,239][46753] Fps is (10 sec: 37683.2, 60 sec: 43690.8, 300 sec: 43542.6). Total num frames: 33390592. Throughput: 0: 43712.5. Samples: 33547720. 
Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 18:18:08,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:18:08,566][46990] Updated weights for policy 0, policy_version 2040 (0.0030) +[2024-06-10 18:18:11,734][46990] Updated weights for policy 0, policy_version 2050 (0.0031) +[2024-06-10 18:18:13,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43690.7, 300 sec: 43764.8). Total num frames: 33652736. Throughput: 0: 43602.7. Samples: 33798860. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 18:18:13,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:18:16,081][46990] Updated weights for policy 0, policy_version 2060 (0.0036) +[2024-06-10 18:18:18,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 33882112. Throughput: 0: 43614.0. Samples: 33938760. Policy #0 lag: (min: 0.0, avg: 12.8, max: 23.0) +[2024-06-10 18:18:18,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:18:18,950][46990] Updated weights for policy 0, policy_version 2070 (0.0040) +[2024-06-10 18:18:23,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.9, 300 sec: 43598.1). Total num frames: 34062336. Throughput: 0: 43783.8. Samples: 34199660. Policy #0 lag: (min: 0.0, avg: 12.6, max: 22.0) +[2024-06-10 18:18:23,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:18:23,248][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000002079_34062336.pth... +[2024-06-10 18:18:23,294][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000001439_23576576.pth +[2024-06-10 18:18:23,544][46990] Updated weights for policy 0, policy_version 2080 (0.0039) +[2024-06-10 18:18:26,795][46990] Updated weights for policy 0, policy_version 2090 (0.0028) +[2024-06-10 18:18:28,244][46753] Fps is (10 sec: 44217.1, 60 sec: 43687.4, 300 sec: 43875.1). Total num frames: 34324480. Throughput: 0: 43633.3. Samples: 34455520. Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 18:18:28,245][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:18:31,001][46990] Updated weights for policy 0, policy_version 2100 (0.0042) +[2024-06-10 18:18:33,240][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 34504704. Throughput: 0: 43513.3. Samples: 34590040. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 18:18:33,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:18:34,411][46990] Updated weights for policy 0, policy_version 2110 (0.0022) +[2024-06-10 18:18:38,239][46753] Fps is (10 sec: 39339.4, 60 sec: 43967.0, 300 sec: 43598.1). Total num frames: 34717696. Throughput: 0: 43571.5. Samples: 34851100. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 18:18:38,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:18:38,346][46990] Updated weights for policy 0, policy_version 2120 (0.0029) +[2024-06-10 18:18:41,807][46990] Updated weights for policy 0, policy_version 2130 (0.0045) +[2024-06-10 18:18:43,239][46753] Fps is (10 sec: 44237.9, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 34947072. Throughput: 0: 43668.0. Samples: 35111320. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:18:43,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:18:46,215][46990] Updated weights for policy 0, policy_version 2140 (0.0036) +[2024-06-10 18:18:48,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43417.7, 300 sec: 43820.5). Total num frames: 35176448. Throughput: 0: 43510.7. Samples: 35242260. 
Policy #0 lag: (min: 0.0, avg: 9.0, max: 22.0) +[2024-06-10 18:18:48,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:18:48,975][46990] Updated weights for policy 0, policy_version 2150 (0.0026) +[2024-06-10 18:18:53,240][46753] Fps is (10 sec: 40959.2, 60 sec: 43420.8, 300 sec: 43542.5). Total num frames: 35356672. Throughput: 0: 43435.0. Samples: 35502300. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 18:18:53,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:18:53,422][46990] Updated weights for policy 0, policy_version 2160 (0.0022) +[2024-06-10 18:18:56,662][46990] Updated weights for policy 0, policy_version 2170 (0.0032) +[2024-06-10 18:18:58,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 35618816. Throughput: 0: 43773.7. Samples: 35768680. Policy #0 lag: (min: 0.0, avg: 10.4, max: 20.0) +[2024-06-10 18:18:58,249][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:19:01,089][46990] Updated weights for policy 0, policy_version 2180 (0.0031) +[2024-06-10 18:19:03,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 35815424. Throughput: 0: 43576.0. Samples: 35899680. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 18:19:03,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:19:03,460][46970] Signal inference workers to stop experience collection... (500 times) +[2024-06-10 18:19:03,499][46990] InferenceWorker_p0-w0: stopping experience collection (500 times) +[2024-06-10 18:19:03,518][46970] Signal inference workers to resume experience collection... (500 times) +[2024-06-10 18:19:03,520][46990] InferenceWorker_p0-w0: resuming experience collection (500 times) +[2024-06-10 18:19:04,221][46990] Updated weights for policy 0, policy_version 2190 (0.0038) +[2024-06-10 18:19:08,239][46753] Fps is (10 sec: 40960.8, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 36028416. Throughput: 0: 43559.8. Samples: 36159840. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 18:19:08,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:19:08,348][46990] Updated weights for policy 0, policy_version 2200 (0.0041) +[2024-06-10 18:19:11,585][46990] Updated weights for policy 0, policy_version 2210 (0.0029) +[2024-06-10 18:19:13,244][46753] Fps is (10 sec: 44217.2, 60 sec: 43414.4, 300 sec: 43708.5). Total num frames: 36257792. Throughput: 0: 43777.4. Samples: 36425500. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 18:19:13,244][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:19:16,122][46990] Updated weights for policy 0, policy_version 2220 (0.0036) +[2024-06-10 18:19:18,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 36487168. Throughput: 0: 43701.9. Samples: 36556620. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:19:18,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:19:18,771][46990] Updated weights for policy 0, policy_version 2230 (0.0031) +[2024-06-10 18:19:23,240][46753] Fps is (10 sec: 42617.1, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 36683776. Throughput: 0: 43684.8. Samples: 36816920. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 18:19:23,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:19:23,250][46990] Updated weights for policy 0, policy_version 2240 (0.0030) +[2024-06-10 18:19:26,315][46990] Updated weights for policy 0, policy_version 2250 (0.0031) +[2024-06-10 18:19:28,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43420.9, 300 sec: 43764.7). Total num frames: 36929536. Throughput: 0: 43738.2. Samples: 37079540. Policy #0 lag: (min: 0.0, avg: 10.4, max: 20.0) +[2024-06-10 18:19:28,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:19:30,470][46990] Updated weights for policy 0, policy_version 2260 (0.0036) +[2024-06-10 18:19:33,240][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.8, 300 sec: 43820.2). Total num frames: 37142528. Throughput: 0: 43806.5. Samples: 37213560. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 18:19:33,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:19:33,835][46990] Updated weights for policy 0, policy_version 2270 (0.0034) +[2024-06-10 18:19:38,102][46990] Updated weights for policy 0, policy_version 2280 (0.0042) +[2024-06-10 18:19:38,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.7, 300 sec: 43654.3). Total num frames: 37355520. Throughput: 0: 43833.0. Samples: 37474780. Policy #0 lag: (min: 0.0, avg: 10.3, max: 23.0) +[2024-06-10 18:19:38,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:19:41,343][46990] Updated weights for policy 0, policy_version 2290 (0.0034) +[2024-06-10 18:19:43,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 37568512. Throughput: 0: 43718.3. Samples: 37736000. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:19:43,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:19:45,823][46990] Updated weights for policy 0, policy_version 2300 (0.0039) +[2024-06-10 18:19:48,240][46753] Fps is (10 sec: 45874.7, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 37814272. Throughput: 0: 43690.2. Samples: 37865740. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 18:19:48,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:19:48,676][46990] Updated weights for policy 0, policy_version 2310 (0.0035) +[2024-06-10 18:19:53,018][46990] Updated weights for policy 0, policy_version 2320 (0.0044) +[2024-06-10 18:19:53,239][46753] Fps is (10 sec: 44236.5, 60 sec: 44236.8, 300 sec: 43653.6). Total num frames: 38010880. Throughput: 0: 43760.3. Samples: 38129060. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:19:53,241][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:19:56,336][46990] Updated weights for policy 0, policy_version 2330 (0.0034) +[2024-06-10 18:19:58,241][46753] Fps is (10 sec: 42592.4, 60 sec: 43689.6, 300 sec: 43764.5). Total num frames: 38240256. Throughput: 0: 43698.9. Samples: 38391820. Policy #0 lag: (min: 0.0, avg: 10.9, max: 20.0) +[2024-06-10 18:19:58,242][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:20:00,418][46990] Updated weights for policy 0, policy_version 2340 (0.0039) +[2024-06-10 18:20:03,240][46753] Fps is (10 sec: 45874.9, 60 sec: 44236.7, 300 sec: 43820.2). Total num frames: 38469632. Throughput: 0: 43735.4. Samples: 38524720. 
Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 18:20:03,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:20:03,567][46990] Updated weights for policy 0, policy_version 2350 (0.0037) +[2024-06-10 18:20:08,240][46753] Fps is (10 sec: 40965.6, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 38649856. Throughput: 0: 43802.2. Samples: 38788020. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 18:20:08,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:20:08,366][46990] Updated weights for policy 0, policy_version 2360 (0.0041) +[2024-06-10 18:20:11,279][46990] Updated weights for policy 0, policy_version 2370 (0.0038) +[2024-06-10 18:20:13,240][46753] Fps is (10 sec: 42598.6, 60 sec: 43966.9, 300 sec: 43709.2). Total num frames: 38895616. Throughput: 0: 43737.7. Samples: 39047740. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 18:20:13,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:20:15,818][46990] Updated weights for policy 0, policy_version 2380 (0.0038) +[2024-06-10 18:20:18,244][46753] Fps is (10 sec: 47493.1, 60 sec: 43960.5, 300 sec: 43819.6). Total num frames: 39124992. Throughput: 0: 43735.8. Samples: 39181860. Policy #0 lag: (min: 1.0, avg: 8.6, max: 20.0) +[2024-06-10 18:20:18,245][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:20:18,632][46990] Updated weights for policy 0, policy_version 2390 (0.0041) +[2024-06-10 18:20:23,239][46990] Updated weights for policy 0, policy_version 2400 (0.0036) +[2024-06-10 18:20:23,240][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 39321600. Throughput: 0: 43701.3. Samples: 39441340. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 18:20:23,241][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:20:23,258][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000002400_39321600.pth... +[2024-06-10 18:20:23,313][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000001759_28819456.pth +[2024-06-10 18:20:26,253][46990] Updated weights for policy 0, policy_version 2410 (0.0033) +[2024-06-10 18:20:27,193][46970] Signal inference workers to stop experience collection... (550 times) +[2024-06-10 18:20:27,194][46970] Signal inference workers to resume experience collection... (550 times) +[2024-06-10 18:20:27,203][46990] InferenceWorker_p0-w0: stopping experience collection (550 times) +[2024-06-10 18:20:27,212][46990] InferenceWorker_p0-w0: resuming experience collection (550 times) +[2024-06-10 18:20:28,239][46753] Fps is (10 sec: 42617.2, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 39550976. Throughput: 0: 43639.5. Samples: 39699780. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 18:20:28,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:20:30,750][46990] Updated weights for policy 0, policy_version 2420 (0.0028) +[2024-06-10 18:20:33,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.8, 300 sec: 43820.2). Total num frames: 39780352. Throughput: 0: 43821.0. Samples: 39837680. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 18:20:33,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:20:33,441][46990] Updated weights for policy 0, policy_version 2430 (0.0034) +[2024-06-10 18:20:38,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 39960576. Throughput: 0: 43762.8. Samples: 40098380. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 18:20:38,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:20:38,371][46990] Updated weights for policy 0, policy_version 2440 (0.0023) +[2024-06-10 18:20:41,244][46990] Updated weights for policy 0, policy_version 2450 (0.0036) +[2024-06-10 18:20:43,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 40206336. Throughput: 0: 43722.9. Samples: 40359280. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:20:43,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:20:46,140][46990] Updated weights for policy 0, policy_version 2460 (0.0031) +[2024-06-10 18:20:48,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 40435712. Throughput: 0: 43819.4. Samples: 40496580. Policy #0 lag: (min: 0.0, avg: 7.9, max: 21.0) +[2024-06-10 18:20:48,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:20:48,693][46990] Updated weights for policy 0, policy_version 2470 (0.0034) +[2024-06-10 18:20:53,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43144.7, 300 sec: 43487.0). Total num frames: 40599552. Throughput: 0: 43685.6. Samples: 40753860. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 18:20:53,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:20:53,419][46990] Updated weights for policy 0, policy_version 2480 (0.0047) +[2024-06-10 18:20:56,393][46990] Updated weights for policy 0, policy_version 2490 (0.0042) +[2024-06-10 18:20:58,240][46753] Fps is (10 sec: 42597.4, 60 sec: 43691.7, 300 sec: 43709.2). Total num frames: 40861696. Throughput: 0: 43651.5. Samples: 41012060. Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 18:20:58,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:21:00,635][46990] Updated weights for policy 0, policy_version 2500 (0.0042) +[2024-06-10 18:21:03,239][46753] Fps is (10 sec: 49151.3, 60 sec: 43690.8, 300 sec: 43875.9). Total num frames: 41091072. Throughput: 0: 43697.7. Samples: 41148060. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 18:21:03,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:21:03,249][46970] Saving new best policy, reward=0.005! +[2024-06-10 18:21:04,126][46990] Updated weights for policy 0, policy_version 2510 (0.0036) +[2024-06-10 18:21:08,244][46753] Fps is (10 sec: 40941.8, 60 sec: 43687.5, 300 sec: 43541.9). Total num frames: 41271296. Throughput: 0: 43703.2. Samples: 41408180. Policy #0 lag: (min: 0.0, avg: 11.2, max: 20.0) +[2024-06-10 18:21:08,245][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:21:08,420][46990] Updated weights for policy 0, policy_version 2520 (0.0035) +[2024-06-10 18:21:11,264][46990] Updated weights for policy 0, policy_version 2530 (0.0033) +[2024-06-10 18:21:13,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 41517056. Throughput: 0: 43677.4. Samples: 41665260. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 18:21:13,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:21:16,108][46990] Updated weights for policy 0, policy_version 2540 (0.0037) +[2024-06-10 18:21:18,239][46753] Fps is (10 sec: 47535.2, 60 sec: 43693.9, 300 sec: 43820.3). Total num frames: 41746432. Throughput: 0: 43713.4. Samples: 41804780. 
Policy #0 lag: (min: 0.0, avg: 7.8, max: 22.0) +[2024-06-10 18:21:18,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:21:18,529][46990] Updated weights for policy 0, policy_version 2550 (0.0033) +[2024-06-10 18:21:23,239][46753] Fps is (10 sec: 39321.4, 60 sec: 43144.5, 300 sec: 43542.5). Total num frames: 41910272. Throughput: 0: 43622.1. Samples: 42061380. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:21:23,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:21:23,630][46990] Updated weights for policy 0, policy_version 2560 (0.0022) +[2024-06-10 18:21:26,261][46990] Updated weights for policy 0, policy_version 2570 (0.0047) +[2024-06-10 18:21:28,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 42156032. Throughput: 0: 43604.4. Samples: 42321480. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 18:21:28,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:21:30,846][46990] Updated weights for policy 0, policy_version 2580 (0.0027) +[2024-06-10 18:21:33,239][46753] Fps is (10 sec: 49152.2, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 42401792. Throughput: 0: 43555.4. Samples: 42456580. Policy #0 lag: (min: 0.0, avg: 8.5, max: 21.0) +[2024-06-10 18:21:33,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:21:33,954][46990] Updated weights for policy 0, policy_version 2590 (0.0027) +[2024-06-10 18:21:38,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43417.5, 300 sec: 43542.6). Total num frames: 42565632. Throughput: 0: 43578.9. Samples: 42714920. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:21:38,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:21:38,443][46990] Updated weights for policy 0, policy_version 2600 (0.0032) +[2024-06-10 18:21:40,221][46970] Signal inference workers to stop experience collection... (600 times) +[2024-06-10 18:21:40,277][46970] Signal inference workers to resume experience collection... (600 times) +[2024-06-10 18:21:40,278][46990] InferenceWorker_p0-w0: stopping experience collection (600 times) +[2024-06-10 18:21:40,311][46990] InferenceWorker_p0-w0: resuming experience collection (600 times) +[2024-06-10 18:21:41,226][46990] Updated weights for policy 0, policy_version 2610 (0.0040) +[2024-06-10 18:21:43,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 42811392. Throughput: 0: 43699.3. Samples: 42978520. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 18:21:43,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:21:46,126][46990] Updated weights for policy 0, policy_version 2620 (0.0038) +[2024-06-10 18:21:48,239][46753] Fps is (10 sec: 47514.3, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 43040768. Throughput: 0: 43698.8. Samples: 43114500. Policy #0 lag: (min: 1.0, avg: 10.9, max: 22.0) +[2024-06-10 18:21:48,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:21:48,710][46990] Updated weights for policy 0, policy_version 2630 (0.0027) +[2024-06-10 18:21:53,239][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.6, 300 sec: 43543.0). Total num frames: 43220992. Throughput: 0: 43524.0. Samples: 43366560. 
Policy #0 lag: (min: 1.0, avg: 9.9, max: 20.0) +[2024-06-10 18:21:53,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:21:53,450][46990] Updated weights for policy 0, policy_version 2640 (0.0024) +[2024-06-10 18:21:56,585][46990] Updated weights for policy 0, policy_version 2650 (0.0035) +[2024-06-10 18:21:58,240][46753] Fps is (10 sec: 42597.4, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 43466752. Throughput: 0: 43639.9. Samples: 43629060. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 18:21:58,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:22:00,843][46990] Updated weights for policy 0, policy_version 2660 (0.0035) +[2024-06-10 18:22:03,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43144.6, 300 sec: 43764.7). Total num frames: 43679744. Throughput: 0: 43516.0. Samples: 43763000. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 18:22:03,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:22:03,998][46990] Updated weights for policy 0, policy_version 2670 (0.0040) +[2024-06-10 18:22:08,240][46753] Fps is (10 sec: 40960.2, 60 sec: 43420.8, 300 sec: 43542.6). Total num frames: 43876352. Throughput: 0: 43541.8. Samples: 44020760. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:22:08,244][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:22:08,585][46990] Updated weights for policy 0, policy_version 2680 (0.0041) +[2024-06-10 18:22:11,469][46990] Updated weights for policy 0, policy_version 2690 (0.0032) +[2024-06-10 18:22:13,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 44122112. Throughput: 0: 43661.7. Samples: 44286260. Policy #0 lag: (min: 0.0, avg: 9.1, max: 23.0) +[2024-06-10 18:22:13,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:22:16,246][46990] Updated weights for policy 0, policy_version 2700 (0.0032) +[2024-06-10 18:22:18,240][46753] Fps is (10 sec: 45875.0, 60 sec: 43144.4, 300 sec: 43709.2). Total num frames: 44335104. Throughput: 0: 43759.0. Samples: 44425740. Policy #0 lag: (min: 0.0, avg: 9.2, max: 25.0) +[2024-06-10 18:22:18,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:22:19,059][46990] Updated weights for policy 0, policy_version 2710 (0.0034) +[2024-06-10 18:22:23,240][46753] Fps is (10 sec: 40958.4, 60 sec: 43690.4, 300 sec: 43487.0). Total num frames: 44531712. Throughput: 0: 43582.3. Samples: 44676140. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 18:22:23,249][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:22:23,385][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000002719_44548096.pth... +[2024-06-10 18:22:23,430][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000002079_34062336.pth +[2024-06-10 18:22:23,566][46990] Updated weights for policy 0, policy_version 2720 (0.0037) +[2024-06-10 18:22:26,493][46990] Updated weights for policy 0, policy_version 2730 (0.0043) +[2024-06-10 18:22:28,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 44777472. Throughput: 0: 43505.2. Samples: 44936260. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:22:28,244][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:22:30,934][46990] Updated weights for policy 0, policy_version 2740 (0.0037) +[2024-06-10 18:22:33,244][46753] Fps is (10 sec: 47494.3, 60 sec: 43414.3, 300 sec: 43820.2). Total num frames: 45006848. Throughput: 0: 43536.4. Samples: 45073840. 
Policy #0 lag: (min: 1.0, avg: 9.7, max: 22.0) +[2024-06-10 18:22:33,245][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:22:33,967][46990] Updated weights for policy 0, policy_version 2750 (0.0046) +[2024-06-10 18:22:38,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 45187072. Throughput: 0: 43629.7. Samples: 45329900. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 18:22:38,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:22:38,570][46990] Updated weights for policy 0, policy_version 2760 (0.0038) +[2024-06-10 18:22:38,577][46970] Signal inference workers to stop experience collection... (650 times) +[2024-06-10 18:22:38,577][46970] Signal inference workers to resume experience collection... (650 times) +[2024-06-10 18:22:38,603][46990] InferenceWorker_p0-w0: stopping experience collection (650 times) +[2024-06-10 18:22:38,603][46990] InferenceWorker_p0-w0: resuming experience collection (650 times) +[2024-06-10 18:22:41,638][46990] Updated weights for policy 0, policy_version 2770 (0.0036) +[2024-06-10 18:22:43,239][46753] Fps is (10 sec: 40979.1, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 45416448. Throughput: 0: 43597.1. Samples: 45590920. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 18:22:43,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:22:46,239][46990] Updated weights for policy 0, policy_version 2780 (0.0045) +[2024-06-10 18:22:48,240][46753] Fps is (10 sec: 45874.7, 60 sec: 43417.4, 300 sec: 43709.8). Total num frames: 45645824. Throughput: 0: 43757.6. Samples: 45732100. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 18:22:48,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:22:48,956][46990] Updated weights for policy 0, policy_version 2790 (0.0035) +[2024-06-10 18:22:53,240][46753] Fps is (10 sec: 42597.0, 60 sec: 43690.5, 300 sec: 43487.0). Total num frames: 45842432. Throughput: 0: 43619.4. Samples: 45983640. Policy #0 lag: (min: 0.0, avg: 9.7, max: 23.0) +[2024-06-10 18:22:53,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:22:53,532][46990] Updated weights for policy 0, policy_version 2800 (0.0033) +[2024-06-10 18:22:56,843][46990] Updated weights for policy 0, policy_version 2810 (0.0043) +[2024-06-10 18:22:58,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 46088192. Throughput: 0: 43522.3. Samples: 46244760. Policy #0 lag: (min: 0.0, avg: 11.6, max: 24.0) +[2024-06-10 18:22:58,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:23:00,926][46990] Updated weights for policy 0, policy_version 2820 (0.0039) +[2024-06-10 18:23:03,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 46284800. Throughput: 0: 43495.2. Samples: 46383020. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:23:03,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:23:04,085][46990] Updated weights for policy 0, policy_version 2830 (0.0034) +[2024-06-10 18:23:08,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 46497792. Throughput: 0: 43507.6. Samples: 46633960. 
Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 18:23:08,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:23:08,538][46990] Updated weights for policy 0, policy_version 2840 (0.0029) +[2024-06-10 18:23:11,752][46990] Updated weights for policy 0, policy_version 2850 (0.0025) +[2024-06-10 18:23:13,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 46743552. Throughput: 0: 43626.3. Samples: 46899440. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 18:23:13,240][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:23:16,259][46990] Updated weights for policy 0, policy_version 2860 (0.0034) +[2024-06-10 18:23:18,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 46940160. Throughput: 0: 43639.4. Samples: 47037420. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 18:23:18,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:23:19,337][46990] Updated weights for policy 0, policy_version 2870 (0.0038) +[2024-06-10 18:23:23,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43691.1, 300 sec: 43487.7). Total num frames: 47153152. Throughput: 0: 43663.2. Samples: 47294740. Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 18:23:23,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:23:23,827][46990] Updated weights for policy 0, policy_version 2880 (0.0035) +[2024-06-10 18:23:26,739][46990] Updated weights for policy 0, policy_version 2890 (0.0033) +[2024-06-10 18:23:28,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 47398912. Throughput: 0: 43554.1. Samples: 47550860. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 18:23:28,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:23:31,108][46990] Updated weights for policy 0, policy_version 2900 (0.0031) +[2024-06-10 18:23:33,240][46753] Fps is (10 sec: 44235.8, 60 sec: 43147.7, 300 sec: 43653.6). Total num frames: 47595520. Throughput: 0: 43476.9. Samples: 47688560. Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 18:23:33,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:23:34,261][46990] Updated weights for policy 0, policy_version 2910 (0.0041) +[2024-06-10 18:23:38,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 47824896. Throughput: 0: 43549.9. Samples: 47943380. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 18:23:38,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:23:38,354][46990] Updated weights for policy 0, policy_version 2920 (0.0044) +[2024-06-10 18:23:41,933][46990] Updated weights for policy 0, policy_version 2930 (0.0041) +[2024-06-10 18:23:43,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43963.6, 300 sec: 43653.6). Total num frames: 48054272. Throughput: 0: 43517.7. Samples: 48203060. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 18:23:43,252][46753] Avg episode reward: [(0, '0.001')] +[2024-06-10 18:23:46,111][46990] Updated weights for policy 0, policy_version 2940 (0.0033) +[2024-06-10 18:23:48,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43144.7, 300 sec: 43653.7). Total num frames: 48234496. Throughput: 0: 43437.4. Samples: 48337700. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:23:48,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:23:49,653][46990] Updated weights for policy 0, policy_version 2950 (0.0030) +[2024-06-10 18:23:53,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 48463872. Throughput: 0: 43677.2. Samples: 48599440. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 18:23:53,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:23:53,627][46990] Updated weights for policy 0, policy_version 2960 (0.0047) +[2024-06-10 18:23:56,877][46990] Updated weights for policy 0, policy_version 2970 (0.0051) +[2024-06-10 18:23:58,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 48709632. Throughput: 0: 43592.9. Samples: 48861120. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 18:23:58,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:24:00,201][46970] Signal inference workers to stop experience collection... (700 times) +[2024-06-10 18:24:00,244][46990] InferenceWorker_p0-w0: stopping experience collection (700 times) +[2024-06-10 18:24:00,310][46970] Signal inference workers to resume experience collection... (700 times) +[2024-06-10 18:24:00,310][46990] InferenceWorker_p0-w0: resuming experience collection (700 times) +[2024-06-10 18:24:00,863][46990] Updated weights for policy 0, policy_version 2980 (0.0032) +[2024-06-10 18:24:03,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 48906240. Throughput: 0: 43464.5. Samples: 48993320. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 18:24:03,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:24:04,159][46990] Updated weights for policy 0, policy_version 2990 (0.0027) +[2024-06-10 18:24:08,134][46990] Updated weights for policy 0, policy_version 3000 (0.0060) +[2024-06-10 18:24:08,239][46753] Fps is (10 sec: 44236.4, 60 sec: 44236.8, 300 sec: 43709.8). Total num frames: 49152000. Throughput: 0: 43570.6. Samples: 49255420. Policy #0 lag: (min: 1.0, avg: 12.0, max: 23.0) +[2024-06-10 18:24:08,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:24:11,903][46990] Updated weights for policy 0, policy_version 3010 (0.0027) +[2024-06-10 18:24:13,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 49364992. Throughput: 0: 43744.0. Samples: 49519340. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 18:24:13,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:24:15,807][46990] Updated weights for policy 0, policy_version 3020 (0.0043) +[2024-06-10 18:24:18,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 49577984. Throughput: 0: 43611.6. Samples: 49651080. Policy #0 lag: (min: 1.0, avg: 8.2, max: 20.0) +[2024-06-10 18:24:18,248][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:24:19,238][46990] Updated weights for policy 0, policy_version 3030 (0.0035) +[2024-06-10 18:24:23,240][46753] Fps is (10 sec: 42597.4, 60 sec: 43963.5, 300 sec: 43598.1). Total num frames: 49790976. Throughput: 0: 43855.4. Samples: 49916880. 
Policy #0 lag: (min: 0.0, avg: 12.5, max: 23.0) +[2024-06-10 18:24:23,249][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:24:23,251][46990] Updated weights for policy 0, policy_version 3040 (0.0034) +[2024-06-10 18:24:23,264][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003040_49807360.pth... +[2024-06-10 18:24:23,335][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000002400_39321600.pth +[2024-06-10 18:24:26,900][46990] Updated weights for policy 0, policy_version 3050 (0.0037) +[2024-06-10 18:24:28,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 50020352. Throughput: 0: 43895.6. Samples: 50178360. Policy #0 lag: (min: 0.0, avg: 11.9, max: 24.0) +[2024-06-10 18:24:28,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:24:30,884][46990] Updated weights for policy 0, policy_version 3060 (0.0038) +[2024-06-10 18:24:33,239][46753] Fps is (10 sec: 42599.8, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 50216960. Throughput: 0: 43711.6. Samples: 50304720. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 18:24:33,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:24:34,422][46990] Updated weights for policy 0, policy_version 3070 (0.0033) +[2024-06-10 18:24:38,167][46990] Updated weights for policy 0, policy_version 3080 (0.0028) +[2024-06-10 18:24:38,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 50462720. Throughput: 0: 43822.2. Samples: 50571440. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 18:24:38,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:24:41,748][46990] Updated weights for policy 0, policy_version 3090 (0.0037) +[2024-06-10 18:24:43,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 50675712. Throughput: 0: 43636.3. Samples: 50824760. Policy #0 lag: (min: 0.0, avg: 9.8, max: 20.0) +[2024-06-10 18:24:43,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:24:45,715][46990] Updated weights for policy 0, policy_version 3100 (0.0047) +[2024-06-10 18:24:48,240][46753] Fps is (10 sec: 40959.7, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 50872320. Throughput: 0: 43755.4. Samples: 50962320. Policy #0 lag: (min: 0.0, avg: 7.7, max: 21.0) +[2024-06-10 18:24:48,243][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:24:49,198][46990] Updated weights for policy 0, policy_version 3110 (0.0038) +[2024-06-10 18:24:53,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.7, 300 sec: 43542.8). Total num frames: 51085312. Throughput: 0: 43703.6. Samples: 51222080. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 18:24:53,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:24:53,484][46990] Updated weights for policy 0, policy_version 3120 (0.0038) +[2024-06-10 18:24:56,994][46990] Updated weights for policy 0, policy_version 3130 (0.0035) +[2024-06-10 18:24:58,239][46753] Fps is (10 sec: 47514.8, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 51347456. Throughput: 0: 43547.6. Samples: 51478980. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 18:24:58,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:25:00,744][46990] Updated weights for policy 0, policy_version 3140 (0.0038) +[2024-06-10 18:25:03,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 51527680. Throughput: 0: 43543.2. Samples: 51610520. 
Policy #0 lag: (min: 1.0, avg: 10.1, max: 21.0) +[2024-06-10 18:25:03,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:25:04,471][46990] Updated weights for policy 0, policy_version 3150 (0.0036) +[2024-06-10 18:25:08,063][46990] Updated weights for policy 0, policy_version 3160 (0.0049) +[2024-06-10 18:25:08,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 51773440. Throughput: 0: 43501.6. Samples: 51874440. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 18:25:08,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:25:11,835][46990] Updated weights for policy 0, policy_version 3170 (0.0047) +[2024-06-10 18:25:13,240][46753] Fps is (10 sec: 45874.7, 60 sec: 43690.6, 300 sec: 43598.7). Total num frames: 51986432. Throughput: 0: 43483.0. Samples: 52135100. Policy #0 lag: (min: 0.0, avg: 9.1, max: 20.0) +[2024-06-10 18:25:13,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:25:15,968][46990] Updated weights for policy 0, policy_version 3180 (0.0045) +[2024-06-10 18:25:18,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 52183040. Throughput: 0: 43602.1. Samples: 52266820. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:25:18,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:25:19,218][46970] Signal inference workers to stop experience collection... (750 times) +[2024-06-10 18:25:19,218][46970] Signal inference workers to resume experience collection... (750 times) +[2024-06-10 18:25:19,258][46990] InferenceWorker_p0-w0: stopping experience collection (750 times) +[2024-06-10 18:25:19,258][46990] InferenceWorker_p0-w0: resuming experience collection (750 times) +[2024-06-10 18:25:19,364][46990] Updated weights for policy 0, policy_version 3190 (0.0038) +[2024-06-10 18:25:23,240][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 52412416. Throughput: 0: 43439.5. Samples: 52526220. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 18:25:23,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:25:23,430][46990] Updated weights for policy 0, policy_version 3200 (0.0023) +[2024-06-10 18:25:26,911][46990] Updated weights for policy 0, policy_version 3210 (0.0057) +[2024-06-10 18:25:28,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 52641792. Throughput: 0: 43682.3. Samples: 52790460. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 18:25:28,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:25:30,920][46990] Updated weights for policy 0, policy_version 3220 (0.0027) +[2024-06-10 18:25:33,240][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 52838400. Throughput: 0: 43477.9. Samples: 52918820. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 18:25:33,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:25:34,451][46990] Updated weights for policy 0, policy_version 3230 (0.0034) +[2024-06-10 18:25:38,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 53067776. Throughput: 0: 43492.9. Samples: 53179260. 
Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 18:25:38,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:25:38,781][46990] Updated weights for policy 0, policy_version 3240 (0.0039) +[2024-06-10 18:25:41,757][46990] Updated weights for policy 0, policy_version 3250 (0.0046) +[2024-06-10 18:25:43,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 53297152. Throughput: 0: 43550.7. Samples: 53438760. Policy #0 lag: (min: 1.0, avg: 11.9, max: 22.0) +[2024-06-10 18:25:43,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:25:45,885][46990] Updated weights for policy 0, policy_version 3260 (0.0036) +[2024-06-10 18:25:48,244][46753] Fps is (10 sec: 42579.4, 60 sec: 43687.5, 300 sec: 43708.5). Total num frames: 53493760. Throughput: 0: 43676.1. Samples: 53576140. Policy #0 lag: (min: 0.0, avg: 9.2, max: 20.0) +[2024-06-10 18:25:48,245][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:25:49,362][46990] Updated weights for policy 0, policy_version 3270 (0.0038) +[2024-06-10 18:25:53,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 53723136. Throughput: 0: 43540.3. Samples: 53833760. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 18:25:53,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:25:53,555][46990] Updated weights for policy 0, policy_version 3280 (0.0052) +[2024-06-10 18:25:56,811][46990] Updated weights for policy 0, policy_version 3290 (0.0029) +[2024-06-10 18:25:58,240][46753] Fps is (10 sec: 45895.1, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 53952512. Throughput: 0: 43579.5. Samples: 54096180. Policy #0 lag: (min: 0.0, avg: 8.6, max: 22.0) +[2024-06-10 18:25:58,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:26:00,867][46990] Updated weights for policy 0, policy_version 3300 (0.0034) +[2024-06-10 18:26:03,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43690.7, 300 sec: 43654.3). Total num frames: 54149120. Throughput: 0: 43466.3. Samples: 54222800. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 18:26:03,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:26:04,373][46990] Updated weights for policy 0, policy_version 3310 (0.0039) +[2024-06-10 18:26:08,240][46753] Fps is (10 sec: 40960.2, 60 sec: 43144.4, 300 sec: 43542.6). Total num frames: 54362112. Throughput: 0: 43578.7. Samples: 54487260. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:26:08,244][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:26:08,559][46990] Updated weights for policy 0, policy_version 3320 (0.0034) +[2024-06-10 18:26:12,052][46990] Updated weights for policy 0, policy_version 3330 (0.0047) +[2024-06-10 18:26:13,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 54607872. Throughput: 0: 43430.6. Samples: 54744840. Policy #0 lag: (min: 0.0, avg: 11.2, max: 24.0) +[2024-06-10 18:26:13,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:26:16,026][46990] Updated weights for policy 0, policy_version 3340 (0.0021) +[2024-06-10 18:26:18,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 54804480. Throughput: 0: 43651.1. Samples: 54883120. 
Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 18:26:18,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:26:19,478][46990] Updated weights for policy 0, policy_version 3350 (0.0034) +[2024-06-10 18:26:23,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 55033856. Throughput: 0: 43738.2. Samples: 55147480. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 18:26:23,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:26:23,248][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003359_55033856.pth... +[2024-06-10 18:26:23,292][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000002719_44548096.pth +[2024-06-10 18:26:23,489][46990] Updated weights for policy 0, policy_version 3360 (0.0033) +[2024-06-10 18:26:26,956][46990] Updated weights for policy 0, policy_version 3370 (0.0034) +[2024-06-10 18:26:28,240][46753] Fps is (10 sec: 45875.1, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 55263232. Throughput: 0: 43745.6. Samples: 55407320. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 18:26:28,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:26:31,491][46990] Updated weights for policy 0, policy_version 3380 (0.0040) +[2024-06-10 18:26:32,490][46970] Signal inference workers to stop experience collection... (800 times) +[2024-06-10 18:26:32,520][46990] InferenceWorker_p0-w0: stopping experience collection (800 times) +[2024-06-10 18:26:32,556][46970] Signal inference workers to resume experience collection... (800 times) +[2024-06-10 18:26:32,557][46990] InferenceWorker_p0-w0: resuming experience collection (800 times) +[2024-06-10 18:26:33,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 55459840. Throughput: 0: 43603.5. Samples: 55538100. Policy #0 lag: (min: 1.0, avg: 9.0, max: 21.0) +[2024-06-10 18:26:33,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:26:34,350][46990] Updated weights for policy 0, policy_version 3390 (0.0028) +[2024-06-10 18:26:38,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 55672832. Throughput: 0: 43765.0. Samples: 55803180. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 18:26:38,240][46753] Avg episode reward: [(0, '0.002')] +[2024-06-10 18:26:38,777][46990] Updated weights for policy 0, policy_version 3400 (0.0038) +[2024-06-10 18:26:41,877][46990] Updated weights for policy 0, policy_version 3410 (0.0041) +[2024-06-10 18:26:43,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 55934976. Throughput: 0: 43620.2. Samples: 56059080. Policy #0 lag: (min: 1.0, avg: 10.9, max: 26.0) +[2024-06-10 18:26:43,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:26:46,235][46990] Updated weights for policy 0, policy_version 3420 (0.0030) +[2024-06-10 18:26:48,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43967.0, 300 sec: 43764.7). Total num frames: 56131584. Throughput: 0: 44017.3. Samples: 56203580. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 18:26:48,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:26:49,418][46990] Updated weights for policy 0, policy_version 3430 (0.0045) +[2024-06-10 18:26:53,240][46753] Fps is (10 sec: 39320.2, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 56328192. Throughput: 0: 43955.8. Samples: 56465280. 
Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 18:26:53,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:26:53,995][46990] Updated weights for policy 0, policy_version 3440 (0.0042) +[2024-06-10 18:26:56,708][46990] Updated weights for policy 0, policy_version 3450 (0.0040) +[2024-06-10 18:26:58,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 56590336. Throughput: 0: 43832.4. Samples: 56717300. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 18:26:58,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:27:01,643][46990] Updated weights for policy 0, policy_version 3460 (0.0040) +[2024-06-10 18:27:03,239][46753] Fps is (10 sec: 45876.8, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 56786944. Throughput: 0: 43883.3. Samples: 56857860. Policy #0 lag: (min: 1.0, avg: 8.4, max: 22.0) +[2024-06-10 18:27:03,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:27:03,960][46990] Updated weights for policy 0, policy_version 3470 (0.0049) +[2024-06-10 18:27:08,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 56983552. Throughput: 0: 43772.4. Samples: 57117240. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 18:27:08,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:27:08,949][46990] Updated weights for policy 0, policy_version 3480 (0.0031) +[2024-06-10 18:27:11,675][46990] Updated weights for policy 0, policy_version 3490 (0.0025) +[2024-06-10 18:27:13,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 57245696. Throughput: 0: 43777.8. Samples: 57377320. Policy #0 lag: (min: 1.0, avg: 10.6, max: 23.0) +[2024-06-10 18:27:13,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:27:13,262][46970] Saving new best policy, reward=0.006! +[2024-06-10 18:27:16,474][46990] Updated weights for policy 0, policy_version 3500 (0.0030) +[2024-06-10 18:27:18,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43963.9, 300 sec: 43764.8). Total num frames: 57442304. Throughput: 0: 44106.2. Samples: 57522880. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 18:27:18,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:27:19,394][46990] Updated weights for policy 0, policy_version 3510 (0.0032) +[2024-06-10 18:27:23,240][46753] Fps is (10 sec: 39321.5, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 57638912. Throughput: 0: 43891.0. Samples: 57778280. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 18:27:23,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:27:23,936][46990] Updated weights for policy 0, policy_version 3520 (0.0026) +[2024-06-10 18:27:26,660][46990] Updated weights for policy 0, policy_version 3530 (0.0038) +[2024-06-10 18:27:28,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.8, 300 sec: 43654.3). Total num frames: 57884672. Throughput: 0: 43843.1. Samples: 58032020. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 18:27:28,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:27:31,385][46990] Updated weights for policy 0, policy_version 3540 (0.0043) +[2024-06-10 18:27:33,240][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 58097664. Throughput: 0: 43732.4. Samples: 58171540. 
Policy #0 lag: (min: 0.0, avg: 8.5, max: 23.0) +[2024-06-10 18:27:33,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:27:33,949][46990] Updated weights for policy 0, policy_version 3550 (0.0039) +[2024-06-10 18:27:38,244][46753] Fps is (10 sec: 40941.3, 60 sec: 43687.4, 300 sec: 43653.0). Total num frames: 58294272. Throughput: 0: 43648.8. Samples: 58429660. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:27:38,244][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:27:39,260][46990] Updated weights for policy 0, policy_version 3560 (0.0039) +[2024-06-10 18:27:40,345][46970] Signal inference workers to stop experience collection... (850 times) +[2024-06-10 18:27:40,345][46970] Signal inference workers to resume experience collection... (850 times) +[2024-06-10 18:27:40,393][46990] InferenceWorker_p0-w0: stopping experience collection (850 times) +[2024-06-10 18:27:40,393][46990] InferenceWorker_p0-w0: resuming experience collection (850 times) +[2024-06-10 18:27:41,480][46990] Updated weights for policy 0, policy_version 3570 (0.0031) +[2024-06-10 18:27:43,244][46753] Fps is (10 sec: 45855.3, 60 sec: 43687.4, 300 sec: 43764.1). Total num frames: 58556416. Throughput: 0: 43757.1. Samples: 58686560. Policy #0 lag: (min: 0.0, avg: 12.1, max: 23.0) +[2024-06-10 18:27:43,253][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:27:46,387][46990] Updated weights for policy 0, policy_version 3580 (0.0040) +[2024-06-10 18:27:48,239][46753] Fps is (10 sec: 45895.7, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 58753024. Throughput: 0: 43800.4. Samples: 58828880. Policy #0 lag: (min: 0.0, avg: 9.0, max: 22.0) +[2024-06-10 18:27:48,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:27:49,004][46990] Updated weights for policy 0, policy_version 3590 (0.0042) +[2024-06-10 18:27:53,239][46753] Fps is (10 sec: 39339.4, 60 sec: 43690.9, 300 sec: 43598.1). Total num frames: 58949632. Throughput: 0: 43857.9. Samples: 59090840. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 18:27:53,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:27:54,001][46990] Updated weights for policy 0, policy_version 3600 (0.0031) +[2024-06-10 18:27:56,676][46990] Updated weights for policy 0, policy_version 3610 (0.0051) +[2024-06-10 18:27:58,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 59195392. Throughput: 0: 43789.9. Samples: 59347860. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 18:27:58,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:28:01,294][46990] Updated weights for policy 0, policy_version 3620 (0.0042) +[2024-06-10 18:28:03,240][46753] Fps is (10 sec: 45874.1, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 59408384. Throughput: 0: 43522.4. Samples: 59481400. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 18:28:03,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:28:04,047][46990] Updated weights for policy 0, policy_version 3630 (0.0030) +[2024-06-10 18:28:08,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 59604992. Throughput: 0: 43642.4. Samples: 59742180. 
Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 18:28:08,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:28:08,792][46990] Updated weights for policy 0, policy_version 3640 (0.0034) +[2024-06-10 18:28:11,409][46990] Updated weights for policy 0, policy_version 3650 (0.0038) +[2024-06-10 18:28:13,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 59850752. Throughput: 0: 43881.2. Samples: 60006680. Policy #0 lag: (min: 0.0, avg: 12.3, max: 20.0) +[2024-06-10 18:28:13,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:28:16,245][46990] Updated weights for policy 0, policy_version 3660 (0.0037) +[2024-06-10 18:28:18,239][46753] Fps is (10 sec: 45874.5, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 60063744. Throughput: 0: 43699.1. Samples: 60138000. Policy #0 lag: (min: 0.0, avg: 8.6, max: 22.0) +[2024-06-10 18:28:18,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:28:19,113][46990] Updated weights for policy 0, policy_version 3670 (0.0044) +[2024-06-10 18:28:23,240][46753] Fps is (10 sec: 42596.3, 60 sec: 43963.4, 300 sec: 43653.6). Total num frames: 60276736. Throughput: 0: 43819.4. Samples: 60401360. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:28:23,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:28:23,259][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003679_60276736.pth... +[2024-06-10 18:28:23,319][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003040_49807360.pth +[2024-06-10 18:28:23,784][46990] Updated weights for policy 0, policy_version 3680 (0.0034) +[2024-06-10 18:28:26,625][46990] Updated weights for policy 0, policy_version 3690 (0.0036) +[2024-06-10 18:28:28,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43144.4, 300 sec: 43653.7). Total num frames: 60473344. Throughput: 0: 43971.4. Samples: 60665080. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 18:28:28,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:28:31,021][46990] Updated weights for policy 0, policy_version 3700 (0.0039) +[2024-06-10 18:28:33,239][46753] Fps is (10 sec: 44239.4, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 60719104. Throughput: 0: 43668.1. Samples: 60793940. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 18:28:33,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:28:33,355][46970] Saving new best policy, reward=0.009! +[2024-06-10 18:28:34,165][46990] Updated weights for policy 0, policy_version 3710 (0.0045) +[2024-06-10 18:28:38,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43694.0, 300 sec: 43598.1). Total num frames: 60915712. Throughput: 0: 43580.1. Samples: 61051940. Policy #0 lag: (min: 0.0, avg: 12.3, max: 22.0) +[2024-06-10 18:28:38,244][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:28:38,997][46990] Updated weights for policy 0, policy_version 3720 (0.0031) +[2024-06-10 18:28:41,662][46990] Updated weights for policy 0, policy_version 3730 (0.0028) +[2024-06-10 18:28:43,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43147.8, 300 sec: 43764.7). Total num frames: 61145088. Throughput: 0: 43754.2. Samples: 61316800. Policy #0 lag: (min: 0.0, avg: 8.5, max: 19.0) +[2024-06-10 18:28:43,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:28:46,171][46990] Updated weights for policy 0, policy_version 3740 (0.0038) +[2024-06-10 18:28:48,239][46753] Fps is (10 sec: 47512.9, 60 sec: 43963.7, 300 sec: 43820.3). 
Total num frames: 61390848. Throughput: 0: 43762.8. Samples: 61450720. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 18:28:48,240][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:28:49,100][46990] Updated weights for policy 0, policy_version 3750 (0.0030) +[2024-06-10 18:28:53,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 61571072. Throughput: 0: 43682.0. Samples: 61707880. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 18:28:53,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:28:54,021][46990] Updated weights for policy 0, policy_version 3760 (0.0040) +[2024-06-10 18:28:56,466][46990] Updated weights for policy 0, policy_version 3770 (0.0038) +[2024-06-10 18:28:58,240][46753] Fps is (10 sec: 39320.7, 60 sec: 43144.3, 300 sec: 43653.6). Total num frames: 61784064. Throughput: 0: 43625.6. Samples: 61969840. Policy #0 lag: (min: 1.0, avg: 12.3, max: 21.0) +[2024-06-10 18:28:58,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:29:01,213][46990] Updated weights for policy 0, policy_version 3780 (0.0029) +[2024-06-10 18:29:02,392][46970] Signal inference workers to stop experience collection... (900 times) +[2024-06-10 18:29:02,429][46990] InferenceWorker_p0-w0: stopping experience collection (900 times) +[2024-06-10 18:29:02,440][46970] Signal inference workers to resume experience collection... (900 times) +[2024-06-10 18:29:02,450][46990] InferenceWorker_p0-w0: resuming experience collection (900 times) +[2024-06-10 18:29:03,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 62029824. Throughput: 0: 43725.5. Samples: 62105640. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 18:29:03,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:29:04,174][46990] Updated weights for policy 0, policy_version 3790 (0.0044) +[2024-06-10 18:29:08,239][46753] Fps is (10 sec: 44237.9, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 62226432. Throughput: 0: 43506.7. Samples: 62359140. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:29:08,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:29:08,722][46990] Updated weights for policy 0, policy_version 3800 (0.0043) +[2024-06-10 18:29:11,922][46990] Updated weights for policy 0, policy_version 3810 (0.0035) +[2024-06-10 18:29:13,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43144.5, 300 sec: 43598.1). Total num frames: 62439424. Throughput: 0: 43459.1. Samples: 62620740. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 18:29:13,242][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:29:16,381][46990] Updated weights for policy 0, policy_version 3820 (0.0032) +[2024-06-10 18:29:18,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 62685184. Throughput: 0: 43546.6. Samples: 62753540. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 18:29:18,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:29:18,256][46970] Saving new best policy, reward=0.010! +[2024-06-10 18:29:19,490][46990] Updated weights for policy 0, policy_version 3830 (0.0029) +[2024-06-10 18:29:23,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43417.9, 300 sec: 43598.1). Total num frames: 62881792. Throughput: 0: 43654.9. Samples: 63016420. 
Policy #0 lag: (min: 0.0, avg: 11.4, max: 21.0) +[2024-06-10 18:29:23,240][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:29:23,993][46990] Updated weights for policy 0, policy_version 3840 (0.0030) +[2024-06-10 18:29:27,043][46990] Updated weights for policy 0, policy_version 3850 (0.0034) +[2024-06-10 18:29:28,244][46753] Fps is (10 sec: 40941.4, 60 sec: 43687.4, 300 sec: 43653.0). Total num frames: 63094784. Throughput: 0: 43580.1. Samples: 63278100. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 18:29:28,244][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:29:31,101][46990] Updated weights for policy 0, policy_version 3860 (0.0026) +[2024-06-10 18:29:33,244][46753] Fps is (10 sec: 45854.8, 60 sec: 43687.3, 300 sec: 43653.0). Total num frames: 63340544. Throughput: 0: 43478.3. Samples: 63407440. Policy #0 lag: (min: 0.0, avg: 12.1, max: 23.0) +[2024-06-10 18:29:33,244][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:29:34,436][46990] Updated weights for policy 0, policy_version 3870 (0.0046) +[2024-06-10 18:29:38,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 63537152. Throughput: 0: 43485.5. Samples: 63664720. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 18:29:38,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:29:38,625][46990] Updated weights for policy 0, policy_version 3880 (0.0041) +[2024-06-10 18:29:42,505][46990] Updated weights for policy 0, policy_version 3890 (0.0030) +[2024-06-10 18:29:43,240][46753] Fps is (10 sec: 40978.0, 60 sec: 43417.5, 300 sec: 43653.7). Total num frames: 63750144. Throughput: 0: 43549.9. Samples: 63929580. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:29:43,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:29:46,093][46990] Updated weights for policy 0, policy_version 3900 (0.0039) +[2024-06-10 18:29:48,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 64012288. Throughput: 0: 43453.4. Samples: 64061040. Policy #0 lag: (min: 2.0, avg: 11.3, max: 22.0) +[2024-06-10 18:29:48,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:29:50,167][46990] Updated weights for policy 0, policy_version 3910 (0.0038) +[2024-06-10 18:29:53,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.6, 300 sec: 43542.5). Total num frames: 64192512. Throughput: 0: 43621.1. Samples: 64322100. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:29:53,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:29:53,675][46990] Updated weights for policy 0, policy_version 3920 (0.0040) +[2024-06-10 18:29:57,501][46990] Updated weights for policy 0, policy_version 3930 (0.0040) +[2024-06-10 18:29:58,244][46753] Fps is (10 sec: 39303.4, 60 sec: 43687.5, 300 sec: 43653.0). Total num frames: 64405504. Throughput: 0: 43621.4. Samples: 64583900. Policy #0 lag: (min: 0.0, avg: 12.8, max: 23.0) +[2024-06-10 18:29:58,253][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:30:00,989][46990] Updated weights for policy 0, policy_version 3940 (0.0042) +[2024-06-10 18:30:03,240][46753] Fps is (10 sec: 47512.2, 60 sec: 43963.3, 300 sec: 43709.1). Total num frames: 64667648. Throughput: 0: 43487.9. Samples: 64710520. 
Policy #0 lag: (min: 1.0, avg: 9.7, max: 21.0) +[2024-06-10 18:30:03,241][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:30:05,178][46990] Updated weights for policy 0, policy_version 3950 (0.0025) +[2024-06-10 18:30:08,239][46753] Fps is (10 sec: 45896.3, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 64864256. Throughput: 0: 43661.9. Samples: 64981200. Policy #0 lag: (min: 0.0, avg: 10.4, max: 23.0) +[2024-06-10 18:30:08,244][46753] Avg episode reward: [(0, '0.003')] +[2024-06-10 18:30:08,609][46990] Updated weights for policy 0, policy_version 3960 (0.0035) +[2024-06-10 18:30:13,005][46990] Updated weights for policy 0, policy_version 3970 (0.0036) +[2024-06-10 18:30:13,239][46753] Fps is (10 sec: 37684.8, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 65044480. Throughput: 0: 43611.8. Samples: 65240440. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 18:30:13,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:30:16,087][46990] Updated weights for policy 0, policy_version 3980 (0.0035) +[2024-06-10 18:30:18,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 65306624. Throughput: 0: 43490.5. Samples: 65364320. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 18:30:18,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:30:20,473][46990] Updated weights for policy 0, policy_version 3990 (0.0043) +[2024-06-10 18:30:23,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 65503232. Throughput: 0: 43576.4. Samples: 65625660. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 18:30:23,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:30:23,270][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003999_65519616.pth... +[2024-06-10 18:30:23,322][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003359_55033856.pth +[2024-06-10 18:30:23,624][46990] Updated weights for policy 0, policy_version 4000 (0.0037) +[2024-06-10 18:30:27,895][46990] Updated weights for policy 0, policy_version 4010 (0.0037) +[2024-06-10 18:30:28,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43420.8, 300 sec: 43598.1). Total num frames: 65699840. Throughput: 0: 43428.1. Samples: 65883840. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 18:30:28,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:30:30,559][46970] Signal inference workers to stop experience collection... (950 times) +[2024-06-10 18:30:30,559][46970] Signal inference workers to resume experience collection... (950 times) +[2024-06-10 18:30:30,589][46990] InferenceWorker_p0-w0: stopping experience collection (950 times) +[2024-06-10 18:30:30,589][46990] InferenceWorker_p0-w0: resuming experience collection (950 times) +[2024-06-10 18:30:31,179][46990] Updated weights for policy 0, policy_version 4020 (0.0030) +[2024-06-10 18:30:33,244][46753] Fps is (10 sec: 44216.8, 60 sec: 43417.6, 300 sec: 43653.0). Total num frames: 65945600. Throughput: 0: 43296.1. Samples: 66009560. Policy #0 lag: (min: 1.0, avg: 8.2, max: 22.0) +[2024-06-10 18:30:33,245][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:30:35,662][46990] Updated weights for policy 0, policy_version 4030 (0.0022) +[2024-06-10 18:30:38,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 66158592. Throughput: 0: 43512.7. Samples: 66280160. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 18:30:38,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:30:38,820][46990] Updated weights for policy 0, policy_version 4040 (0.0027) +[2024-06-10 18:30:43,098][46990] Updated weights for policy 0, policy_version 4050 (0.0035) +[2024-06-10 18:30:43,239][46753] Fps is (10 sec: 40978.6, 60 sec: 43417.7, 300 sec: 43598.8). Total num frames: 66355200. Throughput: 0: 43573.3. Samples: 66544500. Policy #0 lag: (min: 0.0, avg: 12.3, max: 23.0) +[2024-06-10 18:30:43,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:30:46,201][46990] Updated weights for policy 0, policy_version 4060 (0.0035) +[2024-06-10 18:30:48,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 66633728. Throughput: 0: 43660.5. Samples: 66675220. Policy #0 lag: (min: 0.0, avg: 9.2, max: 23.0) +[2024-06-10 18:30:48,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:30:50,571][46990] Updated weights for policy 0, policy_version 4070 (0.0034) +[2024-06-10 18:30:53,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.9, 300 sec: 43598.1). Total num frames: 66813952. Throughput: 0: 43538.7. Samples: 66940440. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 18:30:53,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:30:53,407][46990] Updated weights for policy 0, policy_version 4080 (0.0046) +[2024-06-10 18:30:57,726][46990] Updated weights for policy 0, policy_version 4090 (0.0024) +[2024-06-10 18:30:58,240][46753] Fps is (10 sec: 37682.8, 60 sec: 43420.8, 300 sec: 43598.1). Total num frames: 67010560. Throughput: 0: 43532.0. Samples: 67199380. Policy #0 lag: (min: 0.0, avg: 11.9, max: 24.0) +[2024-06-10 18:30:58,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:31:01,141][46990] Updated weights for policy 0, policy_version 4100 (0.0022) +[2024-06-10 18:31:03,239][46753] Fps is (10 sec: 47513.1, 60 sec: 43691.0, 300 sec: 43820.3). Total num frames: 67289088. Throughput: 0: 43689.4. Samples: 67330340. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 18:31:03,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:31:05,319][46990] Updated weights for policy 0, policy_version 4110 (0.0041) +[2024-06-10 18:31:08,240][46753] Fps is (10 sec: 45875.2, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 67469312. Throughput: 0: 43851.9. Samples: 67599000. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:31:08,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:31:08,477][46990] Updated weights for policy 0, policy_version 4120 (0.0037) +[2024-06-10 18:31:12,813][46990] Updated weights for policy 0, policy_version 4130 (0.0036) +[2024-06-10 18:31:13,239][46753] Fps is (10 sec: 37683.5, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 67665920. Throughput: 0: 43901.8. Samples: 67859420. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 18:31:13,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:31:16,105][46990] Updated weights for policy 0, policy_version 4140 (0.0032) +[2024-06-10 18:31:18,239][46753] Fps is (10 sec: 47514.3, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 67944448. Throughput: 0: 44077.3. Samples: 67992840. 
Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 18:31:18,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:31:20,171][46990] Updated weights for policy 0, policy_version 4150 (0.0034) +[2024-06-10 18:31:23,240][46753] Fps is (10 sec: 45874.0, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 68124672. Throughput: 0: 43965.1. Samples: 68258600. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:31:23,249][46753] Avg episode reward: [(0, '0.004')] +[2024-06-10 18:31:23,378][46990] Updated weights for policy 0, policy_version 4160 (0.0026) +[2024-06-10 18:31:27,994][46990] Updated weights for policy 0, policy_version 4170 (0.0036) +[2024-06-10 18:31:28,239][46753] Fps is (10 sec: 37682.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 68321280. Throughput: 0: 43746.6. Samples: 68513100. Policy #0 lag: (min: 0.0, avg: 12.3, max: 23.0) +[2024-06-10 18:31:28,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:31:31,118][46990] Updated weights for policy 0, policy_version 4180 (0.0032) +[2024-06-10 18:31:33,239][46753] Fps is (10 sec: 45876.2, 60 sec: 43967.0, 300 sec: 43764.7). Total num frames: 68583424. Throughput: 0: 43630.7. Samples: 68638600. Policy #0 lag: (min: 1.0, avg: 8.4, max: 22.0) +[2024-06-10 18:31:33,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:31:35,579][46990] Updated weights for policy 0, policy_version 4190 (0.0027) +[2024-06-10 18:31:38,028][46970] Signal inference workers to stop experience collection... (1000 times) +[2024-06-10 18:31:38,028][46970] Signal inference workers to resume experience collection... (1000 times) +[2024-06-10 18:31:38,043][46990] InferenceWorker_p0-w0: stopping experience collection (1000 times) +[2024-06-10 18:31:38,048][46990] InferenceWorker_p0-w0: resuming experience collection (1000 times) +[2024-06-10 18:31:38,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 68780032. Throughput: 0: 43811.1. Samples: 68911940. Policy #0 lag: (min: 0.0, avg: 11.7, max: 22.0) +[2024-06-10 18:31:38,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:31:38,617][46990] Updated weights for policy 0, policy_version 4200 (0.0032) +[2024-06-10 18:31:42,701][46990] Updated weights for policy 0, policy_version 4210 (0.0028) +[2024-06-10 18:31:43,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 68993024. Throughput: 0: 43800.9. Samples: 69170420. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 18:31:43,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:31:46,162][46990] Updated weights for policy 0, policy_version 4220 (0.0028) +[2024-06-10 18:31:48,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43417.6, 300 sec: 43764.8). Total num frames: 69238784. Throughput: 0: 43900.5. Samples: 69305860. Policy #0 lag: (min: 0.0, avg: 8.5, max: 21.0) +[2024-06-10 18:31:48,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:31:49,889][46990] Updated weights for policy 0, policy_version 4230 (0.0039) +[2024-06-10 18:31:53,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 69435392. Throughput: 0: 43769.5. Samples: 69568620. 
Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:31:53,240][46753] Avg episode reward: [(0, '0.005')] +[2024-06-10 18:31:53,457][46990] Updated weights for policy 0, policy_version 4240 (0.0025) +[2024-06-10 18:31:57,533][46990] Updated weights for policy 0, policy_version 4250 (0.0039) +[2024-06-10 18:31:58,240][46753] Fps is (10 sec: 40957.7, 60 sec: 43963.4, 300 sec: 43598.0). Total num frames: 69648384. Throughput: 0: 43694.6. Samples: 69825700. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 18:31:58,241][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:31:58,241][46970] Saving new best policy, reward=0.012! +[2024-06-10 18:32:00,995][46990] Updated weights for policy 0, policy_version 4260 (0.0034) +[2024-06-10 18:32:03,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 69910528. Throughput: 0: 43660.0. Samples: 69957540. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 18:32:03,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:32:05,139][46990] Updated weights for policy 0, policy_version 4270 (0.0032) +[2024-06-10 18:32:08,239][46753] Fps is (10 sec: 42600.8, 60 sec: 43417.7, 300 sec: 43487.0). Total num frames: 70074368. Throughput: 0: 43754.9. Samples: 70227560. Policy #0 lag: (min: 0.0, avg: 7.7, max: 21.0) +[2024-06-10 18:32:08,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:32:08,555][46990] Updated weights for policy 0, policy_version 4280 (0.0028) +[2024-06-10 18:32:12,237][46990] Updated weights for policy 0, policy_version 4290 (0.0032) +[2024-06-10 18:32:13,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 70303744. Throughput: 0: 43833.9. Samples: 70485620. Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 18:32:13,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:32:16,009][46990] Updated weights for policy 0, policy_version 4300 (0.0045) +[2024-06-10 18:32:18,239][46753] Fps is (10 sec: 49152.2, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 70565888. Throughput: 0: 44047.6. Samples: 70620740. Policy #0 lag: (min: 0.0, avg: 12.7, max: 20.0) +[2024-06-10 18:32:18,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:32:19,918][46990] Updated weights for policy 0, policy_version 4310 (0.0031) +[2024-06-10 18:32:23,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 70746112. Throughput: 0: 43928.8. Samples: 70888740. Policy #0 lag: (min: 0.0, avg: 7.6, max: 21.0) +[2024-06-10 18:32:23,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:32:23,308][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000004319_70762496.pth... +[2024-06-10 18:32:23,378][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003679_60276736.pth +[2024-06-10 18:32:23,530][46990] Updated weights for policy 0, policy_version 4320 (0.0030) +[2024-06-10 18:32:27,624][46990] Updated weights for policy 0, policy_version 4330 (0.0031) +[2024-06-10 18:32:28,240][46753] Fps is (10 sec: 39320.6, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 70959104. Throughput: 0: 43859.9. Samples: 71144120. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 18:32:28,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:32:30,765][46990] Updated weights for policy 0, policy_version 4340 (0.0033) +[2024-06-10 18:32:33,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43963.8, 300 sec: 43820.9). 
Total num frames: 71221248. Throughput: 0: 43663.1. Samples: 71270700. Policy #0 lag: (min: 1.0, avg: 11.3, max: 19.0) +[2024-06-10 18:32:33,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:32:35,114][46990] Updated weights for policy 0, policy_version 4350 (0.0032) +[2024-06-10 18:32:38,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43690.6, 300 sec: 43543.2). Total num frames: 71401472. Throughput: 0: 43828.3. Samples: 71540900. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 18:32:38,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:32:38,539][46990] Updated weights for policy 0, policy_version 4360 (0.0037) +[2024-06-10 18:32:42,290][46990] Updated weights for policy 0, policy_version 4370 (0.0048) +[2024-06-10 18:32:43,240][46753] Fps is (10 sec: 39320.7, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 71614464. Throughput: 0: 43871.5. Samples: 71799900. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 18:32:43,243][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:32:45,962][46990] Updated weights for policy 0, policy_version 4380 (0.0038) +[2024-06-10 18:32:48,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 71876608. Throughput: 0: 43885.8. Samples: 71932400. Policy #0 lag: (min: 1.0, avg: 10.1, max: 20.0) +[2024-06-10 18:32:48,240][46753] Avg episode reward: [(0, '0.006')] +[2024-06-10 18:32:49,644][46990] Updated weights for policy 0, policy_version 4390 (0.0046) +[2024-06-10 18:32:53,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 72073216. Throughput: 0: 43865.7. Samples: 72201520. Policy #0 lag: (min: 0.0, avg: 8.1, max: 21.0) +[2024-06-10 18:32:53,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:32:53,385][46990] Updated weights for policy 0, policy_version 4400 (0.0040) +[2024-06-10 18:32:57,309][46990] Updated weights for policy 0, policy_version 4410 (0.0036) +[2024-06-10 18:32:58,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43964.2, 300 sec: 43653.7). Total num frames: 72286208. Throughput: 0: 43848.4. Samples: 72458800. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 18:32:58,240][46753] Avg episode reward: [(0, '0.007')] +[2024-06-10 18:33:00,703][46990] Updated weights for policy 0, policy_version 4420 (0.0029) +[2024-06-10 18:33:03,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 72531968. Throughput: 0: 43793.3. Samples: 72591440. Policy #0 lag: (min: 0.0, avg: 11.7, max: 22.0) +[2024-06-10 18:33:03,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:33:04,793][46990] Updated weights for policy 0, policy_version 4430 (0.0042) +[2024-06-10 18:33:07,635][46970] Signal inference workers to stop experience collection... (1050 times) +[2024-06-10 18:33:07,636][46970] Signal inference workers to resume experience collection... (1050 times) +[2024-06-10 18:33:07,651][46990] InferenceWorker_p0-w0: stopping experience collection (1050 times) +[2024-06-10 18:33:07,651][46990] InferenceWorker_p0-w0: resuming experience collection (1050 times) +[2024-06-10 18:33:08,228][46990] Updated weights for policy 0, policy_version 4440 (0.0028) +[2024-06-10 18:33:08,240][46753] Fps is (10 sec: 45874.4, 60 sec: 44509.7, 300 sec: 43709.2). Total num frames: 72744960. Throughput: 0: 43696.3. Samples: 72855080. 
Policy #0 lag: (min: 0.0, avg: 8.2, max: 21.0) +[2024-06-10 18:33:08,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:33:11,982][46990] Updated weights for policy 0, policy_version 4450 (0.0025) +[2024-06-10 18:33:13,240][46753] Fps is (10 sec: 40958.0, 60 sec: 43963.4, 300 sec: 43653.6). Total num frames: 72941568. Throughput: 0: 43948.6. Samples: 73121820. Policy #0 lag: (min: 0.0, avg: 11.9, max: 23.0) +[2024-06-10 18:33:13,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:33:15,737][46990] Updated weights for policy 0, policy_version 4460 (0.0038) +[2024-06-10 18:33:18,240][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.6, 300 sec: 43820.3). Total num frames: 73203712. Throughput: 0: 43923.0. Samples: 73247240. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 18:33:18,251][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:33:19,274][46990] Updated weights for policy 0, policy_version 4470 (0.0049) +[2024-06-10 18:33:23,239][46753] Fps is (10 sec: 44238.8, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 73383936. Throughput: 0: 43873.4. Samples: 73515200. Policy #0 lag: (min: 0.0, avg: 7.9, max: 22.0) +[2024-06-10 18:33:23,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:33:23,350][46990] Updated weights for policy 0, policy_version 4480 (0.0038) +[2024-06-10 18:33:27,163][46990] Updated weights for policy 0, policy_version 4490 (0.0026) +[2024-06-10 18:33:28,240][46753] Fps is (10 sec: 39321.7, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 73596928. Throughput: 0: 43737.0. Samples: 73768060. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:33:28,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:33:30,793][46990] Updated weights for policy 0, policy_version 4500 (0.0032) +[2024-06-10 18:33:33,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 73826304. Throughput: 0: 43673.3. Samples: 73897700. Policy #0 lag: (min: 1.0, avg: 10.8, max: 21.0) +[2024-06-10 18:33:33,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:33:34,756][46990] Updated weights for policy 0, policy_version 4510 (0.0030) +[2024-06-10 18:33:38,239][46753] Fps is (10 sec: 45875.3, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 74055680. Throughput: 0: 43736.5. Samples: 74169660. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:33:38,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:33:38,245][46990] Updated weights for policy 0, policy_version 4520 (0.0041) +[2024-06-10 18:33:41,883][46990] Updated weights for policy 0, policy_version 4530 (0.0037) +[2024-06-10 18:33:43,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 74252288. Throughput: 0: 43797.2. Samples: 74429680. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 18:33:43,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:33:43,342][46970] Saving new best policy, reward=0.013! +[2024-06-10 18:33:45,955][46990] Updated weights for policy 0, policy_version 4540 (0.0040) +[2024-06-10 18:33:48,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 74498048. Throughput: 0: 43695.4. Samples: 74557740. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 18:33:48,240][46753] Avg episode reward: [(0, '0.014')] +[2024-06-10 18:33:48,241][46970] Saving new best policy, reward=0.014! 
+[2024-06-10 18:33:49,390][46990] Updated weights for policy 0, policy_version 4550 (0.0031) +[2024-06-10 18:33:53,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 74678272. Throughput: 0: 43616.8. Samples: 74817840. Policy #0 lag: (min: 0.0, avg: 8.0, max: 22.0) +[2024-06-10 18:33:53,241][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:33:53,561][46990] Updated weights for policy 0, policy_version 4560 (0.0040) +[2024-06-10 18:33:56,902][46990] Updated weights for policy 0, policy_version 4570 (0.0041) +[2024-06-10 18:33:58,239][46753] Fps is (10 sec: 40960.9, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 74907648. Throughput: 0: 43433.8. Samples: 75076320. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:33:58,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:34:01,254][46990] Updated weights for policy 0, policy_version 4580 (0.0040) +[2024-06-10 18:34:03,239][46753] Fps is (10 sec: 45876.1, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 75137024. Throughput: 0: 43530.3. Samples: 75206100. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 18:34:03,240][46753] Avg episode reward: [(0, '0.011')] +[2024-06-10 18:34:04,496][46990] Updated weights for policy 0, policy_version 4590 (0.0030) +[2024-06-10 18:34:08,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43144.6, 300 sec: 43709.2). Total num frames: 75333632. Throughput: 0: 43484.4. Samples: 75472000. Policy #0 lag: (min: 0.0, avg: 9.2, max: 22.0) +[2024-06-10 18:34:08,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:34:08,470][46990] Updated weights for policy 0, policy_version 4600 (0.0028) +[2024-06-10 18:34:11,671][46990] Updated weights for policy 0, policy_version 4610 (0.0032) +[2024-06-10 18:34:13,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43691.0, 300 sec: 43653.6). Total num frames: 75563008. Throughput: 0: 43624.6. Samples: 75731160. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 18:34:13,240][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:34:15,966][46990] Updated weights for policy 0, policy_version 4620 (0.0040) +[2024-06-10 18:34:18,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43144.6, 300 sec: 43764.7). Total num frames: 75792384. Throughput: 0: 43531.2. Samples: 75856600. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 18:34:18,240][46753] Avg episode reward: [(0, '0.011')] +[2024-06-10 18:34:19,515][46990] Updated weights for policy 0, policy_version 4630 (0.0038) +[2024-06-10 18:34:23,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43417.5, 300 sec: 43709.8). Total num frames: 75988992. Throughput: 0: 43169.3. Samples: 76112280. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 18:34:23,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:34:23,249][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000004638_75988992.pth... +[2024-06-10 18:34:23,308][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000003999_65519616.pth +[2024-06-10 18:34:23,819][46990] Updated weights for policy 0, policy_version 4640 (0.0034) +[2024-06-10 18:34:26,975][46990] Updated weights for policy 0, policy_version 4650 (0.0042) +[2024-06-10 18:34:28,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43654.3). Total num frames: 76218368. Throughput: 0: 43186.3. Samples: 76373060. 
Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 18:34:28,240][46753] Avg episode reward: [(0, '0.011')] +[2024-06-10 18:34:31,566][46990] Updated weights for policy 0, policy_version 4660 (0.0040) +[2024-06-10 18:34:33,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 76447744. Throughput: 0: 43319.6. Samples: 76507120. Policy #0 lag: (min: 0.0, avg: 9.9, max: 20.0) +[2024-06-10 18:34:33,240][46753] Avg episode reward: [(0, '0.014')] +[2024-06-10 18:34:34,547][46990] Updated weights for policy 0, policy_version 4670 (0.0034) +[2024-06-10 18:34:38,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43144.5, 300 sec: 43709.2). Total num frames: 76644352. Throughput: 0: 43490.4. Samples: 76774900. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:34:38,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:34:38,803][46990] Updated weights for policy 0, policy_version 4680 (0.0046) +[2024-06-10 18:34:41,963][46990] Updated weights for policy 0, policy_version 4690 (0.0033) +[2024-06-10 18:34:43,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 76873728. Throughput: 0: 43522.2. Samples: 77034820. Policy #0 lag: (min: 0.0, avg: 11.4, max: 21.0) +[2024-06-10 18:34:43,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:34:46,042][46990] Updated weights for policy 0, policy_version 4700 (0.0025) +[2024-06-10 18:34:48,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43417.7, 300 sec: 43764.8). Total num frames: 77103104. Throughput: 0: 43541.9. Samples: 77165480. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 18:34:48,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:34:49,484][46990] Updated weights for policy 0, policy_version 4710 (0.0032) +[2024-06-10 18:34:53,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.9, 300 sec: 43709.9). Total num frames: 77299712. Throughput: 0: 43327.6. Samples: 77421740. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 18:34:53,240][46753] Avg episode reward: [(0, '0.016')] +[2024-06-10 18:34:53,250][46970] Saving new best policy, reward=0.016! +[2024-06-10 18:34:54,001][46990] Updated weights for policy 0, policy_version 4720 (0.0052) +[2024-06-10 18:34:54,945][46970] Signal inference workers to stop experience collection... (1100 times) +[2024-06-10 18:34:54,946][46970] Signal inference workers to resume experience collection... (1100 times) +[2024-06-10 18:34:54,955][46990] InferenceWorker_p0-w0: stopping experience collection (1100 times) +[2024-06-10 18:34:54,969][46990] InferenceWorker_p0-w0: resuming experience collection (1100 times) +[2024-06-10 18:34:57,210][46990] Updated weights for policy 0, policy_version 4730 (0.0039) +[2024-06-10 18:34:58,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.6, 300 sec: 43598.2). Total num frames: 77529088. Throughput: 0: 43252.3. Samples: 77677520. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 18:34:58,242][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:35:01,325][46990] Updated weights for policy 0, policy_version 4740 (0.0031) +[2024-06-10 18:35:03,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43417.5, 300 sec: 43653.6). Total num frames: 77742080. Throughput: 0: 43497.6. Samples: 77814000. 
Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:35:03,240][46753] Avg episode reward: [(0, '0.016')] +[2024-06-10 18:35:04,712][46990] Updated weights for policy 0, policy_version 4750 (0.0052) +[2024-06-10 18:35:08,240][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 77955072. Throughput: 0: 43700.4. Samples: 78078800. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:35:08,240][46753] Avg episode reward: [(0, '0.017')] +[2024-06-10 18:35:08,818][46990] Updated weights for policy 0, policy_version 4760 (0.0042) +[2024-06-10 18:35:12,006][46990] Updated weights for policy 0, policy_version 4770 (0.0041) +[2024-06-10 18:35:13,244][46753] Fps is (10 sec: 44217.5, 60 sec: 43687.3, 300 sec: 43653.0). Total num frames: 78184448. Throughput: 0: 43605.4. Samples: 78335500. Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 18:35:13,245][46753] Avg episode reward: [(0, '0.011')] +[2024-06-10 18:35:16,195][46990] Updated weights for policy 0, policy_version 4780 (0.0039) +[2024-06-10 18:35:18,240][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 78413824. Throughput: 0: 43762.2. Samples: 78476420. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:35:18,243][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:35:19,536][46990] Updated weights for policy 0, policy_version 4790 (0.0034) +[2024-06-10 18:35:23,240][46753] Fps is (10 sec: 42616.9, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 78610432. Throughput: 0: 43558.2. Samples: 78735020. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 18:35:23,249][46753] Avg episode reward: [(0, '0.009')] +[2024-06-10 18:35:23,978][46990] Updated weights for policy 0, policy_version 4800 (0.0039) +[2024-06-10 18:35:27,172][46990] Updated weights for policy 0, policy_version 4810 (0.0033) +[2024-06-10 18:35:28,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43963.7, 300 sec: 43765.4). Total num frames: 78856192. Throughput: 0: 43627.1. Samples: 78998040. Policy #0 lag: (min: 0.0, avg: 12.1, max: 21.0) +[2024-06-10 18:35:28,248][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:35:31,209][46990] Updated weights for policy 0, policy_version 4820 (0.0040) +[2024-06-10 18:35:33,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 79052800. Throughput: 0: 43638.2. Samples: 79129200. Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 18:35:33,240][46753] Avg episode reward: [(0, '0.016')] +[2024-06-10 18:35:34,656][46990] Updated weights for policy 0, policy_version 4830 (0.0039) +[2024-06-10 18:35:38,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 79265792. Throughput: 0: 43906.5. Samples: 79397540. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:35:38,249][46753] Avg episode reward: [(0, '0.014')] +[2024-06-10 18:35:38,693][46990] Updated weights for policy 0, policy_version 4840 (0.0041) +[2024-06-10 18:35:41,916][46990] Updated weights for policy 0, policy_version 4850 (0.0027) +[2024-06-10 18:35:43,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 79495168. Throughput: 0: 43963.2. Samples: 79655860. 
Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 18:35:43,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:35:46,078][46990] Updated weights for policy 0, policy_version 4860 (0.0029) +[2024-06-10 18:35:48,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 79724544. Throughput: 0: 43854.7. Samples: 79787460. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 18:35:48,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:35:49,324][46990] Updated weights for policy 0, policy_version 4870 (0.0027) +[2024-06-10 18:35:53,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.6, 300 sec: 43820.3). Total num frames: 79937536. Throughput: 0: 43838.8. Samples: 80051540. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 18:35:53,240][46753] Avg episode reward: [(0, '0.011')] +[2024-06-10 18:35:53,602][46990] Updated weights for policy 0, policy_version 4880 (0.0028) +[2024-06-10 18:35:56,931][46990] Updated weights for policy 0, policy_version 4890 (0.0031) +[2024-06-10 18:35:58,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 80150528. Throughput: 0: 43971.1. Samples: 80314000. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:35:58,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:36:00,932][46990] Updated weights for policy 0, policy_version 4900 (0.0032) +[2024-06-10 18:36:03,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 80363520. Throughput: 0: 43602.2. Samples: 80438520. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:36:03,240][46753] Avg episode reward: [(0, '0.008')] +[2024-06-10 18:36:04,632][46990] Updated weights for policy 0, policy_version 4910 (0.0046) +[2024-06-10 18:36:08,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43963.8, 300 sec: 43820.2). Total num frames: 80592896. Throughput: 0: 43736.5. Samples: 80703160. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 18:36:08,240][46753] Avg episode reward: [(0, '0.014')] +[2024-06-10 18:36:08,701][46990] Updated weights for policy 0, policy_version 4920 (0.0030) +[2024-06-10 18:36:11,921][46990] Updated weights for policy 0, policy_version 4930 (0.0032) +[2024-06-10 18:36:13,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43967.0, 300 sec: 43653.6). Total num frames: 80822272. Throughput: 0: 43786.6. Samples: 80968440. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:36:13,240][46753] Avg episode reward: [(0, '0.015')] +[2024-06-10 18:36:16,051][46990] Updated weights for policy 0, policy_version 4940 (0.0036) +[2024-06-10 18:36:18,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 81035264. Throughput: 0: 43798.5. Samples: 81100140. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:36:18,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:36:19,325][46990] Updated weights for policy 0, policy_version 4950 (0.0029) +[2024-06-10 18:36:23,244][46753] Fps is (10 sec: 40941.7, 60 sec: 43687.5, 300 sec: 43764.1). Total num frames: 81231872. Throughput: 0: 43549.9. Samples: 81357480. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 18:36:23,244][46753] Avg episode reward: [(0, '0.015')] +[2024-06-10 18:36:23,260][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000004958_81231872.pth... 
+[2024-06-10 18:36:23,328][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000004319_70762496.pth +[2024-06-10 18:36:23,801][46990] Updated weights for policy 0, policy_version 4960 (0.0033) +[2024-06-10 18:36:27,115][46990] Updated weights for policy 0, policy_version 4970 (0.0035) +[2024-06-10 18:36:28,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 81477632. Throughput: 0: 43700.4. Samples: 81622380. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 18:36:28,240][46753] Avg episode reward: [(0, '0.014')] +[2024-06-10 18:36:31,082][46990] Updated weights for policy 0, policy_version 4980 (0.0041) +[2024-06-10 18:36:33,078][46970] Signal inference workers to stop experience collection... (1150 times) +[2024-06-10 18:36:33,078][46970] Signal inference workers to resume experience collection... (1150 times) +[2024-06-10 18:36:33,126][46990] InferenceWorker_p0-w0: stopping experience collection (1150 times) +[2024-06-10 18:36:33,126][46990] InferenceWorker_p0-w0: resuming experience collection (1150 times) +[2024-06-10 18:36:33,239][46753] Fps is (10 sec: 45895.8, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 81690624. Throughput: 0: 43684.5. Samples: 81753260. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:36:33,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:36:34,436][46990] Updated weights for policy 0, policy_version 4990 (0.0052) +[2024-06-10 18:36:38,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 81903616. Throughput: 0: 43573.8. Samples: 82012360. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 18:36:38,240][46753] Avg episode reward: [(0, '0.014')] +[2024-06-10 18:36:38,323][46990] Updated weights for policy 0, policy_version 5000 (0.0037) +[2024-06-10 18:36:42,118][46990] Updated weights for policy 0, policy_version 5010 (0.0037) +[2024-06-10 18:36:43,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 82116608. Throughput: 0: 43522.6. Samples: 82272520. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:36:43,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:36:46,125][46990] Updated weights for policy 0, policy_version 5020 (0.0042) +[2024-06-10 18:36:48,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 82329600. Throughput: 0: 43761.8. Samples: 82407800. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:36:48,240][46753] Avg episode reward: [(0, '0.016')] +[2024-06-10 18:36:49,426][46990] Updated weights for policy 0, policy_version 5030 (0.0040) +[2024-06-10 18:36:53,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43417.6, 300 sec: 43709.3). Total num frames: 82542592. Throughput: 0: 43550.7. Samples: 82662940. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:36:53,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:36:53,721][46990] Updated weights for policy 0, policy_version 5040 (0.0043) +[2024-06-10 18:36:57,184][46990] Updated weights for policy 0, policy_version 5050 (0.0031) +[2024-06-10 18:36:58,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 82771968. Throughput: 0: 43579.1. Samples: 82929500. 
Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 18:36:58,240][46753] Avg episode reward: [(0, '0.015')] +[2024-06-10 18:37:01,030][46990] Updated weights for policy 0, policy_version 5060 (0.0037) +[2024-06-10 18:37:03,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 82984960. Throughput: 0: 43579.2. Samples: 83061200. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 18:37:03,240][46753] Avg episode reward: [(0, '0.016')] +[2024-06-10 18:37:04,587][46990] Updated weights for policy 0, policy_version 5070 (0.0025) +[2024-06-10 18:37:08,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 83197952. Throughput: 0: 43696.4. Samples: 83323620. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 18:37:08,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:37:08,580][46990] Updated weights for policy 0, policy_version 5080 (0.0044) +[2024-06-10 18:37:11,824][46990] Updated weights for policy 0, policy_version 5090 (0.0026) +[2024-06-10 18:37:13,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 83443712. Throughput: 0: 43671.1. Samples: 83587580. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 18:37:13,240][46753] Avg episode reward: [(0, '0.018')] +[2024-06-10 18:37:13,245][46970] Saving new best policy, reward=0.018! +[2024-06-10 18:37:16,099][46990] Updated weights for policy 0, policy_version 5100 (0.0042) +[2024-06-10 18:37:18,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 83656704. Throughput: 0: 43761.7. Samples: 83722540. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 18:37:18,240][46753] Avg episode reward: [(0, '0.018')] +[2024-06-10 18:37:19,290][46990] Updated weights for policy 0, policy_version 5110 (0.0042) +[2024-06-10 18:37:23,240][46753] Fps is (10 sec: 40959.9, 60 sec: 43693.9, 300 sec: 43709.2). Total num frames: 83853312. Throughput: 0: 43792.0. Samples: 83983000. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 18:37:23,243][46753] Avg episode reward: [(0, '0.016')] +[2024-06-10 18:37:23,528][46990] Updated weights for policy 0, policy_version 5120 (0.0027) +[2024-06-10 18:37:26,877][46990] Updated weights for policy 0, policy_version 5130 (0.0035) +[2024-06-10 18:37:28,244][46753] Fps is (10 sec: 42579.5, 60 sec: 43414.4, 300 sec: 43597.4). Total num frames: 84082688. Throughput: 0: 43898.8. Samples: 84248160. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 18:37:28,245][46753] Avg episode reward: [(0, '0.022')] +[2024-06-10 18:37:28,369][46970] Saving new best policy, reward=0.022! +[2024-06-10 18:37:30,862][46990] Updated weights for policy 0, policy_version 5140 (0.0042) +[2024-06-10 18:37:33,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 84312064. Throughput: 0: 43837.8. Samples: 84380500. Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 18:37:33,240][46753] Avg episode reward: [(0, '0.010')] +[2024-06-10 18:37:34,554][46990] Updated weights for policy 0, policy_version 5150 (0.0037) +[2024-06-10 18:37:38,239][46753] Fps is (10 sec: 42618.0, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 84508672. Throughput: 0: 43897.4. Samples: 84638320. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 18:37:38,240][46753] Avg episode reward: [(0, '0.015')] +[2024-06-10 18:37:38,573][46990] Updated weights for policy 0, policy_version 5160 (0.0029) +[2024-06-10 18:37:41,825][46990] Updated weights for policy 0, policy_version 5170 (0.0042) +[2024-06-10 18:37:43,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 84738048. Throughput: 0: 43759.1. Samples: 84898660. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 18:37:43,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:37:46,202][46990] Updated weights for policy 0, policy_version 5180 (0.0027) +[2024-06-10 18:37:48,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.8, 300 sec: 43653.7). Total num frames: 84951040. Throughput: 0: 43855.1. Samples: 85034680. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 18:37:48,240][46753] Avg episode reward: [(0, '0.023')] +[2024-06-10 18:37:48,297][46970] Saving new best policy, reward=0.023! +[2024-06-10 18:37:49,456][46990] Updated weights for policy 0, policy_version 5190 (0.0054) +[2024-06-10 18:37:53,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 85164032. Throughput: 0: 43695.5. Samples: 85289920. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 18:37:53,240][46753] Avg episode reward: [(0, '0.020')] +[2024-06-10 18:37:53,745][46990] Updated weights for policy 0, policy_version 5200 (0.0040) +[2024-06-10 18:37:56,044][46970] Signal inference workers to stop experience collection... (1200 times) +[2024-06-10 18:37:56,089][46990] InferenceWorker_p0-w0: stopping experience collection (1200 times) +[2024-06-10 18:37:56,096][46970] Signal inference workers to resume experience collection... (1200 times) +[2024-06-10 18:37:56,099][46990] InferenceWorker_p0-w0: resuming experience collection (1200 times) +[2024-06-10 18:37:57,081][46990] Updated weights for policy 0, policy_version 5210 (0.0040) +[2024-06-10 18:37:58,240][46753] Fps is (10 sec: 42597.3, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 85377024. Throughput: 0: 43597.2. Samples: 85549460. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 18:37:58,240][46753] Avg episode reward: [(0, '0.016')] +[2024-06-10 18:38:00,967][46990] Updated weights for policy 0, policy_version 5220 (0.0028) +[2024-06-10 18:38:03,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 85606400. Throughput: 0: 43639.8. Samples: 85686320. Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 18:38:03,240][46753] Avg episode reward: [(0, '0.019')] +[2024-06-10 18:38:04,419][46990] Updated weights for policy 0, policy_version 5230 (0.0025) +[2024-06-10 18:38:08,239][46753] Fps is (10 sec: 44238.1, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 85819392. Throughput: 0: 43647.7. Samples: 85947140. Policy #0 lag: (min: 0.0, avg: 11.3, max: 20.0) +[2024-06-10 18:38:08,240][46753] Avg episode reward: [(0, '0.019')] +[2024-06-10 18:38:08,629][46990] Updated weights for policy 0, policy_version 5240 (0.0034) +[2024-06-10 18:38:11,746][46990] Updated weights for policy 0, policy_version 5250 (0.0037) +[2024-06-10 18:38:13,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43417.7, 300 sec: 43542.6). Total num frames: 86048768. Throughput: 0: 43541.8. Samples: 86207340. 
Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 18:38:13,240][46753] Avg episode reward: [(0, '0.012')] +[2024-06-10 18:38:15,996][46990] Updated weights for policy 0, policy_version 5260 (0.0037) +[2024-06-10 18:38:18,244][46753] Fps is (10 sec: 44216.6, 60 sec: 43414.4, 300 sec: 43653.0). Total num frames: 86261760. Throughput: 0: 43663.7. Samples: 86345560. Policy #0 lag: (min: 0.0, avg: 8.2, max: 21.0) +[2024-06-10 18:38:18,245][46753] Avg episode reward: [(0, '0.024')] +[2024-06-10 18:38:18,245][46970] Saving new best policy, reward=0.024! +[2024-06-10 18:38:19,246][46990] Updated weights for policy 0, policy_version 5270 (0.0046) +[2024-06-10 18:38:23,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 86474752. Throughput: 0: 43730.1. Samples: 86606180. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 18:38:23,240][46753] Avg episode reward: [(0, '0.019')] +[2024-06-10 18:38:23,252][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000005278_86474752.pth... +[2024-06-10 18:38:23,300][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000004638_75988992.pth +[2024-06-10 18:38:23,729][46990] Updated weights for policy 0, policy_version 5280 (0.0050) +[2024-06-10 18:38:26,993][46990] Updated weights for policy 0, policy_version 5290 (0.0028) +[2024-06-10 18:38:28,240][46753] Fps is (10 sec: 44256.2, 60 sec: 43693.9, 300 sec: 43653.6). Total num frames: 86704128. Throughput: 0: 43686.2. Samples: 86864540. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 18:38:28,240][46753] Avg episode reward: [(0, '0.022')] +[2024-06-10 18:38:30,958][46990] Updated weights for policy 0, policy_version 5300 (0.0027) +[2024-06-10 18:38:33,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 86917120. Throughput: 0: 43676.2. Samples: 87000120. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 18:38:33,240][46753] Avg episode reward: [(0, '0.020')] +[2024-06-10 18:38:34,226][46990] Updated weights for policy 0, policy_version 5310 (0.0048) +[2024-06-10 18:38:38,244][46753] Fps is (10 sec: 42579.6, 60 sec: 43687.3, 300 sec: 43653.0). Total num frames: 87130112. Throughput: 0: 43777.9. Samples: 87260120. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 18:38:38,244][46753] Avg episode reward: [(0, '0.018')] +[2024-06-10 18:38:39,004][46990] Updated weights for policy 0, policy_version 5320 (0.0037) +[2024-06-10 18:38:41,873][46990] Updated weights for policy 0, policy_version 5330 (0.0034) +[2024-06-10 18:38:43,240][46753] Fps is (10 sec: 45875.6, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 87375872. Throughput: 0: 43734.8. Samples: 87517520. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 18:38:43,240][46753] Avg episode reward: [(0, '0.023')] +[2024-06-10 18:38:46,457][46990] Updated weights for policy 0, policy_version 5340 (0.0031) +[2024-06-10 18:38:48,239][46753] Fps is (10 sec: 44256.7, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 87572480. Throughput: 0: 43812.3. Samples: 87657880. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 18:38:48,240][46753] Avg episode reward: [(0, '0.019')] +[2024-06-10 18:38:49,425][46990] Updated weights for policy 0, policy_version 5350 (0.0037) +[2024-06-10 18:38:53,240][46753] Fps is (10 sec: 39321.2, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 87769088. Throughput: 0: 43823.3. Samples: 87919200. 
Policy #0 lag: (min: 0.0, avg: 7.9, max: 20.0) +[2024-06-10 18:38:53,249][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:38:53,724][46990] Updated weights for policy 0, policy_version 5360 (0.0037) +[2024-06-10 18:38:56,819][46990] Updated weights for policy 0, policy_version 5370 (0.0033) +[2024-06-10 18:38:58,239][46753] Fps is (10 sec: 45875.0, 60 sec: 44236.9, 300 sec: 43709.2). Total num frames: 88031232. Throughput: 0: 43685.2. Samples: 88173180. Policy #0 lag: (min: 0.0, avg: 12.6, max: 22.0) +[2024-06-10 18:38:58,240][46753] Avg episode reward: [(0, '0.017')] +[2024-06-10 18:39:01,143][46990] Updated weights for policy 0, policy_version 5380 (0.0037) +[2024-06-10 18:39:03,239][46753] Fps is (10 sec: 47514.6, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 88244224. Throughput: 0: 43765.7. Samples: 88314820. Policy #0 lag: (min: 1.0, avg: 9.9, max: 23.0) +[2024-06-10 18:39:03,240][46753] Avg episode reward: [(0, '0.022')] +[2024-06-10 18:39:04,153][46990] Updated weights for policy 0, policy_version 5390 (0.0035) +[2024-06-10 18:39:08,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 88424448. Throughput: 0: 43732.5. Samples: 88574140. Policy #0 lag: (min: 1.0, avg: 9.9, max: 23.0) +[2024-06-10 18:39:08,240][46753] Avg episode reward: [(0, '0.014')] +[2024-06-10 18:39:08,906][46990] Updated weights for policy 0, policy_version 5400 (0.0034) +[2024-06-10 18:39:11,686][46990] Updated weights for policy 0, policy_version 5410 (0.0031) +[2024-06-10 18:39:13,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 88686592. Throughput: 0: 43702.3. Samples: 88831140. Policy #0 lag: (min: 0.0, avg: 8.5, max: 21.0) +[2024-06-10 18:39:13,240][46753] Avg episode reward: [(0, '0.020')] +[2024-06-10 18:39:16,705][46990] Updated weights for policy 0, policy_version 5420 (0.0036) +[2024-06-10 18:39:17,600][46970] Signal inference workers to stop experience collection... (1250 times) +[2024-06-10 18:39:17,600][46970] Signal inference workers to resume experience collection... (1250 times) +[2024-06-10 18:39:17,614][46990] InferenceWorker_p0-w0: stopping experience collection (1250 times) +[2024-06-10 18:39:17,643][46990] InferenceWorker_p0-w0: resuming experience collection (1250 times) +[2024-06-10 18:39:18,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43966.9, 300 sec: 43764.7). Total num frames: 88899584. Throughput: 0: 43827.2. Samples: 88972340. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 18:39:18,240][46753] Avg episode reward: [(0, '0.019')] +[2024-06-10 18:39:19,213][46990] Updated weights for policy 0, policy_version 5430 (0.0038) +[2024-06-10 18:39:23,239][46753] Fps is (10 sec: 39321.3, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 89079808. Throughput: 0: 43885.2. Samples: 89234760. Policy #0 lag: (min: 0.0, avg: 12.5, max: 22.0) +[2024-06-10 18:39:23,240][46753] Avg episode reward: [(0, '0.021')] +[2024-06-10 18:39:23,951][46990] Updated weights for policy 0, policy_version 5440 (0.0037) +[2024-06-10 18:39:26,676][46990] Updated weights for policy 0, policy_version 5450 (0.0024) +[2024-06-10 18:39:28,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 89341952. Throughput: 0: 43631.6. Samples: 89480940. 
Policy #0 lag: (min: 0.0, avg: 8.5, max: 22.0) +[2024-06-10 18:39:28,240][46753] Avg episode reward: [(0, '0.024')] +[2024-06-10 18:39:31,377][46990] Updated weights for policy 0, policy_version 5460 (0.0029) +[2024-06-10 18:39:33,239][46753] Fps is (10 sec: 47513.5, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 89554944. Throughput: 0: 43719.5. Samples: 89625260. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 18:39:33,240][46753] Avg episode reward: [(0, '0.026')] +[2024-06-10 18:39:33,252][46970] Saving new best policy, reward=0.026! +[2024-06-10 18:39:34,243][46990] Updated weights for policy 0, policy_version 5470 (0.0038) +[2024-06-10 18:39:38,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43420.8, 300 sec: 43598.1). Total num frames: 89735168. Throughput: 0: 43711.3. Samples: 89886200. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 18:39:38,240][46753] Avg episode reward: [(0, '0.013')] +[2024-06-10 18:39:38,867][46990] Updated weights for policy 0, policy_version 5480 (0.0032) +[2024-06-10 18:39:41,471][46990] Updated weights for policy 0, policy_version 5490 (0.0032) +[2024-06-10 18:39:43,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 89997312. Throughput: 0: 43741.3. Samples: 90141540. Policy #0 lag: (min: 0.0, avg: 12.6, max: 21.0) +[2024-06-10 18:39:43,240][46753] Avg episode reward: [(0, '0.024')] +[2024-06-10 18:39:46,640][46990] Updated weights for policy 0, policy_version 5500 (0.0038) +[2024-06-10 18:39:48,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 90210304. Throughput: 0: 43686.6. Samples: 90280720. Policy #0 lag: (min: 1.0, avg: 10.8, max: 22.0) +[2024-06-10 18:39:48,240][46753] Avg episode reward: [(0, '0.017')] +[2024-06-10 18:39:49,032][46990] Updated weights for policy 0, policy_version 5510 (0.0037) +[2024-06-10 18:39:53,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 90390528. Throughput: 0: 43588.0. Samples: 90535600. Policy #0 lag: (min: 1.0, avg: 10.8, max: 22.0) +[2024-06-10 18:39:53,240][46753] Avg episode reward: [(0, '0.024')] +[2024-06-10 18:39:54,008][46990] Updated weights for policy 0, policy_version 5520 (0.0038) +[2024-06-10 18:39:56,805][46990] Updated weights for policy 0, policy_version 5530 (0.0029) +[2024-06-10 18:39:58,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 90652672. Throughput: 0: 43532.8. Samples: 90790120. Policy #0 lag: (min: 0.0, avg: 8.6, max: 22.0) +[2024-06-10 18:39:58,240][46753] Avg episode reward: [(0, '0.018')] +[2024-06-10 18:40:01,283][46990] Updated weights for policy 0, policy_version 5540 (0.0038) +[2024-06-10 18:40:03,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.7, 300 sec: 43764.8). Total num frames: 90865664. Throughput: 0: 43561.9. Samples: 90932620. Policy #0 lag: (min: 0.0, avg: 11.7, max: 22.0) +[2024-06-10 18:40:03,240][46753] Avg episode reward: [(0, '0.027')] +[2024-06-10 18:40:03,253][46970] Saving new best policy, reward=0.027! +[2024-06-10 18:40:04,439][46990] Updated weights for policy 0, policy_version 5550 (0.0032) +[2024-06-10 18:40:08,244][46753] Fps is (10 sec: 40941.8, 60 sec: 43960.4, 300 sec: 43653.6). Total num frames: 91062272. Throughput: 0: 43524.1. Samples: 91193540. 
Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 18:40:08,245][46753] Avg episode reward: [(0, '0.019')] +[2024-06-10 18:40:08,936][46990] Updated weights for policy 0, policy_version 5560 (0.0047) +[2024-06-10 18:40:11,669][46990] Updated weights for policy 0, policy_version 5570 (0.0034) +[2024-06-10 18:40:13,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 91308032. Throughput: 0: 43769.7. Samples: 91450580. Policy #0 lag: (min: 0.0, avg: 8.9, max: 23.0) +[2024-06-10 18:40:13,240][46753] Avg episode reward: [(0, '0.018')] +[2024-06-10 18:40:16,598][46990] Updated weights for policy 0, policy_version 5580 (0.0033) +[2024-06-10 18:40:18,239][46753] Fps is (10 sec: 44256.3, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 91504640. Throughput: 0: 43616.9. Samples: 91588020. Policy #0 lag: (min: 0.0, avg: 7.9, max: 20.0) +[2024-06-10 18:40:18,240][46753] Avg episode reward: [(0, '0.021')] +[2024-06-10 18:40:19,015][46970] Signal inference workers to stop experience collection... (1300 times) +[2024-06-10 18:40:19,016][46970] Signal inference workers to resume experience collection... (1300 times) +[2024-06-10 18:40:19,060][46990] InferenceWorker_p0-w0: stopping experience collection (1300 times) +[2024-06-10 18:40:19,060][46990] InferenceWorker_p0-w0: resuming experience collection (1300 times) +[2024-06-10 18:40:19,150][46990] Updated weights for policy 0, policy_version 5590 (0.0042) +[2024-06-10 18:40:23,240][46753] Fps is (10 sec: 39321.4, 60 sec: 43690.6, 300 sec: 43542.5). Total num frames: 91701248. Throughput: 0: 43533.2. Samples: 91845200. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 18:40:23,240][46753] Avg episode reward: [(0, '0.023')] +[2024-06-10 18:40:23,251][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000005598_91717632.pth... +[2024-06-10 18:40:23,299][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000004958_81231872.pth +[2024-06-10 18:40:24,103][46990] Updated weights for policy 0, policy_version 5600 (0.0033) +[2024-06-10 18:40:26,888][46990] Updated weights for policy 0, policy_version 5610 (0.0032) +[2024-06-10 18:40:28,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 91963392. Throughput: 0: 43561.8. Samples: 92101820. Policy #0 lag: (min: 2.0, avg: 10.5, max: 24.0) +[2024-06-10 18:40:28,240][46753] Avg episode reward: [(0, '0.022')] +[2024-06-10 18:40:31,414][46990] Updated weights for policy 0, policy_version 5620 (0.0034) +[2024-06-10 18:40:33,239][46753] Fps is (10 sec: 47514.5, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 92176384. Throughput: 0: 43564.1. Samples: 92241100. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 18:40:33,240][46753] Avg episode reward: [(0, '0.023')] +[2024-06-10 18:40:34,258][46990] Updated weights for policy 0, policy_version 5630 (0.0031) +[2024-06-10 18:40:38,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 92356608. Throughput: 0: 43758.7. Samples: 92504740. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 18:40:38,240][46753] Avg episode reward: [(0, '0.022')] +[2024-06-10 18:40:39,091][46990] Updated weights for policy 0, policy_version 5640 (0.0046) +[2024-06-10 18:40:41,659][46990] Updated weights for policy 0, policy_version 5650 (0.0033) +[2024-06-10 18:40:43,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 92618752. 
Throughput: 0: 43747.0. Samples: 92758740. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 18:40:43,240][46753] Avg episode reward: [(0, '0.024')] +[2024-06-10 18:40:46,413][46990] Updated weights for policy 0, policy_version 5660 (0.0032) +[2024-06-10 18:40:48,240][46753] Fps is (10 sec: 45874.4, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 92815360. Throughput: 0: 43551.0. Samples: 92892420. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:40:48,240][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:40:49,391][46990] Updated weights for policy 0, policy_version 5670 (0.0026) +[2024-06-10 18:40:53,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 93011968. Throughput: 0: 43564.7. Samples: 93153760. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:40:53,249][46753] Avg episode reward: [(0, '0.030')] +[2024-06-10 18:40:53,372][46970] Saving new best policy, reward=0.030! +[2024-06-10 18:40:54,077][46990] Updated weights for policy 0, policy_version 5680 (0.0032) +[2024-06-10 18:40:56,908][46990] Updated weights for policy 0, policy_version 5690 (0.0041) +[2024-06-10 18:40:58,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 93274112. Throughput: 0: 43608.9. Samples: 93412980. Policy #0 lag: (min: 1.0, avg: 9.7, max: 22.0) +[2024-06-10 18:40:58,240][46753] Avg episode reward: [(0, '0.022')] +[2024-06-10 18:41:01,310][46990] Updated weights for policy 0, policy_version 5700 (0.0026) +[2024-06-10 18:41:03,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 93470720. Throughput: 0: 43615.6. Samples: 93550720. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 18:41:03,240][46753] Avg episode reward: [(0, '0.023')] +[2024-06-10 18:41:04,145][46990] Updated weights for policy 0, policy_version 5710 (0.0035) +[2024-06-10 18:41:08,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43693.9, 300 sec: 43598.1). Total num frames: 93683712. Throughput: 0: 43764.1. Samples: 93814580. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 18:41:08,240][46753] Avg episode reward: [(0, '0.023')] +[2024-06-10 18:41:08,875][46990] Updated weights for policy 0, policy_version 5720 (0.0042) +[2024-06-10 18:41:11,685][46990] Updated weights for policy 0, policy_version 5730 (0.0027) +[2024-06-10 18:41:13,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 93913088. Throughput: 0: 43725.8. Samples: 94069480. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 18:41:13,240][46753] Avg episode reward: [(0, '0.023')] +[2024-06-10 18:41:16,273][46990] Updated weights for policy 0, policy_version 5740 (0.0035) +[2024-06-10 18:41:18,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43690.8, 300 sec: 43709.9). Total num frames: 94126080. Throughput: 0: 43599.6. Samples: 94203080. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 18:41:18,240][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:41:19,398][46990] Updated weights for policy 0, policy_version 5750 (0.0040) +[2024-06-10 18:41:23,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 94339072. Throughput: 0: 43658.1. Samples: 94469360. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 18:41:23,240][46753] Avg episode reward: [(0, '0.024')] +[2024-06-10 18:41:23,551][46990] Updated weights for policy 0, policy_version 5760 (0.0037) +[2024-06-10 18:41:26,856][46990] Updated weights for policy 0, policy_version 5770 (0.0036) +[2024-06-10 18:41:28,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 94584832. Throughput: 0: 43719.7. Samples: 94726120. Policy #0 lag: (min: 0.0, avg: 10.9, max: 20.0) +[2024-06-10 18:41:28,240][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:41:31,113][46990] Updated weights for policy 0, policy_version 5780 (0.0042) +[2024-06-10 18:41:33,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 94797824. Throughput: 0: 43889.0. Samples: 94867420. Policy #0 lag: (min: 0.0, avg: 9.8, max: 19.0) +[2024-06-10 18:41:33,244][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:41:34,160][46990] Updated weights for policy 0, policy_version 5790 (0.0050) +[2024-06-10 18:41:38,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 94994432. Throughput: 0: 43859.1. Samples: 95127420. Policy #0 lag: (min: 0.0, avg: 9.8, max: 19.0) +[2024-06-10 18:41:38,240][46753] Avg episode reward: [(0, '0.031')] +[2024-06-10 18:41:38,376][46970] Saving new best policy, reward=0.031! +[2024-06-10 18:41:38,378][46990] Updated weights for policy 0, policy_version 5800 (0.0030) +[2024-06-10 18:41:41,759][46990] Updated weights for policy 0, policy_version 5810 (0.0031) +[2024-06-10 18:41:43,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 95240192. Throughput: 0: 43923.1. Samples: 95389520. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:41:43,240][46753] Avg episode reward: [(0, '0.028')] +[2024-06-10 18:41:45,249][46970] Signal inference workers to stop experience collection... (1350 times) +[2024-06-10 18:41:45,250][46970] Signal inference workers to resume experience collection... (1350 times) +[2024-06-10 18:41:45,289][46990] InferenceWorker_p0-w0: stopping experience collection (1350 times) +[2024-06-10 18:41:45,289][46990] InferenceWorker_p0-w0: resuming experience collection (1350 times) +[2024-06-10 18:41:46,012][46990] Updated weights for policy 0, policy_version 5820 (0.0029) +[2024-06-10 18:41:48,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 95453184. Throughput: 0: 43765.7. Samples: 95520180. Policy #0 lag: (min: 0.0, avg: 8.0, max: 20.0) +[2024-06-10 18:41:48,240][46753] Avg episode reward: [(0, '0.024')] +[2024-06-10 18:41:49,210][46990] Updated weights for policy 0, policy_version 5830 (0.0031) +[2024-06-10 18:41:53,239][46753] Fps is (10 sec: 44237.2, 60 sec: 44509.9, 300 sec: 43764.7). Total num frames: 95682560. Throughput: 0: 43741.4. Samples: 95782940. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:41:53,240][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:41:53,251][46990] Updated weights for policy 0, policy_version 5840 (0.0038) +[2024-06-10 18:41:56,879][46990] Updated weights for policy 0, policy_version 5850 (0.0033) +[2024-06-10 18:41:58,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 95895552. Throughput: 0: 43908.9. Samples: 96045380. 
Policy #0 lag: (min: 1.0, avg: 11.4, max: 21.0) +[2024-06-10 18:41:58,240][46753] Avg episode reward: [(0, '0.030')] +[2024-06-10 18:42:00,923][46990] Updated weights for policy 0, policy_version 5860 (0.0031) +[2024-06-10 18:42:03,244][46753] Fps is (10 sec: 44217.0, 60 sec: 44233.5, 300 sec: 43819.6). Total num frames: 96124928. Throughput: 0: 43883.1. Samples: 96178020. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 18:42:03,244][46753] Avg episode reward: [(0, '0.029')] +[2024-06-10 18:42:04,142][46990] Updated weights for policy 0, policy_version 5870 (0.0029) +[2024-06-10 18:42:08,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 96305152. Throughput: 0: 43716.5. Samples: 96436600. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 18:42:08,240][46753] Avg episode reward: [(0, '0.028')] +[2024-06-10 18:42:08,452][46990] Updated weights for policy 0, policy_version 5880 (0.0041) +[2024-06-10 18:42:11,547][46990] Updated weights for policy 0, policy_version 5890 (0.0037) +[2024-06-10 18:42:13,239][46753] Fps is (10 sec: 42617.3, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 96550912. Throughput: 0: 43811.5. Samples: 96697640. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 18:42:13,240][46753] Avg episode reward: [(0, '0.030')] +[2024-06-10 18:42:15,963][46990] Updated weights for policy 0, policy_version 5900 (0.0033) +[2024-06-10 18:42:18,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 96763904. Throughput: 0: 43522.7. Samples: 96825940. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 18:42:18,240][46753] Avg episode reward: [(0, '0.030')] +[2024-06-10 18:42:19,042][46990] Updated weights for policy 0, policy_version 5910 (0.0036) +[2024-06-10 18:42:23,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43963.8, 300 sec: 43709.9). Total num frames: 96976896. Throughput: 0: 43587.7. Samples: 97088860. Policy #0 lag: (min: 0.0, avg: 11.5, max: 20.0) +[2024-06-10 18:42:23,240][46753] Avg episode reward: [(0, '0.036')] +[2024-06-10 18:42:23,254][46990] Updated weights for policy 0, policy_version 5920 (0.0030) +[2024-06-10 18:42:23,263][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000005920_96993280.pth... +[2024-06-10 18:42:23,319][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000005278_86474752.pth +[2024-06-10 18:42:23,323][46970] Saving new best policy, reward=0.036! +[2024-06-10 18:42:26,893][46990] Updated weights for policy 0, policy_version 5930 (0.0027) +[2024-06-10 18:42:28,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 97189888. Throughput: 0: 43608.6. Samples: 97351900. Policy #0 lag: (min: 1.0, avg: 9.4, max: 20.0) +[2024-06-10 18:42:28,240][46753] Avg episode reward: [(0, '0.034')] +[2024-06-10 18:42:31,138][46990] Updated weights for policy 0, policy_version 5940 (0.0037) +[2024-06-10 18:42:33,239][46753] Fps is (10 sec: 44236.1, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 97419264. Throughput: 0: 43552.4. Samples: 97480040. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 18:42:33,240][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:42:34,299][46990] Updated weights for policy 0, policy_version 5950 (0.0031) +[2024-06-10 18:42:38,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 97615872. Throughput: 0: 43625.0. Samples: 97746060. 
Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 18:42:38,240][46753] Avg episode reward: [(0, '0.036')] +[2024-06-10 18:42:38,501][46990] Updated weights for policy 0, policy_version 5960 (0.0028) +[2024-06-10 18:42:41,639][46990] Updated weights for policy 0, policy_version 5970 (0.0026) +[2024-06-10 18:42:43,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 97845248. Throughput: 0: 43476.4. Samples: 98001820. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:42:43,240][46753] Avg episode reward: [(0, '0.034')] +[2024-06-10 18:42:46,029][46990] Updated weights for policy 0, policy_version 5980 (0.0032) +[2024-06-10 18:42:48,240][46753] Fps is (10 sec: 45874.2, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 98074624. Throughput: 0: 43479.7. Samples: 98134420. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:42:48,240][46753] Avg episode reward: [(0, '0.026')] +[2024-06-10 18:42:49,325][46990] Updated weights for policy 0, policy_version 5990 (0.0049) +[2024-06-10 18:42:52,420][46970] Signal inference workers to stop experience collection... (1400 times) +[2024-06-10 18:42:52,458][46990] InferenceWorker_p0-w0: stopping experience collection (1400 times) +[2024-06-10 18:42:52,476][46970] Signal inference workers to resume experience collection... (1400 times) +[2024-06-10 18:42:52,478][46990] InferenceWorker_p0-w0: resuming experience collection (1400 times) +[2024-06-10 18:42:53,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 98287616. Throughput: 0: 43741.3. Samples: 98404960. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 18:42:53,240][46753] Avg episode reward: [(0, '0.026')] +[2024-06-10 18:42:53,420][46990] Updated weights for policy 0, policy_version 6000 (0.0029) +[2024-06-10 18:42:57,073][46990] Updated weights for policy 0, policy_version 6010 (0.0040) +[2024-06-10 18:42:58,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43417.5, 300 sec: 43709.1). Total num frames: 98500608. Throughput: 0: 43651.5. Samples: 98661960. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:42:58,243][46753] Avg episode reward: [(0, '0.036')] +[2024-06-10 18:43:00,791][46990] Updated weights for policy 0, policy_version 6020 (0.0039) +[2024-06-10 18:43:03,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43420.7, 300 sec: 43764.7). Total num frames: 98729984. Throughput: 0: 43725.5. Samples: 98793600. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:43:03,240][46753] Avg episode reward: [(0, '0.025')] +[2024-06-10 18:43:04,358][46990] Updated weights for policy 0, policy_version 6030 (0.0039) +[2024-06-10 18:43:08,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 98926592. Throughput: 0: 43853.3. Samples: 99062260. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:43:08,240][46753] Avg episode reward: [(0, '0.026')] +[2024-06-10 18:43:08,672][46990] Updated weights for policy 0, policy_version 6040 (0.0033) +[2024-06-10 18:43:11,609][46990] Updated weights for policy 0, policy_version 6050 (0.0036) +[2024-06-10 18:43:13,240][46753] Fps is (10 sec: 42598.8, 60 sec: 43417.5, 300 sec: 43709.8). Total num frames: 99155968. Throughput: 0: 43534.9. Samples: 99310980. 
Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 18:43:13,240][46753] Avg episode reward: [(0, '0.029')] +[2024-06-10 18:43:16,139][46990] Updated weights for policy 0, policy_version 6060 (0.0036) +[2024-06-10 18:43:18,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 99368960. Throughput: 0: 43619.2. Samples: 99442900. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 18:43:18,240][46753] Avg episode reward: [(0, '0.037')] +[2024-06-10 18:43:18,352][46970] Saving new best policy, reward=0.037! +[2024-06-10 18:43:19,304][46990] Updated weights for policy 0, policy_version 6070 (0.0033) +[2024-06-10 18:43:23,240][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.5, 300 sec: 43709.2). Total num frames: 99598336. Throughput: 0: 43835.4. Samples: 99718660. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 18:43:23,249][46753] Avg episode reward: [(0, '0.034')] +[2024-06-10 18:43:23,501][46990] Updated weights for policy 0, policy_version 6080 (0.0029) +[2024-06-10 18:43:27,236][46990] Updated weights for policy 0, policy_version 6090 (0.0042) +[2024-06-10 18:43:28,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 99811328. Throughput: 0: 43687.5. Samples: 99967760. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 18:43:28,240][46753] Avg episode reward: [(0, '0.038')] +[2024-06-10 18:43:28,249][46970] Saving new best policy, reward=0.038! +[2024-06-10 18:43:31,032][46990] Updated weights for policy 0, policy_version 6100 (0.0043) +[2024-06-10 18:43:33,244][46753] Fps is (10 sec: 44217.3, 60 sec: 43687.4, 300 sec: 43764.7). Total num frames: 100040704. Throughput: 0: 43775.8. Samples: 100104520. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 18:43:33,244][46753] Avg episode reward: [(0, '0.032')] +[2024-06-10 18:43:34,418][46990] Updated weights for policy 0, policy_version 6110 (0.0028) +[2024-06-10 18:43:38,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43417.5, 300 sec: 43542.6). Total num frames: 100220928. Throughput: 0: 43749.8. Samples: 100373700. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 18:43:38,240][46753] Avg episode reward: [(0, '0.033')] +[2024-06-10 18:43:38,565][46990] Updated weights for policy 0, policy_version 6120 (0.0034) +[2024-06-10 18:43:42,041][46990] Updated weights for policy 0, policy_version 6130 (0.0038) +[2024-06-10 18:43:43,239][46753] Fps is (10 sec: 42617.4, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 100466688. Throughput: 0: 43591.1. Samples: 100623560. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 18:43:43,240][46753] Avg episode reward: [(0, '0.031')] +[2024-06-10 18:43:46,147][46990] Updated weights for policy 0, policy_version 6140 (0.0037) +[2024-06-10 18:43:48,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43417.8, 300 sec: 43764.8). Total num frames: 100679680. Throughput: 0: 43776.7. Samples: 100763540. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 18:43:48,240][46753] Avg episode reward: [(0, '0.027')] +[2024-06-10 18:43:49,366][46990] Updated weights for policy 0, policy_version 6150 (0.0031) +[2024-06-10 18:43:53,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 100892672. Throughput: 0: 43681.2. Samples: 101027920. 
Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:43:53,240][46753] Avg episode reward: [(0, '0.030')] +[2024-06-10 18:43:53,592][46990] Updated weights for policy 0, policy_version 6160 (0.0039) +[2024-06-10 18:43:56,929][46990] Updated weights for policy 0, policy_version 6170 (0.0049) +[2024-06-10 18:43:58,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 101122048. Throughput: 0: 43764.1. Samples: 101280360. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:43:58,240][46753] Avg episode reward: [(0, '0.031')] +[2024-06-10 18:44:00,952][46990] Updated weights for policy 0, policy_version 6180 (0.0041) +[2024-06-10 18:44:03,244][46753] Fps is (10 sec: 45854.9, 60 sec: 43687.6, 300 sec: 43819.6). Total num frames: 101351424. Throughput: 0: 43748.5. Samples: 101411780. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 18:44:03,244][46753] Avg episode reward: [(0, '0.042')] +[2024-06-10 18:44:03,261][46970] Saving new best policy, reward=0.042! +[2024-06-10 18:44:04,477][46990] Updated weights for policy 0, policy_version 6190 (0.0046) +[2024-06-10 18:44:08,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 101531648. Throughput: 0: 43624.6. Samples: 101681760. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 18:44:08,240][46753] Avg episode reward: [(0, '0.031')] +[2024-06-10 18:44:08,518][46970] Signal inference workers to stop experience collection... (1450 times) +[2024-06-10 18:44:08,568][46990] InferenceWorker_p0-w0: stopping experience collection (1450 times) +[2024-06-10 18:44:08,625][46970] Signal inference workers to resume experience collection... (1450 times) +[2024-06-10 18:44:08,625][46990] InferenceWorker_p0-w0: resuming experience collection (1450 times) +[2024-06-10 18:44:08,750][46990] Updated weights for policy 0, policy_version 6200 (0.0033) +[2024-06-10 18:44:11,727][46990] Updated weights for policy 0, policy_version 6210 (0.0031) +[2024-06-10 18:44:13,240][46753] Fps is (10 sec: 42616.8, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 101777408. Throughput: 0: 43739.0. Samples: 101936020. Policy #0 lag: (min: 0.0, avg: 13.4, max: 23.0) +[2024-06-10 18:44:13,240][46753] Avg episode reward: [(0, '0.034')] +[2024-06-10 18:44:16,332][46990] Updated weights for policy 0, policy_version 6220 (0.0034) +[2024-06-10 18:44:18,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 101990400. Throughput: 0: 43865.3. Samples: 102078260. Policy #0 lag: (min: 0.0, avg: 11.1, max: 24.0) +[2024-06-10 18:44:18,240][46753] Avg episode reward: [(0, '0.040')] +[2024-06-10 18:44:19,018][46990] Updated weights for policy 0, policy_version 6230 (0.0029) +[2024-06-10 18:44:23,240][46753] Fps is (10 sec: 42598.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 102203392. Throughput: 0: 43788.4. Samples: 102344180. Policy #0 lag: (min: 0.0, avg: 11.1, max: 24.0) +[2024-06-10 18:44:23,240][46753] Avg episode reward: [(0, '0.041')] +[2024-06-10 18:44:23,383][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000006239_102219776.pth... 
+[2024-06-10 18:44:23,432][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000005598_91717632.pth +[2024-06-10 18:44:23,567][46990] Updated weights for policy 0, policy_version 6240 (0.0028) +[2024-06-10 18:44:26,756][46990] Updated weights for policy 0, policy_version 6250 (0.0038) +[2024-06-10 18:44:28,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 102432768. Throughput: 0: 43914.3. Samples: 102599700. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 18:44:28,240][46753] Avg episode reward: [(0, '0.035')] +[2024-06-10 18:44:30,955][46990] Updated weights for policy 0, policy_version 6260 (0.0024) +[2024-06-10 18:44:33,240][46753] Fps is (10 sec: 45874.9, 60 sec: 43693.8, 300 sec: 43820.2). Total num frames: 102662144. Throughput: 0: 43699.3. Samples: 102730020. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 18:44:33,243][46753] Avg episode reward: [(0, '0.040')] +[2024-06-10 18:44:34,213][46990] Updated weights for policy 0, policy_version 6270 (0.0030) +[2024-06-10 18:44:38,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 102858752. Throughput: 0: 43817.2. Samples: 102999700. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0) +[2024-06-10 18:44:38,240][46753] Avg episode reward: [(0, '0.044')] +[2024-06-10 18:44:38,241][46970] Saving new best policy, reward=0.044! +[2024-06-10 18:44:38,800][46990] Updated weights for policy 0, policy_version 6280 (0.0046) +[2024-06-10 18:44:41,675][46990] Updated weights for policy 0, policy_version 6290 (0.0035) +[2024-06-10 18:44:43,239][46753] Fps is (10 sec: 42599.6, 60 sec: 43690.8, 300 sec: 43653.7). Total num frames: 103088128. Throughput: 0: 43793.9. Samples: 103251080. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 18:44:43,240][46753] Avg episode reward: [(0, '0.033')] +[2024-06-10 18:44:46,306][46990] Updated weights for policy 0, policy_version 6300 (0.0029) +[2024-06-10 18:44:48,240][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 103317504. Throughput: 0: 43936.2. Samples: 103388720. Policy #0 lag: (min: 0.0, avg: 9.4, max: 19.0) +[2024-06-10 18:44:48,240][46753] Avg episode reward: [(0, '0.040')] +[2024-06-10 18:44:48,993][46990] Updated weights for policy 0, policy_version 6310 (0.0026) +[2024-06-10 18:44:53,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 103530496. Throughput: 0: 43669.7. Samples: 103646900. Policy #0 lag: (min: 1.0, avg: 9.8, max: 22.0) +[2024-06-10 18:44:53,240][46753] Avg episode reward: [(0, '0.040')] +[2024-06-10 18:44:53,497][46990] Updated weights for policy 0, policy_version 6320 (0.0023) +[2024-06-10 18:44:56,819][46990] Updated weights for policy 0, policy_version 6330 (0.0047) +[2024-06-10 18:44:58,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 103727104. Throughput: 0: 43837.9. Samples: 103908720. Policy #0 lag: (min: 1.0, avg: 9.8, max: 22.0) +[2024-06-10 18:44:58,242][46753] Avg episode reward: [(0, '0.043')] +[2024-06-10 18:45:01,486][46990] Updated weights for policy 0, policy_version 6340 (0.0038) +[2024-06-10 18:45:03,240][46753] Fps is (10 sec: 42593.8, 60 sec: 43420.1, 300 sec: 43709.7). Total num frames: 103956480. Throughput: 0: 43521.6. Samples: 104036780. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 18:45:03,241][46753] Avg episode reward: [(0, '0.035')] +[2024-06-10 18:45:04,482][46990] Updated weights for policy 0, policy_version 6350 (0.0036) +[2024-06-10 18:45:08,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 104136704. Throughput: 0: 43317.5. Samples: 104293460. Policy #0 lag: (min: 0.0, avg: 13.4, max: 22.0) +[2024-06-10 18:45:08,240][46753] Avg episode reward: [(0, '0.043')] +[2024-06-10 18:45:09,367][46990] Updated weights for policy 0, policy_version 6360 (0.0031) +[2024-06-10 18:45:11,772][46990] Updated weights for policy 0, policy_version 6370 (0.0040) +[2024-06-10 18:45:13,240][46753] Fps is (10 sec: 44241.1, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 104398848. Throughput: 0: 43370.1. Samples: 104551360. Policy #0 lag: (min: 0.0, avg: 12.4, max: 22.0) +[2024-06-10 18:45:13,240][46753] Avg episode reward: [(0, '0.042')] +[2024-06-10 18:45:16,670][46990] Updated weights for policy 0, policy_version 6380 (0.0037) +[2024-06-10 18:45:18,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 104611840. Throughput: 0: 43554.4. Samples: 104689960. Policy #0 lag: (min: 1.0, avg: 9.8, max: 22.0) +[2024-06-10 18:45:18,240][46753] Avg episode reward: [(0, '0.040')] +[2024-06-10 18:45:19,158][46990] Updated weights for policy 0, policy_version 6390 (0.0044) +[2024-06-10 18:45:23,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 104824832. Throughput: 0: 43219.3. Samples: 104944560. Policy #0 lag: (min: 1.0, avg: 9.8, max: 22.0) +[2024-06-10 18:45:23,240][46753] Avg episode reward: [(0, '0.035')] +[2024-06-10 18:45:23,899][46990] Updated weights for policy 0, policy_version 6400 (0.0035) +[2024-06-10 18:45:25,814][46970] Signal inference workers to stop experience collection... (1500 times) +[2024-06-10 18:45:25,862][46990] InferenceWorker_p0-w0: stopping experience collection (1500 times) +[2024-06-10 18:45:25,871][46970] Signal inference workers to resume experience collection... (1500 times) +[2024-06-10 18:45:25,880][46990] InferenceWorker_p0-w0: resuming experience collection (1500 times) +[2024-06-10 18:45:26,928][46990] Updated weights for policy 0, policy_version 6410 (0.0042) +[2024-06-10 18:45:28,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 105054208. Throughput: 0: 43564.4. Samples: 105211480. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 18:45:28,240][46753] Avg episode reward: [(0, '0.037')] +[2024-06-10 18:45:31,617][46990] Updated weights for policy 0, policy_version 6420 (0.0051) +[2024-06-10 18:45:33,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 105267200. Throughput: 0: 43492.1. Samples: 105345860. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 18:45:33,242][46753] Avg episode reward: [(0, '0.056')] +[2024-06-10 18:45:33,255][46970] Saving new best policy, reward=0.056! +[2024-06-10 18:45:34,297][46990] Updated weights for policy 0, policy_version 6430 (0.0042) +[2024-06-10 18:45:38,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43417.7, 300 sec: 43542.6). Total num frames: 105463808. Throughput: 0: 43488.5. Samples: 105603880. 
Policy #0 lag: (min: 0.0, avg: 13.5, max: 23.0) +[2024-06-10 18:45:38,240][46753] Avg episode reward: [(0, '0.051')] +[2024-06-10 18:45:39,399][46990] Updated weights for policy 0, policy_version 6440 (0.0035) +[2024-06-10 18:45:41,860][46990] Updated weights for policy 0, policy_version 6450 (0.0033) +[2024-06-10 18:45:43,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 105709568. Throughput: 0: 43380.1. Samples: 105860820. Policy #0 lag: (min: 0.0, avg: 11.2, max: 24.0) +[2024-06-10 18:45:43,240][46753] Avg episode reward: [(0, '0.035')] +[2024-06-10 18:45:46,586][46990] Updated weights for policy 0, policy_version 6460 (0.0036) +[2024-06-10 18:45:48,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 105922560. Throughput: 0: 43557.5. Samples: 105996820. Policy #0 lag: (min: 0.0, avg: 6.9, max: 18.0) +[2024-06-10 18:45:48,240][46753] Avg episode reward: [(0, '0.056')] +[2024-06-10 18:45:49,409][46990] Updated weights for policy 0, policy_version 6470 (0.0038) +[2024-06-10 18:45:53,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 106151936. Throughput: 0: 43702.6. Samples: 106260080. Policy #0 lag: (min: 0.0, avg: 6.9, max: 18.0) +[2024-06-10 18:45:53,240][46753] Avg episode reward: [(0, '0.046')] +[2024-06-10 18:45:53,720][46990] Updated weights for policy 0, policy_version 6480 (0.0032) +[2024-06-10 18:45:57,014][46990] Updated weights for policy 0, policy_version 6490 (0.0035) +[2024-06-10 18:45:58,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 106364928. Throughput: 0: 43647.6. Samples: 106515500. Policy #0 lag: (min: 0.0, avg: 10.4, max: 23.0) +[2024-06-10 18:45:58,240][46753] Avg episode reward: [(0, '0.043')] +[2024-06-10 18:46:01,403][46990] Updated weights for policy 0, policy_version 6500 (0.0048) +[2024-06-10 18:46:03,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43418.4, 300 sec: 43653.7). Total num frames: 106561536. Throughput: 0: 43544.1. Samples: 106649440. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:46:03,240][46753] Avg episode reward: [(0, '0.049')] +[2024-06-10 18:46:04,649][46990] Updated weights for policy 0, policy_version 6510 (0.0035) +[2024-06-10 18:46:08,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 106774528. Throughput: 0: 43624.8. Samples: 106907680. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 18:46:08,240][46753] Avg episode reward: [(0, '0.048')] +[2024-06-10 18:46:09,162][46990] Updated weights for policy 0, policy_version 6520 (0.0037) +[2024-06-10 18:46:12,130][46990] Updated weights for policy 0, policy_version 6530 (0.0028) +[2024-06-10 18:46:13,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 107020288. Throughput: 0: 43419.0. Samples: 107165340. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 18:46:13,240][46753] Avg episode reward: [(0, '0.053')] +[2024-06-10 18:46:16,354][46990] Updated weights for policy 0, policy_version 6540 (0.0047) +[2024-06-10 18:46:18,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 107216896. Throughput: 0: 43489.4. Samples: 107302880. 
Policy #0 lag: (min: 0.0, avg: 7.9, max: 20.0) +[2024-06-10 18:46:18,240][46753] Avg episode reward: [(0, '0.044')] +[2024-06-10 18:46:19,556][46990] Updated weights for policy 0, policy_version 6550 (0.0039) +[2024-06-10 18:46:23,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 107462656. Throughput: 0: 43677.3. Samples: 107569360. Policy #0 lag: (min: 0.0, avg: 7.9, max: 20.0) +[2024-06-10 18:46:23,240][46753] Avg episode reward: [(0, '0.041')] +[2024-06-10 18:46:23,256][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000006559_107462656.pth... +[2024-06-10 18:46:23,311][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000005920_96993280.pth +[2024-06-10 18:46:23,481][46990] Updated weights for policy 0, policy_version 6560 (0.0027) +[2024-06-10 18:46:26,977][46990] Updated weights for policy 0, policy_version 6570 (0.0043) +[2024-06-10 18:46:28,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 107692032. Throughput: 0: 43734.9. Samples: 107828900. Policy #0 lag: (min: 0.0, avg: 7.1, max: 20.0) +[2024-06-10 18:46:28,240][46753] Avg episode reward: [(0, '0.038')] +[2024-06-10 18:46:31,210][46990] Updated weights for policy 0, policy_version 6580 (0.0025) +[2024-06-10 18:46:33,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 107888640. Throughput: 0: 43668.8. Samples: 107961920. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 18:46:33,240][46753] Avg episode reward: [(0, '0.046')] +[2024-06-10 18:46:34,688][46990] Updated weights for policy 0, policy_version 6590 (0.0032) +[2024-06-10 18:46:36,876][46970] Signal inference workers to stop experience collection... (1550 times) +[2024-06-10 18:46:36,876][46970] Signal inference workers to resume experience collection... (1550 times) +[2024-06-10 18:46:36,902][46990] InferenceWorker_p0-w0: stopping experience collection (1550 times) +[2024-06-10 18:46:36,902][46990] InferenceWorker_p0-w0: resuming experience collection (1550 times) +[2024-06-10 18:46:38,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 108101632. Throughput: 0: 43624.4. Samples: 108223180. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 18:46:38,240][46753] Avg episode reward: [(0, '0.043')] +[2024-06-10 18:46:38,955][46990] Updated weights for policy 0, policy_version 6600 (0.0033) +[2024-06-10 18:46:42,179][46990] Updated weights for policy 0, policy_version 6610 (0.0035) +[2024-06-10 18:46:43,240][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 108347392. Throughput: 0: 43704.4. Samples: 108482200. Policy #0 lag: (min: 1.0, avg: 11.4, max: 23.0) +[2024-06-10 18:46:43,240][46753] Avg episode reward: [(0, '0.049')] +[2024-06-10 18:46:46,418][46990] Updated weights for policy 0, policy_version 6620 (0.0033) +[2024-06-10 18:46:48,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 108527616. Throughput: 0: 43798.3. Samples: 108620360. Policy #0 lag: (min: 1.0, avg: 11.4, max: 23.0) +[2024-06-10 18:46:48,240][46753] Avg episode reward: [(0, '0.048')] +[2024-06-10 18:46:49,781][46990] Updated weights for policy 0, policy_version 6630 (0.0043) +[2024-06-10 18:46:53,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 108773376. Throughput: 0: 43796.9. Samples: 108878540. 
Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 18:46:53,240][46753] Avg episode reward: [(0, '0.046')] +[2024-06-10 18:46:53,591][46990] Updated weights for policy 0, policy_version 6640 (0.0025) +[2024-06-10 18:46:57,029][46990] Updated weights for policy 0, policy_version 6650 (0.0052) +[2024-06-10 18:46:58,240][46753] Fps is (10 sec: 47512.7, 60 sec: 43963.7, 300 sec: 43654.3). Total num frames: 109002752. Throughput: 0: 43893.8. Samples: 109140560. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 18:46:58,240][46753] Avg episode reward: [(0, '0.047')] +[2024-06-10 18:47:01,346][46990] Updated weights for policy 0, policy_version 6660 (0.0027) +[2024-06-10 18:47:03,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 109199360. Throughput: 0: 43781.2. Samples: 109273040. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 18:47:03,240][46753] Avg episode reward: [(0, '0.045')] +[2024-06-10 18:47:04,688][46990] Updated weights for policy 0, policy_version 6670 (0.0045) +[2024-06-10 18:47:08,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 109412352. Throughput: 0: 43714.7. Samples: 109536520. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:47:08,240][46753] Avg episode reward: [(0, '0.049')] +[2024-06-10 18:47:08,896][46990] Updated weights for policy 0, policy_version 6680 (0.0042) +[2024-06-10 18:47:12,164][46990] Updated weights for policy 0, policy_version 6690 (0.0035) +[2024-06-10 18:47:13,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 109641728. Throughput: 0: 43906.4. Samples: 109804680. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:47:13,240][46753] Avg episode reward: [(0, '0.049')] +[2024-06-10 18:47:16,027][46990] Updated weights for policy 0, policy_version 6700 (0.0033) +[2024-06-10 18:47:18,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 109854720. Throughput: 0: 43864.5. Samples: 109935820. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 18:47:18,240][46753] Avg episode reward: [(0, '0.049')] +[2024-06-10 18:47:19,562][46990] Updated weights for policy 0, policy_version 6710 (0.0028) +[2024-06-10 18:47:23,177][46990] Updated weights for policy 0, policy_version 6720 (0.0030) +[2024-06-10 18:47:23,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 110100480. Throughput: 0: 43788.5. Samples: 110193660. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 18:47:23,240][46753] Avg episode reward: [(0, '0.063')] +[2024-06-10 18:47:23,253][46970] Saving new best policy, reward=0.063! +[2024-06-10 18:47:27,123][46990] Updated weights for policy 0, policy_version 6730 (0.0027) +[2024-06-10 18:47:28,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 110313472. Throughput: 0: 43878.7. Samples: 110456740. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 18:47:28,240][46753] Avg episode reward: [(0, '0.055')] +[2024-06-10 18:47:31,061][46990] Updated weights for policy 0, policy_version 6740 (0.0036) +[2024-06-10 18:47:33,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 110510080. Throughput: 0: 43762.2. Samples: 110589660. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 18:47:33,240][46753] Avg episode reward: [(0, '0.059')] +[2024-06-10 18:47:34,957][46990] Updated weights for policy 0, policy_version 6750 (0.0037) +[2024-06-10 18:47:38,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 110723072. Throughput: 0: 43753.2. Samples: 110847440. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 18:47:38,240][46753] Avg episode reward: [(0, '0.055')] +[2024-06-10 18:47:38,794][46990] Updated weights for policy 0, policy_version 6760 (0.0023) +[2024-06-10 18:47:42,236][46990] Updated weights for policy 0, policy_version 6770 (0.0031) +[2024-06-10 18:47:43,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 110968832. Throughput: 0: 43837.9. Samples: 111113260. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 18:47:43,240][46753] Avg episode reward: [(0, '0.039')] +[2024-06-10 18:47:45,861][46990] Updated weights for policy 0, policy_version 6780 (0.0036) +[2024-06-10 18:47:48,239][46753] Fps is (10 sec: 45875.4, 60 sec: 44236.7, 300 sec: 43709.2). Total num frames: 111181824. Throughput: 0: 43855.6. Samples: 111246540. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:47:48,240][46753] Avg episode reward: [(0, '0.048')] +[2024-06-10 18:47:49,531][46990] Updated weights for policy 0, policy_version 6790 (0.0030) +[2024-06-10 18:47:53,244][46753] Fps is (10 sec: 44216.6, 60 sec: 43960.4, 300 sec: 43764.1). Total num frames: 111411200. Throughput: 0: 43917.3. Samples: 111513000. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 18:47:53,245][46753] Avg episode reward: [(0, '0.045')] +[2024-06-10 18:47:53,245][46990] Updated weights for policy 0, policy_version 6800 (0.0043) +[2024-06-10 18:47:56,980][46990] Updated weights for policy 0, policy_version 6810 (0.0040) +[2024-06-10 18:47:58,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 111624192. Throughput: 0: 43815.4. Samples: 111776380. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 18:47:58,240][46753] Avg episode reward: [(0, '0.047')] +[2024-06-10 18:48:00,993][46990] Updated weights for policy 0, policy_version 6820 (0.0030) +[2024-06-10 18:48:03,239][46753] Fps is (10 sec: 42617.9, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 111837184. Throughput: 0: 43889.8. Samples: 111910860. Policy #0 lag: (min: 0.0, avg: 9.2, max: 23.0) +[2024-06-10 18:48:03,240][46753] Avg episode reward: [(0, '0.064')] +[2024-06-10 18:48:03,353][46970] Saving new best policy, reward=0.064! +[2024-06-10 18:48:04,574][46990] Updated weights for policy 0, policy_version 6830 (0.0038) +[2024-06-10 18:48:08,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 112050176. Throughput: 0: 43898.2. Samples: 112169080. Policy #0 lag: (min: 0.0, avg: 9.2, max: 23.0) +[2024-06-10 18:48:08,240][46753] Avg episode reward: [(0, '0.060')] +[2024-06-10 18:48:08,531][46990] Updated weights for policy 0, policy_version 6840 (0.0034) +[2024-06-10 18:48:12,255][46990] Updated weights for policy 0, policy_version 6850 (0.0037) +[2024-06-10 18:48:13,239][46753] Fps is (10 sec: 45874.9, 60 sec: 44236.7, 300 sec: 43820.2). Total num frames: 112295936. Throughput: 0: 43979.5. Samples: 112435820. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 18:48:13,240][46753] Avg episode reward: [(0, '0.056')] +[2024-06-10 18:48:15,756][46990] Updated weights for policy 0, policy_version 6860 (0.0045) +[2024-06-10 18:48:16,735][46970] Signal inference workers to stop experience collection... (1600 times) +[2024-06-10 18:48:16,735][46970] Signal inference workers to resume experience collection... (1600 times) +[2024-06-10 18:48:16,775][46990] InferenceWorker_p0-w0: stopping experience collection (1600 times) +[2024-06-10 18:48:16,775][46990] InferenceWorker_p0-w0: resuming experience collection (1600 times) +[2024-06-10 18:48:18,239][46753] Fps is (10 sec: 45875.3, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 112508928. Throughput: 0: 43969.3. Samples: 112568280. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 18:48:18,249][46753] Avg episode reward: [(0, '0.060')] +[2024-06-10 18:48:19,706][46990] Updated weights for policy 0, policy_version 6870 (0.0042) +[2024-06-10 18:48:22,840][46990] Updated weights for policy 0, policy_version 6880 (0.0033) +[2024-06-10 18:48:23,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 112721920. Throughput: 0: 44061.9. Samples: 112830220. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 18:48:23,240][46753] Avg episode reward: [(0, '0.060')] +[2024-06-10 18:48:23,262][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000006880_112721920.pth... +[2024-06-10 18:48:23,325][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000006239_102219776.pth +[2024-06-10 18:48:27,175][46990] Updated weights for policy 0, policy_version 6890 (0.0034) +[2024-06-10 18:48:28,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43963.8, 300 sec: 43765.4). Total num frames: 112951296. Throughput: 0: 43948.0. Samples: 113090920. Policy #0 lag: (min: 1.0, avg: 11.8, max: 24.0) +[2024-06-10 18:48:28,240][46753] Avg episode reward: [(0, '0.048')] +[2024-06-10 18:48:30,532][46990] Updated weights for policy 0, policy_version 6900 (0.0033) +[2024-06-10 18:48:33,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 113147904. Throughput: 0: 44049.9. Samples: 113228780. Policy #0 lag: (min: 1.0, avg: 11.8, max: 24.0) +[2024-06-10 18:48:33,240][46753] Avg episode reward: [(0, '0.045')] +[2024-06-10 18:48:34,701][46990] Updated weights for policy 0, policy_version 6910 (0.0029) +[2024-06-10 18:48:38,235][46990] Updated weights for policy 0, policy_version 6920 (0.0022) +[2024-06-10 18:48:38,240][46753] Fps is (10 sec: 42597.8, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 113377280. Throughput: 0: 43815.0. Samples: 113484480. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 18:48:38,248][46753] Avg episode reward: [(0, '0.044')] +[2024-06-10 18:48:42,126][46990] Updated weights for policy 0, policy_version 6930 (0.0032) +[2024-06-10 18:48:43,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 113590272. Throughput: 0: 43988.5. Samples: 113755860. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:48:43,240][46753] Avg episode reward: [(0, '0.052')] +[2024-06-10 18:48:45,583][46990] Updated weights for policy 0, policy_version 6940 (0.0026) +[2024-06-10 18:48:48,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 113819648. Throughput: 0: 43914.6. Samples: 113887020. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 18:48:48,240][46753] Avg episode reward: [(0, '0.063')] +[2024-06-10 18:48:49,766][46990] Updated weights for policy 0, policy_version 6950 (0.0035) +[2024-06-10 18:48:52,718][46990] Updated weights for policy 0, policy_version 6960 (0.0026) +[2024-06-10 18:48:53,240][46753] Fps is (10 sec: 45874.8, 60 sec: 43967.0, 300 sec: 43820.3). Total num frames: 114049024. Throughput: 0: 44048.8. Samples: 114151280. Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 18:48:53,240][46753] Avg episode reward: [(0, '0.050')] +[2024-06-10 18:48:56,911][46990] Updated weights for policy 0, policy_version 6970 (0.0029) +[2024-06-10 18:48:58,244][46753] Fps is (10 sec: 42579.3, 60 sec: 43687.4, 300 sec: 43709.2). Total num frames: 114245632. Throughput: 0: 43808.5. Samples: 114407400. Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 18:48:58,244][46753] Avg episode reward: [(0, '0.058')] +[2024-06-10 18:49:00,391][46990] Updated weights for policy 0, policy_version 6980 (0.0039) +[2024-06-10 18:49:03,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 114475008. Throughput: 0: 43816.0. Samples: 114540000. Policy #0 lag: (min: 0.0, avg: 8.7, max: 18.0) +[2024-06-10 18:49:03,240][46753] Avg episode reward: [(0, '0.059')] +[2024-06-10 18:49:04,429][46990] Updated weights for policy 0, policy_version 6990 (0.0043) +[2024-06-10 18:49:08,047][46990] Updated weights for policy 0, policy_version 7000 (0.0035) +[2024-06-10 18:49:08,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 114688000. Throughput: 0: 43726.2. Samples: 114797900. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 18:49:08,240][46753] Avg episode reward: [(0, '0.058')] +[2024-06-10 18:49:11,746][46990] Updated weights for policy 0, policy_version 7010 (0.0042) +[2024-06-10 18:49:13,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 114900992. Throughput: 0: 43945.3. Samples: 115068460. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 18:49:13,240][46753] Avg episode reward: [(0, '0.062')] +[2024-06-10 18:49:15,538][46990] Updated weights for policy 0, policy_version 7020 (0.0043) +[2024-06-10 18:49:18,240][46753] Fps is (10 sec: 45874.4, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 115146752. Throughput: 0: 43681.6. Samples: 115194460. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 18:49:18,240][46753] Avg episode reward: [(0, '0.061')] +[2024-06-10 18:49:19,609][46990] Updated weights for policy 0, policy_version 7030 (0.0040) +[2024-06-10 18:49:22,775][46990] Updated weights for policy 0, policy_version 7040 (0.0037) +[2024-06-10 18:49:23,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 115359744. Throughput: 0: 43917.0. Samples: 115460740. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 18:49:23,240][46753] Avg episode reward: [(0, '0.056')] +[2024-06-10 18:49:27,053][46990] Updated weights for policy 0, policy_version 7050 (0.0052) +[2024-06-10 18:49:28,239][46753] Fps is (10 sec: 40960.7, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 115556352. Throughput: 0: 43454.7. Samples: 115711320. 
Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 18:49:28,240][46753] Avg episode reward: [(0, '0.059')] +[2024-06-10 18:49:30,448][46990] Updated weights for policy 0, policy_version 7060 (0.0047) +[2024-06-10 18:49:33,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 115785728. Throughput: 0: 43488.0. Samples: 115843980. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 18:49:33,240][46753] Avg episode reward: [(0, '0.059')] +[2024-06-10 18:49:34,804][46990] Updated weights for policy 0, policy_version 7070 (0.0038) +[2024-06-10 18:49:38,086][46990] Updated weights for policy 0, policy_version 7080 (0.0033) +[2024-06-10 18:49:38,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 115998720. Throughput: 0: 43477.9. Samples: 116107780. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 18:49:38,240][46753] Avg episode reward: [(0, '0.053')] +[2024-06-10 18:49:42,019][46990] Updated weights for policy 0, policy_version 7090 (0.0032) +[2024-06-10 18:49:43,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43417.5, 300 sec: 43653.7). Total num frames: 116195328. Throughput: 0: 43766.5. Samples: 116376700. Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 18:49:43,240][46753] Avg episode reward: [(0, '0.061')] +[2024-06-10 18:49:45,403][46970] Signal inference workers to stop experience collection... (1650 times) +[2024-06-10 18:49:45,456][46990] InferenceWorker_p0-w0: stopping experience collection (1650 times) +[2024-06-10 18:49:45,519][46970] Signal inference workers to resume experience collection... (1650 times) +[2024-06-10 18:49:45,519][46990] InferenceWorker_p0-w0: resuming experience collection (1650 times) +[2024-06-10 18:49:45,693][46990] Updated weights for policy 0, policy_version 7100 (0.0037) +[2024-06-10 18:49:48,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 116424704. Throughput: 0: 43557.0. Samples: 116500060. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 18:49:48,240][46753] Avg episode reward: [(0, '0.062')] +[2024-06-10 18:49:49,424][46990] Updated weights for policy 0, policy_version 7110 (0.0027) +[2024-06-10 18:49:52,884][46990] Updated weights for policy 0, policy_version 7120 (0.0036) +[2024-06-10 18:49:53,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 116670464. Throughput: 0: 43952.8. Samples: 116775780. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 18:49:53,240][46753] Avg episode reward: [(0, '0.062')] +[2024-06-10 18:49:56,886][46990] Updated weights for policy 0, policy_version 7130 (0.0031) +[2024-06-10 18:49:58,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43693.9, 300 sec: 43764.9). Total num frames: 116867072. Throughput: 0: 43587.1. Samples: 117029880. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 18:49:58,240][46753] Avg episode reward: [(0, '0.056')] +[2024-06-10 18:50:00,343][46990] Updated weights for policy 0, policy_version 7140 (0.0026) +[2024-06-10 18:50:03,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 117112832. Throughput: 0: 43737.4. Samples: 117162640. 
Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 18:50:03,240][46753] Avg episode reward: [(0, '0.063')] +[2024-06-10 18:50:04,399][46990] Updated weights for policy 0, policy_version 7150 (0.0031) +[2024-06-10 18:50:07,776][46990] Updated weights for policy 0, policy_version 7160 (0.0054) +[2024-06-10 18:50:08,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 117325824. Throughput: 0: 43722.3. Samples: 117428240. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 18:50:08,240][46753] Avg episode reward: [(0, '0.078')] +[2024-06-10 18:50:08,244][46970] Saving new best policy, reward=0.078! +[2024-06-10 18:50:11,552][46990] Updated weights for policy 0, policy_version 7170 (0.0041) +[2024-06-10 18:50:13,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 117522432. Throughput: 0: 44054.2. Samples: 117693760. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 18:50:13,240][46753] Avg episode reward: [(0, '0.058')] +[2024-06-10 18:50:15,182][46990] Updated weights for policy 0, policy_version 7180 (0.0037) +[2024-06-10 18:50:18,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.7, 300 sec: 43820.2). Total num frames: 117751808. Throughput: 0: 43979.6. Samples: 117823060. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 18:50:18,240][46753] Avg episode reward: [(0, '0.057')] +[2024-06-10 18:50:18,843][46990] Updated weights for policy 0, policy_version 7190 (0.0035) +[2024-06-10 18:50:22,866][46990] Updated weights for policy 0, policy_version 7200 (0.0049) +[2024-06-10 18:50:23,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 117964800. Throughput: 0: 44018.2. Samples: 118088600. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 18:50:23,240][46753] Avg episode reward: [(0, '0.064')] +[2024-06-10 18:50:23,361][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000007201_117981184.pth... +[2024-06-10 18:50:23,407][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000006559_107462656.pth +[2024-06-10 18:50:26,720][46990] Updated weights for policy 0, policy_version 7210 (0.0037) +[2024-06-10 18:50:28,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 118177792. Throughput: 0: 43794.2. Samples: 118347440. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:50:28,250][46753] Avg episode reward: [(0, '0.050')] +[2024-06-10 18:50:30,710][46990] Updated weights for policy 0, policy_version 7220 (0.0028) +[2024-06-10 18:50:33,244][46753] Fps is (10 sec: 44216.9, 60 sec: 43687.4, 300 sec: 43875.1). Total num frames: 118407168. Throughput: 0: 43960.9. Samples: 118478500. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:50:33,244][46753] Avg episode reward: [(0, '0.058')] +[2024-06-10 18:50:34,074][46990] Updated weights for policy 0, policy_version 7230 (0.0025) +[2024-06-10 18:50:37,793][46990] Updated weights for policy 0, policy_version 7240 (0.0042) +[2024-06-10 18:50:38,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 118620160. Throughput: 0: 43789.1. Samples: 118746280. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 18:50:38,240][46753] Avg episode reward: [(0, '0.061')] +[2024-06-10 18:50:41,286][46990] Updated weights for policy 0, policy_version 7250 (0.0038) +[2024-06-10 18:50:43,239][46753] Fps is (10 sec: 42617.4, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 118833152. Throughput: 0: 44055.1. Samples: 119012360. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 18:50:43,241][46753] Avg episode reward: [(0, '0.064')] +[2024-06-10 18:50:45,073][46990] Updated weights for policy 0, policy_version 7260 (0.0037) +[2024-06-10 18:50:48,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 119062528. Throughput: 0: 43937.9. Samples: 119139840. Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 18:50:48,240][46753] Avg episode reward: [(0, '0.067')] +[2024-06-10 18:50:48,644][46990] Updated weights for policy 0, policy_version 7270 (0.0037) +[2024-06-10 18:50:52,793][46990] Updated weights for policy 0, policy_version 7280 (0.0026) +[2024-06-10 18:50:53,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 119275520. Throughput: 0: 43961.7. Samples: 119406520. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 18:50:53,240][46753] Avg episode reward: [(0, '0.065')] +[2024-06-10 18:50:56,287][46990] Updated weights for policy 0, policy_version 7290 (0.0044) +[2024-06-10 18:50:58,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 119488512. Throughput: 0: 43829.0. Samples: 119666060. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 18:50:58,240][46753] Avg episode reward: [(0, '0.064')] +[2024-06-10 18:51:00,713][46990] Updated weights for policy 0, policy_version 7300 (0.0033) +[2024-06-10 18:51:03,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43417.7, 300 sec: 43875.8). Total num frames: 119717888. Throughput: 0: 43918.3. Samples: 119799380. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:51:03,240][46753] Avg episode reward: [(0, '0.060')] +[2024-06-10 18:51:03,734][46990] Updated weights for policy 0, policy_version 7310 (0.0023) +[2024-06-10 18:51:08,005][46990] Updated weights for policy 0, policy_version 7320 (0.0037) +[2024-06-10 18:51:08,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 119930880. Throughput: 0: 43801.8. Samples: 120059680. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 18:51:08,240][46753] Avg episode reward: [(0, '0.067')] +[2024-06-10 18:51:10,983][46990] Updated weights for policy 0, policy_version 7330 (0.0032) +[2024-06-10 18:51:13,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 120143872. Throughput: 0: 43921.8. Samples: 120323920. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 18:51:13,243][46753] Avg episode reward: [(0, '0.071')] +[2024-06-10 18:51:15,256][46990] Updated weights for policy 0, policy_version 7340 (0.0039) +[2024-06-10 18:51:18,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 120389632. Throughput: 0: 43823.9. Samples: 120450380. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 18:51:18,240][46753] Avg episode reward: [(0, '0.052')] +[2024-06-10 18:51:18,670][46990] Updated weights for policy 0, policy_version 7350 (0.0039) +[2024-06-10 18:51:18,672][46970] Signal inference workers to stop experience collection... 
(1700 times) +[2024-06-10 18:51:18,672][46970] Signal inference workers to resume experience collection... (1700 times) +[2024-06-10 18:51:18,711][46990] InferenceWorker_p0-w0: stopping experience collection (1700 times) +[2024-06-10 18:51:18,711][46990] InferenceWorker_p0-w0: resuming experience collection (1700 times) +[2024-06-10 18:51:23,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43417.5, 300 sec: 43653.6). Total num frames: 120569856. Throughput: 0: 43718.0. Samples: 120713600. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:51:23,240][46753] Avg episode reward: [(0, '0.062')] +[2024-06-10 18:51:23,272][46990] Updated weights for policy 0, policy_version 7360 (0.0039) +[2024-06-10 18:51:26,357][46990] Updated weights for policy 0, policy_version 7370 (0.0042) +[2024-06-10 18:51:28,244][46753] Fps is (10 sec: 40941.6, 60 sec: 43687.4, 300 sec: 43764.1). Total num frames: 120799232. Throughput: 0: 43589.0. Samples: 120974060. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:51:28,244][46753] Avg episode reward: [(0, '0.061')] +[2024-06-10 18:51:30,798][46990] Updated weights for policy 0, policy_version 7380 (0.0045) +[2024-06-10 18:51:33,239][46753] Fps is (10 sec: 47514.6, 60 sec: 43967.1, 300 sec: 43875.8). Total num frames: 121044992. Throughput: 0: 43765.4. Samples: 121109280. Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 18:51:33,240][46753] Avg episode reward: [(0, '0.067')] +[2024-06-10 18:51:33,800][46990] Updated weights for policy 0, policy_version 7390 (0.0043) +[2024-06-10 18:51:38,041][46990] Updated weights for policy 0, policy_version 7400 (0.0027) +[2024-06-10 18:51:38,240][46753] Fps is (10 sec: 44256.1, 60 sec: 43690.5, 300 sec: 43709.2). Total num frames: 121241600. Throughput: 0: 43726.6. Samples: 121374220. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:51:38,240][46753] Avg episode reward: [(0, '0.070')] +[2024-06-10 18:51:40,948][46990] Updated weights for policy 0, policy_version 7410 (0.0038) +[2024-06-10 18:51:43,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 121454592. Throughput: 0: 43809.7. Samples: 121637500. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:51:43,240][46753] Avg episode reward: [(0, '0.068')] +[2024-06-10 18:51:45,383][46990] Updated weights for policy 0, policy_version 7420 (0.0038) +[2024-06-10 18:51:48,240][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 121700352. Throughput: 0: 43692.8. Samples: 121765560. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:51:48,240][46753] Avg episode reward: [(0, '0.063')] +[2024-06-10 18:51:48,456][46990] Updated weights for policy 0, policy_version 7430 (0.0039) +[2024-06-10 18:51:52,980][46990] Updated weights for policy 0, policy_version 7440 (0.0047) +[2024-06-10 18:51:53,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 121896960. Throughput: 0: 43840.8. Samples: 122032520. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 18:51:53,240][46753] Avg episode reward: [(0, '0.065')] +[2024-06-10 18:51:56,141][46990] Updated weights for policy 0, policy_version 7450 (0.0029) +[2024-06-10 18:51:58,240][46753] Fps is (10 sec: 42598.0, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 122126336. Throughput: 0: 43686.6. Samples: 122289820. 
Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 18:51:58,240][46753] Avg episode reward: [(0, '0.068')] +[2024-06-10 18:52:00,401][46990] Updated weights for policy 0, policy_version 7460 (0.0027) +[2024-06-10 18:52:03,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 122355712. Throughput: 0: 43712.5. Samples: 122417440. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 18:52:03,240][46753] Avg episode reward: [(0, '0.056')] +[2024-06-10 18:52:03,559][46990] Updated weights for policy 0, policy_version 7470 (0.0037) +[2024-06-10 18:52:07,850][46990] Updated weights for policy 0, policy_version 7480 (0.0042) +[2024-06-10 18:52:08,244][46753] Fps is (10 sec: 44218.1, 60 sec: 43960.4, 300 sec: 43819.6). Total num frames: 122568704. Throughput: 0: 43774.0. Samples: 122683620. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 18:52:08,244][46753] Avg episode reward: [(0, '0.079')] +[2024-06-10 18:52:10,827][46990] Updated weights for policy 0, policy_version 7490 (0.0042) +[2024-06-10 18:52:13,239][46753] Fps is (10 sec: 40959.4, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 122765312. Throughput: 0: 43875.9. Samples: 122948280. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:52:13,240][46753] Avg episode reward: [(0, '0.082')] +[2024-06-10 18:52:13,246][46970] Saving new best policy, reward=0.082! +[2024-06-10 18:52:15,305][46990] Updated weights for policy 0, policy_version 7500 (0.0033) +[2024-06-10 18:52:18,239][46753] Fps is (10 sec: 45895.1, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 123027456. Throughput: 0: 43674.9. Samples: 123074660. Policy #0 lag: (min: 0.0, avg: 10.5, max: 24.0) +[2024-06-10 18:52:18,240][46753] Avg episode reward: [(0, '0.061')] +[2024-06-10 18:52:18,395][46990] Updated weights for policy 0, policy_version 7510 (0.0041) +[2024-06-10 18:52:23,189][46990] Updated weights for policy 0, policy_version 7520 (0.0034) +[2024-06-10 18:52:23,240][46753] Fps is (10 sec: 44236.2, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 123207680. Throughput: 0: 43619.5. Samples: 123337100. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 18:52:23,240][46753] Avg episode reward: [(0, '0.057')] +[2024-06-10 18:52:23,264][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000007520_123207680.pth... +[2024-06-10 18:52:23,339][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000006880_112721920.pth +[2024-06-10 18:52:26,122][46990] Updated weights for policy 0, policy_version 7530 (0.0028) +[2024-06-10 18:52:28,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43967.0, 300 sec: 43820.3). Total num frames: 123437056. Throughput: 0: 43479.6. Samples: 123594080. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 18:52:28,240][46753] Avg episode reward: [(0, '0.065')] +[2024-06-10 18:52:30,711][46990] Updated weights for policy 0, policy_version 7540 (0.0051) +[2024-06-10 18:52:33,239][46753] Fps is (10 sec: 45876.1, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 123666432. Throughput: 0: 43670.3. Samples: 123730720. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:52:33,240][46753] Avg episode reward: [(0, '0.060')] +[2024-06-10 18:52:33,854][46990] Updated weights for policy 0, policy_version 7550 (0.0029) +[2024-06-10 18:52:37,946][46990] Updated weights for policy 0, policy_version 7560 (0.0035) +[2024-06-10 18:52:38,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 123863040. Throughput: 0: 43661.7. Samples: 123997300. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:52:38,240][46753] Avg episode reward: [(0, '0.073')] +[2024-06-10 18:52:41,062][46990] Updated weights for policy 0, policy_version 7570 (0.0033) +[2024-06-10 18:52:43,239][46753] Fps is (10 sec: 44236.8, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 124108800. Throughput: 0: 43759.7. Samples: 124259000. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 18:52:43,240][46753] Avg episode reward: [(0, '0.054')] +[2024-06-10 18:52:45,393][46990] Updated weights for policy 0, policy_version 7580 (0.0030) +[2024-06-10 18:52:48,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43963.8, 300 sec: 43820.9). Total num frames: 124338176. Throughput: 0: 43821.3. Samples: 124389400. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 18:52:48,240][46753] Avg episode reward: [(0, '0.071')] +[2024-06-10 18:52:48,330][46990] Updated weights for policy 0, policy_version 7590 (0.0034) +[2024-06-10 18:52:53,137][46990] Updated weights for policy 0, policy_version 7600 (0.0034) +[2024-06-10 18:52:53,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 124518400. Throughput: 0: 43737.6. Samples: 124651620. Policy #0 lag: (min: 0.0, avg: 10.1, max: 23.0) +[2024-06-10 18:52:53,240][46753] Avg episode reward: [(0, '0.054')] +[2024-06-10 18:52:56,021][46990] Updated weights for policy 0, policy_version 7610 (0.0029) +[2024-06-10 18:52:58,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.9, 300 sec: 43820.3). Total num frames: 124764160. Throughput: 0: 43519.2. Samples: 124906640. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:52:58,240][46753] Avg episode reward: [(0, '0.058')] +[2024-06-10 18:53:00,730][46990] Updated weights for policy 0, policy_version 7620 (0.0035) +[2024-06-10 18:53:03,240][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 124960768. Throughput: 0: 43718.2. Samples: 125041980. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:53:03,240][46753] Avg episode reward: [(0, '0.087')] +[2024-06-10 18:53:03,246][46970] Saving new best policy, reward=0.087! +[2024-06-10 18:53:03,826][46990] Updated weights for policy 0, policy_version 7630 (0.0023) +[2024-06-10 18:53:07,947][46990] Updated weights for policy 0, policy_version 7640 (0.0037) +[2024-06-10 18:53:08,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43420.8, 300 sec: 43653.6). Total num frames: 125173760. Throughput: 0: 43796.6. Samples: 125307940. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 18:53:08,240][46753] Avg episode reward: [(0, '0.073')] +[2024-06-10 18:53:10,428][46970] Signal inference workers to stop experience collection... (1750 times) +[2024-06-10 18:53:10,428][46970] Signal inference workers to resume experience collection... 
(1750 times) +[2024-06-10 18:53:10,461][46990] InferenceWorker_p0-w0: stopping experience collection (1750 times) +[2024-06-10 18:53:10,461][46990] InferenceWorker_p0-w0: resuming experience collection (1750 times) +[2024-06-10 18:53:11,018][46990] Updated weights for policy 0, policy_version 7650 (0.0029) +[2024-06-10 18:53:13,239][46753] Fps is (10 sec: 45875.6, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 125419520. Throughput: 0: 43802.6. Samples: 125565200. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 18:53:13,240][46753] Avg episode reward: [(0, '0.064')] +[2024-06-10 18:53:15,445][46990] Updated weights for policy 0, policy_version 7660 (0.0035) +[2024-06-10 18:53:18,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 125632512. Throughput: 0: 43690.3. Samples: 125696780. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 18:53:18,240][46753] Avg episode reward: [(0, '0.082')] +[2024-06-10 18:53:18,675][46990] Updated weights for policy 0, policy_version 7670 (0.0030) +[2024-06-10 18:53:23,088][46990] Updated weights for policy 0, policy_version 7680 (0.0029) +[2024-06-10 18:53:23,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 125829120. Throughput: 0: 43640.9. Samples: 125961140. Policy #0 lag: (min: 1.0, avg: 11.8, max: 21.0) +[2024-06-10 18:53:23,250][46753] Avg episode reward: [(0, '0.072')] +[2024-06-10 18:53:26,342][46990] Updated weights for policy 0, policy_version 7690 (0.0025) +[2024-06-10 18:53:28,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 126074880. Throughput: 0: 43517.6. Samples: 126217300. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 18:53:28,240][46753] Avg episode reward: [(0, '0.075')] +[2024-06-10 18:53:30,724][46990] Updated weights for policy 0, policy_version 7700 (0.0027) +[2024-06-10 18:53:33,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 126271488. Throughput: 0: 43695.5. Samples: 126355700. Policy #0 lag: (min: 1.0, avg: 9.9, max: 21.0) +[2024-06-10 18:53:33,240][46753] Avg episode reward: [(0, '0.072')] +[2024-06-10 18:53:33,771][46990] Updated weights for policy 0, policy_version 7710 (0.0036) +[2024-06-10 18:53:37,786][46990] Updated weights for policy 0, policy_version 7720 (0.0030) +[2024-06-10 18:53:38,240][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 126500864. Throughput: 0: 43895.0. Samples: 126626900. Policy #0 lag: (min: 1.0, avg: 9.9, max: 21.0) +[2024-06-10 18:53:38,240][46753] Avg episode reward: [(0, '0.080')] +[2024-06-10 18:53:40,917][46990] Updated weights for policy 0, policy_version 7730 (0.0032) +[2024-06-10 18:53:43,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 126730240. Throughput: 0: 43888.5. Samples: 126881620. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 18:53:43,240][46753] Avg episode reward: [(0, '0.084')] +[2024-06-10 18:53:45,050][46990] Updated weights for policy 0, policy_version 7740 (0.0039) +[2024-06-10 18:53:48,240][46753] Fps is (10 sec: 45875.0, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 126959616. Throughput: 0: 44054.1. Samples: 127024420. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:53:48,240][46753] Avg episode reward: [(0, '0.071')] +[2024-06-10 18:53:48,400][46990] Updated weights for policy 0, policy_version 7750 (0.0035) +[2024-06-10 18:53:52,771][46990] Updated weights for policy 0, policy_version 7760 (0.0031) +[2024-06-10 18:53:53,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.8, 300 sec: 43765.4). Total num frames: 127156224. Throughput: 0: 43770.7. Samples: 127277620. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 18:53:53,240][46753] Avg episode reward: [(0, '0.073')] +[2024-06-10 18:53:56,335][46990] Updated weights for policy 0, policy_version 7770 (0.0031) +[2024-06-10 18:53:58,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 127385600. Throughput: 0: 43785.8. Samples: 127535560. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 18:53:58,240][46753] Avg episode reward: [(0, '0.081')] +[2024-06-10 18:54:00,288][46990] Updated weights for policy 0, policy_version 7780 (0.0028) +[2024-06-10 18:54:03,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.9, 300 sec: 43764.7). Total num frames: 127598592. Throughput: 0: 43937.8. Samples: 127673980. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 18:54:03,240][46753] Avg episode reward: [(0, '0.077')] +[2024-06-10 18:54:03,723][46990] Updated weights for policy 0, policy_version 7790 (0.0036) +[2024-06-10 18:54:07,847][46990] Updated weights for policy 0, policy_version 7800 (0.0036) +[2024-06-10 18:54:08,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 127811584. Throughput: 0: 43969.7. Samples: 127939780. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 18:54:08,240][46753] Avg episode reward: [(0, '0.079')] +[2024-06-10 18:54:11,014][46990] Updated weights for policy 0, policy_version 7810 (0.0040) +[2024-06-10 18:54:13,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 128040960. Throughput: 0: 44055.7. Samples: 128199800. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 18:54:13,240][46753] Avg episode reward: [(0, '0.082')] +[2024-06-10 18:54:15,167][46990] Updated weights for policy 0, policy_version 7820 (0.0030) +[2024-06-10 18:54:18,240][46753] Fps is (10 sec: 45874.8, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 128270336. Throughput: 0: 44017.7. Samples: 128336500. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 18:54:18,240][46753] Avg episode reward: [(0, '0.065')] +[2024-06-10 18:54:18,694][46990] Updated weights for policy 0, policy_version 7830 (0.0038) +[2024-06-10 18:54:22,759][46990] Updated weights for policy 0, policy_version 7840 (0.0049) +[2024-06-10 18:54:23,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 128466944. Throughput: 0: 43688.4. Samples: 128592880. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:54:23,240][46753] Avg episode reward: [(0, '0.074')] +[2024-06-10 18:54:23,261][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000007841_128466944.pth... +[2024-06-10 18:54:23,317][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000007201_117981184.pth +[2024-06-10 18:54:26,121][46990] Updated weights for policy 0, policy_version 7850 (0.0038) +[2024-06-10 18:54:28,240][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 128696320. Throughput: 0: 43739.9. 
Samples: 128849920. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 18:54:28,240][46753] Avg episode reward: [(0, '0.097')] +[2024-06-10 18:54:28,241][46970] Saving new best policy, reward=0.097! +[2024-06-10 18:54:30,336][46990] Updated weights for policy 0, policy_version 7860 (0.0036) +[2024-06-10 18:54:33,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 128909312. Throughput: 0: 43715.7. Samples: 128991620. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 18:54:33,240][46753] Avg episode reward: [(0, '0.070')] +[2024-06-10 18:54:33,613][46990] Updated weights for policy 0, policy_version 7870 (0.0032) +[2024-06-10 18:54:37,728][46990] Updated weights for policy 0, policy_version 7880 (0.0035) +[2024-06-10 18:54:38,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 129122304. Throughput: 0: 43911.1. Samples: 129253620. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 18:54:38,240][46753] Avg episode reward: [(0, '0.081')] +[2024-06-10 18:54:40,914][46990] Updated weights for policy 0, policy_version 7890 (0.0044) +[2024-06-10 18:54:43,240][46753] Fps is (10 sec: 44234.7, 60 sec: 43690.3, 300 sec: 43820.2). Total num frames: 129351680. Throughput: 0: 43918.1. Samples: 129511900. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:54:43,240][46753] Avg episode reward: [(0, '0.085')] +[2024-06-10 18:54:45,143][46990] Updated weights for policy 0, policy_version 7900 (0.0026) +[2024-06-10 18:54:48,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 129581056. Throughput: 0: 43862.6. Samples: 129647800. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 18:54:48,240][46753] Avg episode reward: [(0, '0.069')] +[2024-06-10 18:54:48,515][46990] Updated weights for policy 0, policy_version 7910 (0.0034) +[2024-06-10 18:54:52,643][46990] Updated weights for policy 0, policy_version 7920 (0.0042) +[2024-06-10 18:54:53,239][46753] Fps is (10 sec: 42600.7, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 129777664. Throughput: 0: 43739.2. Samples: 129908040. Policy #0 lag: (min: 0.0, avg: 12.4, max: 22.0) +[2024-06-10 18:54:53,240][46753] Avg episode reward: [(0, '0.084')] +[2024-06-10 18:54:55,943][46990] Updated weights for policy 0, policy_version 7930 (0.0029) +[2024-06-10 18:54:58,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 129990656. Throughput: 0: 43616.5. Samples: 130162540. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 18:54:58,240][46753] Avg episode reward: [(0, '0.071')] +[2024-06-10 18:54:58,248][46970] Signal inference workers to stop experience collection... (1800 times) +[2024-06-10 18:54:58,294][46970] Signal inference workers to resume experience collection... (1800 times) +[2024-06-10 18:54:58,296][46990] InferenceWorker_p0-w0: stopping experience collection (1800 times) +[2024-06-10 18:54:58,325][46990] InferenceWorker_p0-w0: resuming experience collection (1800 times) +[2024-06-10 18:55:00,338][46990] Updated weights for policy 0, policy_version 7940 (0.0037) +[2024-06-10 18:55:03,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 130236416. Throughput: 0: 43630.3. Samples: 130299860. 
Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 18:55:03,240][46753] Avg episode reward: [(0, '0.068')] +[2024-06-10 18:55:03,509][46990] Updated weights for policy 0, policy_version 7950 (0.0036) +[2024-06-10 18:55:07,828][46990] Updated weights for policy 0, policy_version 7960 (0.0047) +[2024-06-10 18:55:08,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 130433024. Throughput: 0: 43723.2. Samples: 130560420. Policy #0 lag: (min: 0.0, avg: 11.4, max: 24.0) +[2024-06-10 18:55:08,240][46753] Avg episode reward: [(0, '0.067')] +[2024-06-10 18:55:11,098][46990] Updated weights for policy 0, policy_version 7970 (0.0034) +[2024-06-10 18:55:13,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 130662400. Throughput: 0: 43862.7. Samples: 130823740. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 18:55:13,240][46753] Avg episode reward: [(0, '0.083')] +[2024-06-10 18:55:15,158][46990] Updated weights for policy 0, policy_version 7980 (0.0031) +[2024-06-10 18:55:18,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 130891776. Throughput: 0: 43737.5. Samples: 130959800. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 18:55:18,240][46753] Avg episode reward: [(0, '0.070')] +[2024-06-10 18:55:18,320][46990] Updated weights for policy 0, policy_version 7990 (0.0035) +[2024-06-10 18:55:22,663][46990] Updated weights for policy 0, policy_version 8000 (0.0038) +[2024-06-10 18:55:23,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 131072000. Throughput: 0: 43801.8. Samples: 131224700. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 18:55:23,240][46753] Avg episode reward: [(0, '0.084')] +[2024-06-10 18:55:25,742][46990] Updated weights for policy 0, policy_version 8010 (0.0057) +[2024-06-10 18:55:28,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43963.9, 300 sec: 43820.9). Total num frames: 131334144. Throughput: 0: 43733.5. Samples: 131479880. Policy #0 lag: (min: 1.0, avg: 9.8, max: 22.0) +[2024-06-10 18:55:28,240][46753] Avg episode reward: [(0, '0.080')] +[2024-06-10 18:55:29,943][46990] Updated weights for policy 0, policy_version 8020 (0.0033) +[2024-06-10 18:55:33,198][46990] Updated weights for policy 0, policy_version 8030 (0.0024) +[2024-06-10 18:55:33,240][46753] Fps is (10 sec: 49151.1, 60 sec: 44236.7, 300 sec: 43875.7). Total num frames: 131563520. Throughput: 0: 43782.0. Samples: 131618000. Policy #0 lag: (min: 0.0, avg: 8.3, max: 22.0) +[2024-06-10 18:55:33,240][46753] Avg episode reward: [(0, '0.091')] +[2024-06-10 18:55:37,587][46990] Updated weights for policy 0, policy_version 8040 (0.0037) +[2024-06-10 18:55:38,240][46753] Fps is (10 sec: 42597.0, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 131760128. Throughput: 0: 43763.3. Samples: 131877400. Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 18:55:38,240][46753] Avg episode reward: [(0, '0.077')] +[2024-06-10 18:55:40,893][46990] Updated weights for policy 0, policy_version 8050 (0.0039) +[2024-06-10 18:55:43,239][46753] Fps is (10 sec: 40961.0, 60 sec: 43691.1, 300 sec: 43764.7). Total num frames: 131973120. Throughput: 0: 43930.1. Samples: 132139400. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 18:55:43,240][46753] Avg episode reward: [(0, '0.076')] +[2024-06-10 18:55:45,287][46990] Updated weights for policy 0, policy_version 8060 (0.0030) +[2024-06-10 18:55:48,222][46990] Updated weights for policy 0, policy_version 8070 (0.0044) +[2024-06-10 18:55:48,239][46753] Fps is (10 sec: 45876.3, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 132218880. Throughput: 0: 43814.3. Samples: 132271500. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 18:55:48,240][46753] Avg episode reward: [(0, '0.089')] +[2024-06-10 18:55:52,649][46990] Updated weights for policy 0, policy_version 8080 (0.0031) +[2024-06-10 18:55:53,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 132382720. Throughput: 0: 43826.2. Samples: 132532600. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 18:55:53,240][46753] Avg episode reward: [(0, '0.079')] +[2024-06-10 18:55:55,882][46990] Updated weights for policy 0, policy_version 8090 (0.0037) +[2024-06-10 18:55:58,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 132628480. Throughput: 0: 43609.4. Samples: 132786160. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 18:55:58,240][46753] Avg episode reward: [(0, '0.077')] +[2024-06-10 18:56:00,019][46990] Updated weights for policy 0, policy_version 8100 (0.0043) +[2024-06-10 18:56:03,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 132857856. Throughput: 0: 43647.5. Samples: 132923940. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 18:56:03,240][46753] Avg episode reward: [(0, '0.084')] +[2024-06-10 18:56:03,311][46990] Updated weights for policy 0, policy_version 8110 (0.0036) +[2024-06-10 18:56:07,766][46990] Updated weights for policy 0, policy_version 8120 (0.0037) +[2024-06-10 18:56:08,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 133054464. Throughput: 0: 43559.6. Samples: 133184880. Policy #0 lag: (min: 1.0, avg: 10.2, max: 21.0) +[2024-06-10 18:56:08,240][46753] Avg episode reward: [(0, '0.084')] +[2024-06-10 18:56:10,842][46990] Updated weights for policy 0, policy_version 8130 (0.0038) +[2024-06-10 18:56:13,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 133283840. Throughput: 0: 43587.0. Samples: 133441300. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 18:56:13,240][46753] Avg episode reward: [(0, '0.089')] +[2024-06-10 18:56:15,561][46990] Updated weights for policy 0, policy_version 8140 (0.0045) +[2024-06-10 18:56:18,195][46990] Updated weights for policy 0, policy_version 8150 (0.0028) +[2024-06-10 18:56:18,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43963.6, 300 sec: 43931.3). Total num frames: 133529600. Throughput: 0: 43628.5. Samples: 133581280. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 18:56:18,240][46753] Avg episode reward: [(0, '0.094')] +[2024-06-10 18:56:23,122][46990] Updated weights for policy 0, policy_version 8160 (0.0039) +[2024-06-10 18:56:23,240][46753] Fps is (10 sec: 40959.3, 60 sec: 43690.6, 300 sec: 43709.8). Total num frames: 133693440. Throughput: 0: 43479.2. Samples: 133833960. Policy #0 lag: (min: 1.0, avg: 12.2, max: 21.0) +[2024-06-10 18:56:23,240][46753] Avg episode reward: [(0, '0.085')] +[2024-06-10 18:56:23,249][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000008160_133693440.pth... 
+[2024-06-10 18:56:23,307][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000007520_123207680.pth +[2024-06-10 18:56:25,926][46990] Updated weights for policy 0, policy_version 8170 (0.0037) +[2024-06-10 18:56:28,241][46753] Fps is (10 sec: 40952.1, 60 sec: 43416.0, 300 sec: 43708.9). Total num frames: 133939200. Throughput: 0: 43262.4. Samples: 134086300. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 18:56:28,242][46753] Avg episode reward: [(0, '0.086')] +[2024-06-10 18:56:30,520][46990] Updated weights for policy 0, policy_version 8180 (0.0030) +[2024-06-10 18:56:32,898][46970] Signal inference workers to stop experience collection... (1850 times) +[2024-06-10 18:56:32,950][46990] InferenceWorker_p0-w0: stopping experience collection (1850 times) +[2024-06-10 18:56:32,956][46970] Signal inference workers to resume experience collection... (1850 times) +[2024-06-10 18:56:32,963][46990] InferenceWorker_p0-w0: resuming experience collection (1850 times) +[2024-06-10 18:56:33,239][46753] Fps is (10 sec: 47514.8, 60 sec: 43417.8, 300 sec: 43820.3). Total num frames: 134168576. Throughput: 0: 43553.4. Samples: 134231400. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 18:56:33,240][46753] Avg episode reward: [(0, '0.091')] +[2024-06-10 18:56:33,336][46990] Updated weights for policy 0, policy_version 8190 (0.0043) +[2024-06-10 18:56:37,689][46990] Updated weights for policy 0, policy_version 8200 (0.0034) +[2024-06-10 18:56:38,239][46753] Fps is (10 sec: 42606.9, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 134365184. Throughput: 0: 43625.8. Samples: 134495760. Policy #0 lag: (min: 0.0, avg: 10.9, max: 20.0) +[2024-06-10 18:56:38,244][46753] Avg episode reward: [(0, '0.098')] +[2024-06-10 18:56:41,045][46990] Updated weights for policy 0, policy_version 8210 (0.0041) +[2024-06-10 18:56:43,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 134594560. Throughput: 0: 43714.7. Samples: 134753320. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 18:56:43,240][46753] Avg episode reward: [(0, '0.089')] +[2024-06-10 18:56:45,772][46990] Updated weights for policy 0, policy_version 8220 (0.0034) +[2024-06-10 18:56:48,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 134823936. Throughput: 0: 43651.1. Samples: 134888240. Policy #0 lag: (min: 1.0, avg: 10.9, max: 22.0) +[2024-06-10 18:56:48,240][46753] Avg episode reward: [(0, '0.095')] +[2024-06-10 18:56:48,450][46990] Updated weights for policy 0, policy_version 8230 (0.0030) +[2024-06-10 18:56:52,926][46990] Updated weights for policy 0, policy_version 8240 (0.0033) +[2024-06-10 18:56:53,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 135004160. Throughput: 0: 43508.0. Samples: 135142740. Policy #0 lag: (min: 1.0, avg: 10.9, max: 22.0) +[2024-06-10 18:56:53,240][46753] Avg episode reward: [(0, '0.090')] +[2024-06-10 18:56:55,840][46990] Updated weights for policy 0, policy_version 8250 (0.0036) +[2024-06-10 18:56:58,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 135249920. Throughput: 0: 43545.3. Samples: 135400840. 
Policy #0 lag: (min: 0.0, avg: 11.6, max: 24.0) +[2024-06-10 18:56:58,241][46753] Avg episode reward: [(0, '0.091')] +[2024-06-10 18:57:00,154][46990] Updated weights for policy 0, policy_version 8260 (0.0028) +[2024-06-10 18:57:03,244][46753] Fps is (10 sec: 47493.9, 60 sec: 43687.6, 300 sec: 43764.8). Total num frames: 135479296. Throughput: 0: 43540.1. Samples: 135540760. Policy #0 lag: (min: 0.0, avg: 9.1, max: 19.0) +[2024-06-10 18:57:03,244][46753] Avg episode reward: [(0, '0.093')] +[2024-06-10 18:57:03,343][46990] Updated weights for policy 0, policy_version 8270 (0.0031) +[2024-06-10 18:57:08,052][46990] Updated weights for policy 0, policy_version 8280 (0.0033) +[2024-06-10 18:57:08,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 135659520. Throughput: 0: 43677.5. Samples: 135799440. Policy #0 lag: (min: 0.0, avg: 9.1, max: 19.0) +[2024-06-10 18:57:08,240][46753] Avg episode reward: [(0, '0.096')] +[2024-06-10 18:57:11,026][46990] Updated weights for policy 0, policy_version 8290 (0.0024) +[2024-06-10 18:57:13,239][46753] Fps is (10 sec: 42616.4, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 135905280. Throughput: 0: 43804.7. Samples: 136057420. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 18:57:13,240][46753] Avg episode reward: [(0, '0.097')] +[2024-06-10 18:57:15,519][46990] Updated weights for policy 0, policy_version 8300 (0.0028) +[2024-06-10 18:57:18,239][46753] Fps is (10 sec: 47513.5, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 136134656. Throughput: 0: 43700.3. Samples: 136197920. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 18:57:18,240][46753] Avg episode reward: [(0, '0.099')] +[2024-06-10 18:57:18,240][46970] Saving new best policy, reward=0.099! +[2024-06-10 18:57:18,423][46990] Updated weights for policy 0, policy_version 8310 (0.0026) +[2024-06-10 18:57:22,886][46990] Updated weights for policy 0, policy_version 8320 (0.0041) +[2024-06-10 18:57:23,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 136314880. Throughput: 0: 43564.4. Samples: 136456160. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 18:57:23,240][46753] Avg episode reward: [(0, '0.076')] +[2024-06-10 18:57:26,138][46990] Updated weights for policy 0, policy_version 8330 (0.0039) +[2024-06-10 18:57:28,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43692.2, 300 sec: 43709.2). Total num frames: 136560640. Throughput: 0: 43563.6. Samples: 136713680. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 18:57:28,240][46753] Avg episode reward: [(0, '0.084')] +[2024-06-10 18:57:30,017][46990] Updated weights for policy 0, policy_version 8340 (0.0028) +[2024-06-10 18:57:33,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 136773632. Throughput: 0: 43811.1. Samples: 136859740. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 18:57:33,240][46753] Avg episode reward: [(0, '0.095')] +[2024-06-10 18:57:33,256][46970] Signal inference workers to stop experience collection... (1900 times) +[2024-06-10 18:57:33,257][46970] Signal inference workers to resume experience collection... 
(1900 times) +[2024-06-10 18:57:33,286][46990] InferenceWorker_p0-w0: stopping experience collection (1900 times) +[2024-06-10 18:57:33,286][46990] InferenceWorker_p0-w0: resuming experience collection (1900 times) +[2024-06-10 18:57:33,395][46990] Updated weights for policy 0, policy_version 8350 (0.0040) +[2024-06-10 18:57:37,307][46990] Updated weights for policy 0, policy_version 8360 (0.0053) +[2024-06-10 18:57:38,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 136986624. Throughput: 0: 43927.6. Samples: 137119480. Policy #0 lag: (min: 0.0, avg: 9.2, max: 20.0) +[2024-06-10 18:57:38,240][46753] Avg episode reward: [(0, '0.096')] +[2024-06-10 18:57:41,070][46990] Updated weights for policy 0, policy_version 8370 (0.0036) +[2024-06-10 18:57:43,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 137216000. Throughput: 0: 43851.2. Samples: 137374140. Policy #0 lag: (min: 0.0, avg: 9.2, max: 20.0) +[2024-06-10 18:57:43,240][46753] Avg episode reward: [(0, '0.075')] +[2024-06-10 18:57:45,066][46990] Updated weights for policy 0, policy_version 8380 (0.0035) +[2024-06-10 18:57:48,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 137428992. Throughput: 0: 43841.2. Samples: 137513440. Policy #0 lag: (min: 0.0, avg: 7.3, max: 20.0) +[2024-06-10 18:57:48,240][46753] Avg episode reward: [(0, '0.097')] +[2024-06-10 18:57:48,541][46990] Updated weights for policy 0, policy_version 8390 (0.0031) +[2024-06-10 18:57:52,714][46990] Updated weights for policy 0, policy_version 8400 (0.0033) +[2024-06-10 18:57:53,240][46753] Fps is (10 sec: 40959.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 137625600. Throughput: 0: 43609.7. Samples: 137761880. Policy #0 lag: (min: 0.0, avg: 7.3, max: 20.0) +[2024-06-10 18:57:53,240][46753] Avg episode reward: [(0, '0.090')] +[2024-06-10 18:57:56,196][46990] Updated weights for policy 0, policy_version 8410 (0.0042) +[2024-06-10 18:57:58,240][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 137871360. Throughput: 0: 43675.4. Samples: 138022820. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 18:57:58,240][46753] Avg episode reward: [(0, '0.094')] +[2024-06-10 18:57:59,986][46990] Updated weights for policy 0, policy_version 8420 (0.0034) +[2024-06-10 18:58:03,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43147.6, 300 sec: 43709.2). Total num frames: 138067968. Throughput: 0: 43585.4. Samples: 138159260. Policy #0 lag: (min: 0.0, avg: 9.0, max: 20.0) +[2024-06-10 18:58:03,240][46753] Avg episode reward: [(0, '0.095')] +[2024-06-10 18:58:03,672][46990] Updated weights for policy 0, policy_version 8430 (0.0024) +[2024-06-10 18:58:07,380][46990] Updated weights for policy 0, policy_version 8440 (0.0036) +[2024-06-10 18:58:08,240][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 138280960. Throughput: 0: 43645.8. Samples: 138420220. Policy #0 lag: (min: 0.0, avg: 9.0, max: 20.0) +[2024-06-10 18:58:08,240][46753] Avg episode reward: [(0, '0.085')] +[2024-06-10 18:58:11,217][46990] Updated weights for policy 0, policy_version 8450 (0.0042) +[2024-06-10 18:58:13,240][46753] Fps is (10 sec: 45874.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 138526720. Throughput: 0: 43689.2. Samples: 138679700. 
Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 18:58:13,240][46753] Avg episode reward: [(0, '0.088')] +[2024-06-10 18:58:14,825][46990] Updated weights for policy 0, policy_version 8460 (0.0047) +[2024-06-10 18:58:18,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43144.5, 300 sec: 43709.2). Total num frames: 138723328. Throughput: 0: 43525.3. Samples: 138818380. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 18:58:18,240][46753] Avg episode reward: [(0, '0.089')] +[2024-06-10 18:58:18,657][46990] Updated weights for policy 0, policy_version 8470 (0.0034) +[2024-06-10 18:58:22,633][46990] Updated weights for policy 0, policy_version 8480 (0.0044) +[2024-06-10 18:58:23,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 138952704. Throughput: 0: 43474.6. Samples: 139075840. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 18:58:23,240][46753] Avg episode reward: [(0, '0.097')] +[2024-06-10 18:58:23,263][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000008481_138952704.pth... +[2024-06-10 18:58:23,329][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000007841_128466944.pth +[2024-06-10 18:58:26,171][46990] Updated weights for policy 0, policy_version 8490 (0.0038) +[2024-06-10 18:58:28,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 139182080. Throughput: 0: 43531.0. Samples: 139333040. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 18:58:28,240][46753] Avg episode reward: [(0, '0.094')] +[2024-06-10 18:58:30,238][46990] Updated weights for policy 0, policy_version 8500 (0.0021) +[2024-06-10 18:58:33,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 139395072. Throughput: 0: 43536.7. Samples: 139472580. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:58:33,240][46753] Avg episode reward: [(0, '0.101')] +[2024-06-10 18:58:33,368][46970] Saving new best policy, reward=0.101! +[2024-06-10 18:58:33,639][46990] Updated weights for policy 0, policy_version 8510 (0.0032) +[2024-06-10 18:58:37,545][46990] Updated weights for policy 0, policy_version 8520 (0.0026) +[2024-06-10 18:58:38,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 139591680. Throughput: 0: 43795.7. Samples: 139732680. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 18:58:38,240][46753] Avg episode reward: [(0, '0.110')] +[2024-06-10 18:58:38,243][46970] Saving new best policy, reward=0.110! +[2024-06-10 18:58:41,326][46990] Updated weights for policy 0, policy_version 8530 (0.0028) +[2024-06-10 18:58:43,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 139837440. Throughput: 0: 43742.4. Samples: 139991220. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 18:58:43,240][46753] Avg episode reward: [(0, '0.094')] +[2024-06-10 18:58:45,201][46990] Updated weights for policy 0, policy_version 8540 (0.0041) +[2024-06-10 18:58:48,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43417.7, 300 sec: 43653.6). Total num frames: 140034048. Throughput: 0: 43688.8. Samples: 140125260. 
Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 18:58:48,240][46753] Avg episode reward: [(0, '0.105')] +[2024-06-10 18:58:48,719][46990] Updated weights for policy 0, policy_version 8550 (0.0033) +[2024-06-10 18:58:52,528][46990] Updated weights for policy 0, policy_version 8560 (0.0037) +[2024-06-10 18:58:53,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 140263424. Throughput: 0: 43796.8. Samples: 140391080. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 18:58:53,240][46753] Avg episode reward: [(0, '0.106')] +[2024-06-10 18:58:56,121][46990] Updated weights for policy 0, policy_version 8570 (0.0045) +[2024-06-10 18:58:56,631][46970] Signal inference workers to stop experience collection... (1950 times) +[2024-06-10 18:58:56,632][46970] Signal inference workers to resume experience collection... (1950 times) +[2024-06-10 18:58:56,655][46990] InferenceWorker_p0-w0: stopping experience collection (1950 times) +[2024-06-10 18:58:56,655][46990] InferenceWorker_p0-w0: resuming experience collection (1950 times) +[2024-06-10 18:58:58,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 140492800. Throughput: 0: 43839.2. Samples: 140652460. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 18:58:58,243][46753] Avg episode reward: [(0, '0.095')] +[2024-06-10 18:59:00,070][46990] Updated weights for policy 0, policy_version 8580 (0.0038) +[2024-06-10 18:59:03,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 140705792. Throughput: 0: 43726.7. Samples: 140786080. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 18:59:03,240][46753] Avg episode reward: [(0, '0.097')] +[2024-06-10 18:59:03,757][46990] Updated weights for policy 0, policy_version 8590 (0.0030) +[2024-06-10 18:59:07,642][46990] Updated weights for policy 0, policy_version 8600 (0.0031) +[2024-06-10 18:59:08,239][46753] Fps is (10 sec: 44236.7, 60 sec: 44236.8, 300 sec: 43709.2). Total num frames: 140935168. Throughput: 0: 43826.6. Samples: 141048040. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:59:08,240][46753] Avg episode reward: [(0, '0.094')] +[2024-06-10 18:59:11,257][46990] Updated weights for policy 0, policy_version 8610 (0.0041) +[2024-06-10 18:59:13,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 141148160. Throughput: 0: 43864.0. Samples: 141306920. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 18:59:13,240][46753] Avg episode reward: [(0, '0.089')] +[2024-06-10 18:59:15,110][46990] Updated weights for policy 0, policy_version 8620 (0.0032) +[2024-06-10 18:59:18,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 141328384. Throughput: 0: 43609.6. Samples: 141435020. Policy #0 lag: (min: 0.0, avg: 11.0, max: 23.0) +[2024-06-10 18:59:18,240][46753] Avg episode reward: [(0, '0.088')] +[2024-06-10 18:59:18,863][46990] Updated weights for policy 0, policy_version 8630 (0.0042) +[2024-06-10 18:59:22,687][46990] Updated weights for policy 0, policy_version 8640 (0.0038) +[2024-06-10 18:59:23,244][46753] Fps is (10 sec: 42579.6, 60 sec: 43687.4, 300 sec: 43653.0). Total num frames: 141574144. Throughput: 0: 43514.3. Samples: 141691020. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:59:23,244][46753] Avg episode reward: [(0, '0.093')] +[2024-06-10 18:59:26,597][46990] Updated weights for policy 0, policy_version 8650 (0.0037) +[2024-06-10 18:59:28,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 141787136. Throughput: 0: 43669.3. Samples: 141956340. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 18:59:28,240][46753] Avg episode reward: [(0, '0.104')] +[2024-06-10 18:59:29,909][46990] Updated weights for policy 0, policy_version 8660 (0.0031) +[2024-06-10 18:59:33,239][46753] Fps is (10 sec: 42617.8, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 142000128. Throughput: 0: 43618.7. Samples: 142088100. Policy #0 lag: (min: 0.0, avg: 10.1, max: 23.0) +[2024-06-10 18:59:33,240][46753] Avg episode reward: [(0, '0.113')] +[2024-06-10 18:59:33,352][46970] Saving new best policy, reward=0.113! +[2024-06-10 18:59:33,982][46990] Updated weights for policy 0, policy_version 8670 (0.0042) +[2024-06-10 18:59:37,911][46990] Updated weights for policy 0, policy_version 8680 (0.0035) +[2024-06-10 18:59:38,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.6, 300 sec: 43653.7). Total num frames: 142229504. Throughput: 0: 43404.1. Samples: 142344260. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 18:59:38,240][46753] Avg episode reward: [(0, '0.084')] +[2024-06-10 18:59:41,391][46990] Updated weights for policy 0, policy_version 8690 (0.0033) +[2024-06-10 18:59:43,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 142442496. Throughput: 0: 43260.1. Samples: 142599160. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 18:59:43,240][46753] Avg episode reward: [(0, '0.100')] +[2024-06-10 18:59:45,220][46990] Updated weights for policy 0, policy_version 8700 (0.0030) +[2024-06-10 18:59:48,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 142639104. Throughput: 0: 43207.5. Samples: 142730420. Policy #0 lag: (min: 0.0, avg: 8.3, max: 19.0) +[2024-06-10 18:59:48,240][46753] Avg episode reward: [(0, '0.101')] +[2024-06-10 18:59:49,263][46990] Updated weights for policy 0, policy_version 8710 (0.0045) +[2024-06-10 18:59:52,783][46990] Updated weights for policy 0, policy_version 8720 (0.0035) +[2024-06-10 18:59:53,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.8, 300 sec: 43653.6). Total num frames: 142868480. Throughput: 0: 43247.3. Samples: 142994160. Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 18:59:53,240][46753] Avg episode reward: [(0, '0.103')] +[2024-06-10 18:59:56,721][46990] Updated weights for policy 0, policy_version 8730 (0.0038) +[2024-06-10 18:59:58,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 143114240. Throughput: 0: 43272.9. Samples: 143254200. Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 18:59:58,240][46753] Avg episode reward: [(0, '0.104')] +[2024-06-10 19:00:00,095][46990] Updated weights for policy 0, policy_version 8740 (0.0039) +[2024-06-10 19:00:03,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43144.5, 300 sec: 43598.1). Total num frames: 143294464. Throughput: 0: 43433.8. Samples: 143389540. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 19:00:03,240][46753] Avg episode reward: [(0, '0.089')] +[2024-06-10 19:00:04,020][46990] Updated weights for policy 0, policy_version 8750 (0.0029) +[2024-06-10 19:00:07,573][46990] Updated weights for policy 0, policy_version 8760 (0.0031) +[2024-06-10 19:00:08,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 143540224. Throughput: 0: 43745.7. Samples: 143659380. Policy #0 lag: (min: 0.0, avg: 10.4, max: 20.0) +[2024-06-10 19:00:08,240][46753] Avg episode reward: [(0, '0.107')] +[2024-06-10 19:00:11,351][46990] Updated weights for policy 0, policy_version 8770 (0.0033) +[2024-06-10 19:00:13,240][46753] Fps is (10 sec: 47513.1, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 143769600. Throughput: 0: 43470.1. Samples: 143912500. Policy #0 lag: (min: 0.0, avg: 10.4, max: 20.0) +[2024-06-10 19:00:13,240][46753] Avg episode reward: [(0, '0.111')] +[2024-06-10 19:00:15,223][46990] Updated weights for policy 0, policy_version 8780 (0.0030) +[2024-06-10 19:00:18,240][46753] Fps is (10 sec: 42596.1, 60 sec: 43963.4, 300 sec: 43709.1). Total num frames: 143966208. Throughput: 0: 43487.0. Samples: 144045040. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 19:00:18,240][46753] Avg episode reward: [(0, '0.106')] +[2024-06-10 19:00:19,148][46990] Updated weights for policy 0, policy_version 8790 (0.0025) +[2024-06-10 19:00:22,767][46990] Updated weights for policy 0, policy_version 8800 (0.0037) +[2024-06-10 19:00:23,240][46753] Fps is (10 sec: 42598.7, 60 sec: 43693.9, 300 sec: 43598.1). Total num frames: 144195584. Throughput: 0: 43709.3. Samples: 144311180. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 19:00:23,240][46753] Avg episode reward: [(0, '0.116')] +[2024-06-10 19:00:23,384][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000008802_144211968.pth... +[2024-06-10 19:00:23,445][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000008160_133693440.pth +[2024-06-10 19:00:23,449][46970] Saving new best policy, reward=0.116! +[2024-06-10 19:00:26,417][46990] Updated weights for policy 0, policy_version 8810 (0.0043) +[2024-06-10 19:00:28,239][46753] Fps is (10 sec: 45878.0, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 144424960. Throughput: 0: 43801.8. Samples: 144570240. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 19:00:28,240][46753] Avg episode reward: [(0, '0.111')] +[2024-06-10 19:00:30,266][46990] Updated weights for policy 0, policy_version 8820 (0.0045) +[2024-06-10 19:00:32,005][46970] Signal inference workers to stop experience collection... (2000 times) +[2024-06-10 19:00:32,005][46970] Signal inference workers to resume experience collection... (2000 times) +[2024-06-10 19:00:32,056][46990] InferenceWorker_p0-w0: stopping experience collection (2000 times) +[2024-06-10 19:00:32,056][46990] InferenceWorker_p0-w0: resuming experience collection (2000 times) +[2024-06-10 19:00:33,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 144621568. Throughput: 0: 43715.9. Samples: 144697640. 
Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 19:00:33,240][46753] Avg episode reward: [(0, '0.115')] +[2024-06-10 19:00:33,911][46990] Updated weights for policy 0, policy_version 8830 (0.0031) +[2024-06-10 19:00:37,759][46990] Updated weights for policy 0, policy_version 8840 (0.0040) +[2024-06-10 19:00:38,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 144834560. Throughput: 0: 43775.1. Samples: 144964040. Policy #0 lag: (min: 1.0, avg: 12.0, max: 23.0) +[2024-06-10 19:00:38,240][46753] Avg episode reward: [(0, '0.109')] +[2024-06-10 19:00:41,805][46990] Updated weights for policy 0, policy_version 8850 (0.0038) +[2024-06-10 19:00:43,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 145063936. Throughput: 0: 43679.1. Samples: 145219760. Policy #0 lag: (min: 1.0, avg: 12.0, max: 23.0) +[2024-06-10 19:00:43,240][46753] Avg episode reward: [(0, '0.094')] +[2024-06-10 19:00:45,251][46990] Updated weights for policy 0, policy_version 8860 (0.0038) +[2024-06-10 19:00:48,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 145276928. Throughput: 0: 43636.6. Samples: 145353180. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:00:48,240][46753] Avg episode reward: [(0, '0.105')] +[2024-06-10 19:00:49,178][46990] Updated weights for policy 0, policy_version 8870 (0.0035) +[2024-06-10 19:00:52,907][46990] Updated weights for policy 0, policy_version 8880 (0.0037) +[2024-06-10 19:00:53,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 145489920. Throughput: 0: 43422.6. Samples: 145613400. Policy #0 lag: (min: 0.0, avg: 10.9, max: 20.0) +[2024-06-10 19:00:53,240][46753] Avg episode reward: [(0, '0.097')] +[2024-06-10 19:00:56,773][46990] Updated weights for policy 0, policy_version 8890 (0.0030) +[2024-06-10 19:00:58,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 145735680. Throughput: 0: 43642.9. Samples: 145876420. Policy #0 lag: (min: 0.0, avg: 10.9, max: 20.0) +[2024-06-10 19:00:58,240][46753] Avg episode reward: [(0, '0.102')] +[2024-06-10 19:01:00,360][46990] Updated weights for policy 0, policy_version 8900 (0.0023) +[2024-06-10 19:01:03,240][46753] Fps is (10 sec: 45875.1, 60 sec: 44236.8, 300 sec: 43709.2). Total num frames: 145948672. Throughput: 0: 43731.1. Samples: 146012920. Policy #0 lag: (min: 0.0, avg: 12.4, max: 22.0) +[2024-06-10 19:01:03,244][46753] Avg episode reward: [(0, '0.094')] +[2024-06-10 19:01:03,930][46990] Updated weights for policy 0, policy_version 8910 (0.0029) +[2024-06-10 19:01:07,848][46990] Updated weights for policy 0, policy_version 8920 (0.0045) +[2024-06-10 19:01:08,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 146161664. Throughput: 0: 43628.4. Samples: 146274460. Policy #0 lag: (min: 1.0, avg: 11.1, max: 22.0) +[2024-06-10 19:01:08,240][46753] Avg episode reward: [(0, '0.090')] +[2024-06-10 19:01:11,719][46990] Updated weights for policy 0, policy_version 8930 (0.0025) +[2024-06-10 19:01:13,240][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 146391040. Throughput: 0: 43608.7. Samples: 146532640. 
Policy #0 lag: (min: 1.0, avg: 11.6, max: 23.0) +[2024-06-10 19:01:13,240][46753] Avg episode reward: [(0, '0.100')] +[2024-06-10 19:01:15,233][46990] Updated weights for policy 0, policy_version 8940 (0.0048) +[2024-06-10 19:01:18,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43691.0, 300 sec: 43709.2). Total num frames: 146587648. Throughput: 0: 43883.3. Samples: 146672380. Policy #0 lag: (min: 1.0, avg: 11.6, max: 23.0) +[2024-06-10 19:01:18,240][46753] Avg episode reward: [(0, '0.112')] +[2024-06-10 19:01:19,285][46990] Updated weights for policy 0, policy_version 8950 (0.0032) +[2024-06-10 19:01:23,037][46990] Updated weights for policy 0, policy_version 8960 (0.0040) +[2024-06-10 19:01:23,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.7, 300 sec: 43598.4). Total num frames: 146800640. Throughput: 0: 43706.2. Samples: 146930820. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 19:01:23,240][46753] Avg episode reward: [(0, '0.104')] +[2024-06-10 19:01:26,574][46990] Updated weights for policy 0, policy_version 8970 (0.0029) +[2024-06-10 19:01:28,240][46753] Fps is (10 sec: 47513.1, 60 sec: 43963.6, 300 sec: 43709.1). Total num frames: 147062784. Throughput: 0: 43747.1. Samples: 147188380. Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 19:01:28,240][46753] Avg episode reward: [(0, '0.111')] +[2024-06-10 19:01:30,498][46990] Updated weights for policy 0, policy_version 8980 (0.0034) +[2024-06-10 19:01:33,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43963.9, 300 sec: 43709.2). Total num frames: 147259392. Throughput: 0: 43923.5. Samples: 147329740. Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 19:01:33,240][46753] Avg episode reward: [(0, '0.106')] +[2024-06-10 19:01:34,228][46990] Updated weights for policy 0, policy_version 8990 (0.0039) +[2024-06-10 19:01:37,680][46990] Updated weights for policy 0, policy_version 9000 (0.0035) +[2024-06-10 19:01:38,240][46753] Fps is (10 sec: 39321.6, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 147456000. Throughput: 0: 43802.2. Samples: 147584500. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:01:38,240][46753] Avg episode reward: [(0, '0.114')] +[2024-06-10 19:01:41,449][46990] Updated weights for policy 0, policy_version 9010 (0.0043) +[2024-06-10 19:01:43,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 147701760. Throughput: 0: 43749.7. Samples: 147845160. Policy #0 lag: (min: 1.0, avg: 10.0, max: 20.0) +[2024-06-10 19:01:43,240][46753] Avg episode reward: [(0, '0.131')] +[2024-06-10 19:01:43,261][46970] Saving new best policy, reward=0.131! +[2024-06-10 19:01:45,359][46990] Updated weights for policy 0, policy_version 9020 (0.0041) +[2024-06-10 19:01:48,244][46753] Fps is (10 sec: 45855.2, 60 sec: 43960.4, 300 sec: 43764.1). Total num frames: 147914752. Throughput: 0: 43806.0. Samples: 147984380. Policy #0 lag: (min: 1.0, avg: 10.0, max: 20.0) +[2024-06-10 19:01:48,244][46753] Avg episode reward: [(0, '0.121')] +[2024-06-10 19:01:49,094][46990] Updated weights for policy 0, policy_version 9030 (0.0032) +[2024-06-10 19:01:53,011][46990] Updated weights for policy 0, policy_version 9040 (0.0031) +[2024-06-10 19:01:53,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 148111360. Throughput: 0: 43647.2. Samples: 148238580. 
Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 19:01:53,240][46753] Avg episode reward: [(0, '0.106')] +[2024-06-10 19:01:54,767][46970] Signal inference workers to stop experience collection... (2050 times) +[2024-06-10 19:01:54,767][46970] Signal inference workers to resume experience collection... (2050 times) +[2024-06-10 19:01:54,789][46990] InferenceWorker_p0-w0: stopping experience collection (2050 times) +[2024-06-10 19:01:54,789][46990] InferenceWorker_p0-w0: resuming experience collection (2050 times) +[2024-06-10 19:01:56,397][46990] Updated weights for policy 0, policy_version 9050 (0.0033) +[2024-06-10 19:01:58,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43690.7, 300 sec: 43654.3). Total num frames: 148357120. Throughput: 0: 43741.1. Samples: 148500980. Policy #0 lag: (min: 1.0, avg: 10.8, max: 21.0) +[2024-06-10 19:01:58,240][46753] Avg episode reward: [(0, '0.081')] +[2024-06-10 19:02:00,629][46990] Updated weights for policy 0, policy_version 9060 (0.0036) +[2024-06-10 19:02:03,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 148570112. Throughput: 0: 43826.6. Samples: 148644580. Policy #0 lag: (min: 1.0, avg: 10.8, max: 21.0) +[2024-06-10 19:02:03,240][46753] Avg episode reward: [(0, '0.101')] +[2024-06-10 19:02:04,021][46990] Updated weights for policy 0, policy_version 9070 (0.0042) +[2024-06-10 19:02:07,890][46990] Updated weights for policy 0, policy_version 9080 (0.0043) +[2024-06-10 19:02:08,239][46753] Fps is (10 sec: 40959.6, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 148766720. Throughput: 0: 43850.6. Samples: 148904100. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:02:08,240][46753] Avg episode reward: [(0, '0.109')] +[2024-06-10 19:02:11,396][46990] Updated weights for policy 0, policy_version 9090 (0.0046) +[2024-06-10 19:02:13,242][46753] Fps is (10 sec: 44223.6, 60 sec: 43688.5, 300 sec: 43653.2). Total num frames: 149012480. Throughput: 0: 43879.4. Samples: 149163080. Policy #0 lag: (min: 1.0, avg: 10.2, max: 20.0) +[2024-06-10 19:02:13,243][46753] Avg episode reward: [(0, '0.115')] +[2024-06-10 19:02:15,357][46990] Updated weights for policy 0, policy_version 9100 (0.0041) +[2024-06-10 19:02:18,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 149225472. Throughput: 0: 43647.4. Samples: 149293880. Policy #0 lag: (min: 1.0, avg: 10.2, max: 20.0) +[2024-06-10 19:02:18,240][46753] Avg episode reward: [(0, '0.125')] +[2024-06-10 19:02:19,101][46990] Updated weights for policy 0, policy_version 9110 (0.0039) +[2024-06-10 19:02:22,915][46990] Updated weights for policy 0, policy_version 9120 (0.0025) +[2024-06-10 19:02:23,239][46753] Fps is (10 sec: 40972.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 149422080. Throughput: 0: 43765.8. Samples: 149553960. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 19:02:23,240][46753] Avg episode reward: [(0, '0.101')] +[2024-06-10 19:02:23,277][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000009120_149422080.pth... +[2024-06-10 19:02:23,338][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000008481_138952704.pth +[2024-06-10 19:02:26,353][46990] Updated weights for policy 0, policy_version 9130 (0.0028) +[2024-06-10 19:02:28,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 149667840. Throughput: 0: 43705.0. Samples: 149811880. 
Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 19:02:28,240][46753] Avg episode reward: [(0, '0.097')] +[2024-06-10 19:02:30,048][46990] Updated weights for policy 0, policy_version 9140 (0.0045) +[2024-06-10 19:02:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 149864448. Throughput: 0: 43787.5. Samples: 149954620. Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 19:02:33,240][46753] Avg episode reward: [(0, '0.104')] +[2024-06-10 19:02:33,638][46990] Updated weights for policy 0, policy_version 9150 (0.0043) +[2024-06-10 19:02:37,648][46990] Updated weights for policy 0, policy_version 9160 (0.0043) +[2024-06-10 19:02:38,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 150093824. Throughput: 0: 43933.0. Samples: 150215560. Policy #0 lag: (min: 0.0, avg: 10.1, max: 19.0) +[2024-06-10 19:02:38,240][46753] Avg episode reward: [(0, '0.116')] +[2024-06-10 19:02:41,330][46990] Updated weights for policy 0, policy_version 9170 (0.0034) +[2024-06-10 19:02:43,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 150323200. Throughput: 0: 43860.0. Samples: 150474680. Policy #0 lag: (min: 1.0, avg: 10.5, max: 22.0) +[2024-06-10 19:02:43,240][46753] Avg episode reward: [(0, '0.117')] +[2024-06-10 19:02:45,160][46990] Updated weights for policy 0, policy_version 9180 (0.0047) +[2024-06-10 19:02:48,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43420.8, 300 sec: 43709.2). Total num frames: 150519808. Throughput: 0: 43580.5. Samples: 150605700. Policy #0 lag: (min: 1.0, avg: 10.5, max: 22.0) +[2024-06-10 19:02:48,240][46753] Avg episode reward: [(0, '0.122')] +[2024-06-10 19:02:48,869][46990] Updated weights for policy 0, policy_version 9190 (0.0031) +[2024-06-10 19:02:52,605][46990] Updated weights for policy 0, policy_version 9200 (0.0047) +[2024-06-10 19:02:53,239][46753] Fps is (10 sec: 40959.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 150732800. Throughput: 0: 43626.2. Samples: 150867280. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 19:02:53,240][46753] Avg episode reward: [(0, '0.099')] +[2024-06-10 19:02:56,378][46990] Updated weights for policy 0, policy_version 9210 (0.0029) +[2024-06-10 19:02:58,242][46753] Fps is (10 sec: 47502.6, 60 sec: 43962.0, 300 sec: 43819.9). Total num frames: 150994944. Throughput: 0: 43595.4. Samples: 151124840. Policy #0 lag: (min: 1.0, avg: 11.7, max: 20.0) +[2024-06-10 19:02:58,242][46753] Avg episode reward: [(0, '0.103')] +[2024-06-10 19:02:59,773][46990] Updated weights for policy 0, policy_version 9220 (0.0038) +[2024-06-10 19:03:03,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 151191552. Throughput: 0: 43849.4. Samples: 151267100. Policy #0 lag: (min: 1.0, avg: 11.7, max: 20.0) +[2024-06-10 19:03:03,240][46753] Avg episode reward: [(0, '0.109')] +[2024-06-10 19:03:03,733][46990] Updated weights for policy 0, policy_version 9230 (0.0042) +[2024-06-10 19:03:07,424][46990] Updated weights for policy 0, policy_version 9240 (0.0041) +[2024-06-10 19:03:08,240][46753] Fps is (10 sec: 40969.2, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 151404544. Throughput: 0: 43728.9. Samples: 151521760. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 19:03:08,240][46753] Avg episode reward: [(0, '0.096')] +[2024-06-10 19:03:11,380][46990] Updated weights for policy 0, policy_version 9250 (0.0032) +[2024-06-10 19:03:13,240][46753] Fps is (10 sec: 45874.8, 60 sec: 43965.9, 300 sec: 43820.2). Total num frames: 151650304. Throughput: 0: 43808.3. Samples: 151783260. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 19:03:13,240][46753] Avg episode reward: [(0, '0.112')] +[2024-06-10 19:03:14,972][46990] Updated weights for policy 0, policy_version 9260 (0.0033) +[2024-06-10 19:03:16,579][46970] Signal inference workers to stop experience collection... (2100 times) +[2024-06-10 19:03:16,599][46990] InferenceWorker_p0-w0: stopping experience collection (2100 times) +[2024-06-10 19:03:16,689][46970] Signal inference workers to resume experience collection... (2100 times) +[2024-06-10 19:03:16,690][46990] InferenceWorker_p0-w0: resuming experience collection (2100 times) +[2024-06-10 19:03:18,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 151830528. Throughput: 0: 43656.0. Samples: 151919140. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 19:03:18,240][46753] Avg episode reward: [(0, '0.110')] +[2024-06-10 19:03:18,859][46990] Updated weights for policy 0, policy_version 9270 (0.0022) +[2024-06-10 19:03:22,199][46990] Updated weights for policy 0, policy_version 9280 (0.0044) +[2024-06-10 19:03:23,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 152059904. Throughput: 0: 43571.0. Samples: 152176260. Policy #0 lag: (min: 0.0, avg: 12.5, max: 23.0) +[2024-06-10 19:03:23,240][46753] Avg episode reward: [(0, '0.112')] +[2024-06-10 19:03:26,339][46990] Updated weights for policy 0, policy_version 9290 (0.0033) +[2024-06-10 19:03:28,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 152305664. Throughput: 0: 43580.3. Samples: 152435800. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0) +[2024-06-10 19:03:28,240][46753] Avg episode reward: [(0, '0.112')] +[2024-06-10 19:03:29,733][46990] Updated weights for policy 0, policy_version 9300 (0.0032) +[2024-06-10 19:03:33,240][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 152485888. Throughput: 0: 43774.5. Samples: 152575560. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0) +[2024-06-10 19:03:33,240][46753] Avg episode reward: [(0, '0.116')] +[2024-06-10 19:03:33,635][46990] Updated weights for policy 0, policy_version 9310 (0.0034) +[2024-06-10 19:03:37,133][46990] Updated weights for policy 0, policy_version 9320 (0.0033) +[2024-06-10 19:03:38,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 152715264. Throughput: 0: 43792.5. Samples: 152837940. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:03:38,240][46753] Avg episode reward: [(0, '0.116')] +[2024-06-10 19:03:41,140][46990] Updated weights for policy 0, policy_version 9330 (0.0036) +[2024-06-10 19:03:43,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 152961024. Throughput: 0: 43810.6. Samples: 153096220. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 19:03:43,240][46753] Avg episode reward: [(0, '0.127')] +[2024-06-10 19:03:44,848][46990] Updated weights for policy 0, policy_version 9340 (0.0036) +[2024-06-10 19:03:48,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.8, 300 sec: 43709.2). 
Total num frames: 153157632. Throughput: 0: 43797.0. Samples: 153237960. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 19:03:48,240][46753] Avg episode reward: [(0, '0.108')] +[2024-06-10 19:03:48,707][46990] Updated weights for policy 0, policy_version 9350 (0.0038) +[2024-06-10 19:03:52,078][46990] Updated weights for policy 0, policy_version 9360 (0.0035) +[2024-06-10 19:03:53,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 153354240. Throughput: 0: 43834.3. Samples: 153494300. Policy #0 lag: (min: 0.0, avg: 12.4, max: 25.0) +[2024-06-10 19:03:53,240][46753] Avg episode reward: [(0, '0.111')] +[2024-06-10 19:03:56,262][46990] Updated weights for policy 0, policy_version 9370 (0.0037) +[2024-06-10 19:03:58,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43692.3, 300 sec: 43764.7). Total num frames: 153616384. Throughput: 0: 43683.2. Samples: 153749000. Policy #0 lag: (min: 0.0, avg: 12.4, max: 25.0) +[2024-06-10 19:03:58,240][46753] Avg episode reward: [(0, '0.106')] +[2024-06-10 19:03:59,593][46990] Updated weights for policy 0, policy_version 9380 (0.0029) +[2024-06-10 19:04:03,240][46753] Fps is (10 sec: 45874.7, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 153812992. Throughput: 0: 43674.1. Samples: 153884480. Policy #0 lag: (min: 0.0, avg: 12.0, max: 24.0) +[2024-06-10 19:04:03,240][46753] Avg episode reward: [(0, '0.130')] +[2024-06-10 19:04:03,719][46990] Updated weights for policy 0, policy_version 9390 (0.0027) +[2024-06-10 19:04:07,621][46990] Updated weights for policy 0, policy_version 9400 (0.0046) +[2024-06-10 19:04:08,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 154025984. Throughput: 0: 43702.8. Samples: 154142880. Policy #0 lag: (min: 0.0, avg: 11.7, max: 22.0) +[2024-06-10 19:04:08,240][46753] Avg episode reward: [(0, '0.110')] +[2024-06-10 19:04:11,223][46990] Updated weights for policy 0, policy_version 9410 (0.0039) +[2024-06-10 19:04:13,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 154271744. Throughput: 0: 43687.2. Samples: 154401720. Policy #0 lag: (min: 0.0, avg: 11.7, max: 22.0) +[2024-06-10 19:04:13,240][46753] Avg episode reward: [(0, '0.122')] +[2024-06-10 19:04:14,976][46990] Updated weights for policy 0, policy_version 9420 (0.0035) +[2024-06-10 19:04:18,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43654.3). Total num frames: 154451968. Throughput: 0: 43516.1. Samples: 154533780. Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 19:04:18,240][46753] Avg episode reward: [(0, '0.124')] +[2024-06-10 19:04:19,230][46990] Updated weights for policy 0, policy_version 9430 (0.0032) +[2024-06-10 19:04:22,249][46990] Updated weights for policy 0, policy_version 9440 (0.0034) +[2024-06-10 19:04:23,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 154664960. Throughput: 0: 43413.9. Samples: 154791560. Policy #0 lag: (min: 1.0, avg: 11.8, max: 20.0) +[2024-06-10 19:04:23,240][46753] Avg episode reward: [(0, '0.119')] +[2024-06-10 19:04:23,269][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000009441_154681344.pth... 
+[2024-06-10 19:04:23,326][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000008802_144211968.pth +[2024-06-10 19:04:26,601][46990] Updated weights for policy 0, policy_version 9450 (0.0041) +[2024-06-10 19:04:28,240][46753] Fps is (10 sec: 45874.5, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 154910720. Throughput: 0: 43561.3. Samples: 155056480. Policy #0 lag: (min: 1.0, avg: 11.8, max: 20.0) +[2024-06-10 19:04:28,240][46753] Avg episode reward: [(0, '0.125')] +[2024-06-10 19:04:30,211][46990] Updated weights for policy 0, policy_version 9460 (0.0022) +[2024-06-10 19:04:33,240][46753] Fps is (10 sec: 45874.2, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 155123712. Throughput: 0: 43475.4. Samples: 155194360. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 19:04:33,240][46753] Avg episode reward: [(0, '0.134')] +[2024-06-10 19:04:33,246][46970] Saving new best policy, reward=0.134! +[2024-06-10 19:04:33,663][46990] Updated weights for policy 0, policy_version 9470 (0.0029) +[2024-06-10 19:04:37,989][46990] Updated weights for policy 0, policy_version 9480 (0.0032) +[2024-06-10 19:04:38,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 155336704. Throughput: 0: 43524.5. Samples: 155452900. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 19:04:38,240][46753] Avg episode reward: [(0, '0.109')] +[2024-06-10 19:04:41,538][46990] Updated weights for policy 0, policy_version 9490 (0.0030) +[2024-06-10 19:04:43,240][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.5, 300 sec: 43820.2). Total num frames: 155566080. Throughput: 0: 43674.5. Samples: 155714360. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 19:04:43,240][46753] Avg episode reward: [(0, '0.114')] +[2024-06-10 19:04:45,252][46990] Updated weights for policy 0, policy_version 9500 (0.0030) +[2024-06-10 19:04:48,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 155762688. Throughput: 0: 43584.0. Samples: 155845760. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:04:48,240][46753] Avg episode reward: [(0, '0.111')] +[2024-06-10 19:04:49,264][46990] Updated weights for policy 0, policy_version 9510 (0.0038) +[2024-06-10 19:04:53,153][46990] Updated weights for policy 0, policy_version 9520 (0.0036) +[2024-06-10 19:04:53,239][46753] Fps is (10 sec: 40960.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 155975680. Throughput: 0: 43640.0. Samples: 156106680. Policy #0 lag: (min: 0.0, avg: 9.7, max: 23.0) +[2024-06-10 19:04:53,240][46753] Avg episode reward: [(0, '0.116')] +[2024-06-10 19:04:56,770][46990] Updated weights for policy 0, policy_version 9530 (0.0041) +[2024-06-10 19:04:57,553][46970] Signal inference workers to stop experience collection... (2150 times) +[2024-06-10 19:04:57,553][46970] Signal inference workers to resume experience collection... (2150 times) +[2024-06-10 19:04:57,585][46990] InferenceWorker_p0-w0: stopping experience collection (2150 times) +[2024-06-10 19:04:57,586][46990] InferenceWorker_p0-w0: resuming experience collection (2150 times) +[2024-06-10 19:04:58,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 156221440. Throughput: 0: 43700.5. Samples: 156368240. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 23.0) +[2024-06-10 19:04:58,240][46753] Avg episode reward: [(0, '0.119')] +[2024-06-10 19:05:00,727][46990] Updated weights for policy 0, policy_version 9540 (0.0036) +[2024-06-10 19:05:03,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 156434432. Throughput: 0: 43825.3. Samples: 156505920. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:05:03,240][46753] Avg episode reward: [(0, '0.122')] +[2024-06-10 19:05:03,730][46990] Updated weights for policy 0, policy_version 9550 (0.0036) +[2024-06-10 19:05:08,145][46990] Updated weights for policy 0, policy_version 9560 (0.0042) +[2024-06-10 19:05:08,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 156631040. Throughput: 0: 43840.0. Samples: 156764360. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:05:08,240][46753] Avg episode reward: [(0, '0.104')] +[2024-06-10 19:05:11,505][46990] Updated weights for policy 0, policy_version 9570 (0.0038) +[2024-06-10 19:05:13,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43144.6, 300 sec: 43709.3). Total num frames: 156860416. Throughput: 0: 43776.7. Samples: 157026420. Policy #0 lag: (min: 1.0, avg: 10.5, max: 23.0) +[2024-06-10 19:05:13,240][46753] Avg episode reward: [(0, '0.127')] +[2024-06-10 19:05:15,472][46990] Updated weights for policy 0, policy_version 9580 (0.0039) +[2024-06-10 19:05:18,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 157073408. Throughput: 0: 43633.1. Samples: 157157840. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 19:05:18,240][46753] Avg episode reward: [(0, '0.124')] +[2024-06-10 19:05:18,933][46990] Updated weights for policy 0, policy_version 9590 (0.0035) +[2024-06-10 19:05:23,154][46990] Updated weights for policy 0, policy_version 9600 (0.0035) +[2024-06-10 19:05:23,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 157286400. Throughput: 0: 43632.5. Samples: 157416360. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 19:05:23,240][46753] Avg episode reward: [(0, '0.114')] +[2024-06-10 19:05:26,327][46990] Updated weights for policy 0, policy_version 9610 (0.0025) +[2024-06-10 19:05:28,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 157515776. Throughput: 0: 43709.5. Samples: 157681280. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:05:28,240][46753] Avg episode reward: [(0, '0.127')] +[2024-06-10 19:05:30,639][46990] Updated weights for policy 0, policy_version 9620 (0.0032) +[2024-06-10 19:05:33,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 157745152. Throughput: 0: 43645.0. Samples: 157809780. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 19:05:33,240][46753] Avg episode reward: [(0, '0.123')] +[2024-06-10 19:05:33,539][46990] Updated weights for policy 0, policy_version 9630 (0.0042) +[2024-06-10 19:05:37,998][46990] Updated weights for policy 0, policy_version 9640 (0.0040) +[2024-06-10 19:05:38,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 157958144. Throughput: 0: 43689.7. Samples: 158072720. 
Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 19:05:38,240][46753] Avg episode reward: [(0, '0.127')] +[2024-06-10 19:05:41,134][46990] Updated weights for policy 0, policy_version 9650 (0.0025) +[2024-06-10 19:05:43,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 158171136. Throughput: 0: 43795.1. Samples: 158339020. Policy #0 lag: (min: 0.0, avg: 10.1, max: 24.0) +[2024-06-10 19:05:43,240][46753] Avg episode reward: [(0, '0.133')] +[2024-06-10 19:05:45,515][46990] Updated weights for policy 0, policy_version 9660 (0.0027) +[2024-06-10 19:05:48,239][46753] Fps is (10 sec: 45875.6, 60 sec: 44236.9, 300 sec: 43820.3). Total num frames: 158416896. Throughput: 0: 43740.5. Samples: 158474240. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 19:05:48,240][46753] Avg episode reward: [(0, '0.142')] +[2024-06-10 19:05:48,241][46970] Saving new best policy, reward=0.142! +[2024-06-10 19:05:48,630][46990] Updated weights for policy 0, policy_version 9670 (0.0022) +[2024-06-10 19:05:53,193][46990] Updated weights for policy 0, policy_version 9680 (0.0039) +[2024-06-10 19:05:53,240][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 158597120. Throughput: 0: 43718.5. Samples: 158731700. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 19:05:53,240][46753] Avg episode reward: [(0, '0.125')] +[2024-06-10 19:05:55,882][46990] Updated weights for policy 0, policy_version 9690 (0.0032) +[2024-06-10 19:05:58,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 158826496. Throughput: 0: 43873.4. Samples: 159000720. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 19:05:58,240][46753] Avg episode reward: [(0, '0.122')] +[2024-06-10 19:06:00,337][46990] Updated weights for policy 0, policy_version 9700 (0.0043) +[2024-06-10 19:06:03,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 159072256. Throughput: 0: 43850.1. Samples: 159131100. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 19:06:03,240][46753] Avg episode reward: [(0, '0.143')] +[2024-06-10 19:06:03,460][46990] Updated weights for policy 0, policy_version 9710 (0.0027) +[2024-06-10 19:06:07,638][46990] Updated weights for policy 0, policy_version 9720 (0.0035) +[2024-06-10 19:06:08,239][46753] Fps is (10 sec: 45875.0, 60 sec: 44236.8, 300 sec: 43709.2). Total num frames: 159285248. Throughput: 0: 43934.7. Samples: 159393420. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 19:06:08,240][46753] Avg episode reward: [(0, '0.129')] +[2024-06-10 19:06:11,071][46990] Updated weights for policy 0, policy_version 9730 (0.0030) +[2024-06-10 19:06:13,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 159481856. Throughput: 0: 43959.1. Samples: 159659440. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 19:06:13,240][46753] Avg episode reward: [(0, '0.124')] +[2024-06-10 19:06:15,180][46990] Updated weights for policy 0, policy_version 9740 (0.0033) +[2024-06-10 19:06:17,989][46970] Signal inference workers to stop experience collection... (2200 times) +[2024-06-10 19:06:18,038][46990] InferenceWorker_p0-w0: stopping experience collection (2200 times) +[2024-06-10 19:06:18,046][46970] Signal inference workers to resume experience collection... 
(2200 times) +[2024-06-10 19:06:18,060][46990] InferenceWorker_p0-w0: resuming experience collection (2200 times) +[2024-06-10 19:06:18,239][46753] Fps is (10 sec: 44236.6, 60 sec: 44236.7, 300 sec: 43820.2). Total num frames: 159727616. Throughput: 0: 43954.7. Samples: 159787740. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 19:06:18,240][46753] Avg episode reward: [(0, '0.139')] +[2024-06-10 19:06:18,390][46990] Updated weights for policy 0, policy_version 9750 (0.0038) +[2024-06-10 19:06:22,695][46990] Updated weights for policy 0, policy_version 9760 (0.0027) +[2024-06-10 19:06:23,244][46753] Fps is (10 sec: 44216.9, 60 sec: 43960.4, 300 sec: 43597.4). Total num frames: 159924224. Throughput: 0: 43920.5. Samples: 160049340. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:06:23,244][46753] Avg episode reward: [(0, '0.137')] +[2024-06-10 19:06:23,250][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000009761_159924224.pth... +[2024-06-10 19:06:23,325][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000009120_149422080.pth +[2024-06-10 19:06:25,653][46990] Updated weights for policy 0, policy_version 9770 (0.0037) +[2024-06-10 19:06:28,240][46753] Fps is (10 sec: 42597.5, 60 sec: 43963.6, 300 sec: 43709.1). Total num frames: 160153600. Throughput: 0: 43998.5. Samples: 160318960. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:06:28,240][46753] Avg episode reward: [(0, '0.139')] +[2024-06-10 19:06:29,874][46990] Updated weights for policy 0, policy_version 9780 (0.0030) +[2024-06-10 19:06:33,134][46990] Updated weights for policy 0, policy_version 9790 (0.0033) +[2024-06-10 19:06:33,240][46753] Fps is (10 sec: 47534.7, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 160399360. Throughput: 0: 43808.3. Samples: 160445620. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:06:33,240][46753] Avg episode reward: [(0, '0.126')] +[2024-06-10 19:06:37,467][46990] Updated weights for policy 0, policy_version 9800 (0.0028) +[2024-06-10 19:06:38,240][46753] Fps is (10 sec: 44237.5, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 160595968. Throughput: 0: 43973.3. Samples: 160710500. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 19:06:38,240][46753] Avg episode reward: [(0, '0.122')] +[2024-06-10 19:06:40,845][46990] Updated weights for policy 0, policy_version 9810 (0.0035) +[2024-06-10 19:06:43,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43963.7, 300 sec: 43709.8). Total num frames: 160808960. Throughput: 0: 43886.1. Samples: 160975600. Policy #0 lag: (min: 1.0, avg: 9.7, max: 23.0) +[2024-06-10 19:06:43,240][46753] Avg episode reward: [(0, '0.104')] +[2024-06-10 19:06:44,816][46990] Updated weights for policy 0, policy_version 9820 (0.0030) +[2024-06-10 19:06:48,099][46990] Updated weights for policy 0, policy_version 9830 (0.0043) +[2024-06-10 19:06:48,240][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 161054720. Throughput: 0: 43774.6. Samples: 161100960. Policy #0 lag: (min: 1.0, avg: 9.7, max: 23.0) +[2024-06-10 19:06:48,240][46753] Avg episode reward: [(0, '0.129')] +[2024-06-10 19:06:52,388][46990] Updated weights for policy 0, policy_version 9840 (0.0044) +[2024-06-10 19:06:53,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 161234944. Throughput: 0: 43842.6. Samples: 161366340. 
Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 19:06:53,240][46753] Avg episode reward: [(0, '0.128')] +[2024-06-10 19:06:55,492][46990] Updated weights for policy 0, policy_version 9850 (0.0034) +[2024-06-10 19:06:58,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 161464320. Throughput: 0: 43830.7. Samples: 161631820. Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 19:06:58,243][46753] Avg episode reward: [(0, '0.138')] +[2024-06-10 19:06:59,603][46990] Updated weights for policy 0, policy_version 9860 (0.0027) +[2024-06-10 19:07:03,021][46990] Updated weights for policy 0, policy_version 9870 (0.0034) +[2024-06-10 19:07:03,240][46753] Fps is (10 sec: 47513.5, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 161710080. Throughput: 0: 43914.6. Samples: 161763900. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:07:03,240][46753] Avg episode reward: [(0, '0.126')] +[2024-06-10 19:07:07,071][46990] Updated weights for policy 0, policy_version 9880 (0.0033) +[2024-06-10 19:07:08,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.6, 300 sec: 43709.6). Total num frames: 161906688. Throughput: 0: 43857.3. Samples: 162022720. Policy #0 lag: (min: 1.0, avg: 10.6, max: 22.0) +[2024-06-10 19:07:08,240][46753] Avg episode reward: [(0, '0.142')] +[2024-06-10 19:07:10,747][46990] Updated weights for policy 0, policy_version 9890 (0.0030) +[2024-06-10 19:07:13,239][46753] Fps is (10 sec: 39322.2, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 162103296. Throughput: 0: 43799.9. Samples: 162289940. Policy #0 lag: (min: 1.0, avg: 10.6, max: 22.0) +[2024-06-10 19:07:13,240][46753] Avg episode reward: [(0, '0.136')] +[2024-06-10 19:07:14,699][46990] Updated weights for policy 0, policy_version 9900 (0.0033) +[2024-06-10 19:07:17,984][46990] Updated weights for policy 0, policy_version 9910 (0.0029) +[2024-06-10 19:07:18,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 162365440. Throughput: 0: 43736.9. Samples: 162413780. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 19:07:18,240][46753] Avg episode reward: [(0, '0.132')] +[2024-06-10 19:07:22,184][46990] Updated weights for policy 0, policy_version 9920 (0.0046) +[2024-06-10 19:07:23,240][46753] Fps is (10 sec: 45874.4, 60 sec: 43967.0, 300 sec: 43709.2). Total num frames: 162562048. Throughput: 0: 43748.0. Samples: 162679160. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:07:23,242][46753] Avg episode reward: [(0, '0.129')] +[2024-06-10 19:07:25,420][46990] Updated weights for policy 0, policy_version 9930 (0.0035) +[2024-06-10 19:07:28,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43417.8, 300 sec: 43709.2). Total num frames: 162758656. Throughput: 0: 43717.0. Samples: 162942860. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:07:28,240][46753] Avg episode reward: [(0, '0.142')] +[2024-06-10 19:07:29,682][46990] Updated weights for policy 0, policy_version 9940 (0.0029) +[2024-06-10 19:07:33,089][46990] Updated weights for policy 0, policy_version 9950 (0.0039) +[2024-06-10 19:07:33,240][46753] Fps is (10 sec: 45874.8, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 163020800. Throughput: 0: 43737.3. Samples: 163069140. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:07:33,240][46753] Avg episode reward: [(0, '0.141')] +[2024-06-10 19:07:35,842][46970] Signal inference workers to stop experience collection... 
(2250 times) +[2024-06-10 19:07:35,842][46970] Signal inference workers to resume experience collection... (2250 times) +[2024-06-10 19:07:35,867][46990] InferenceWorker_p0-w0: stopping experience collection (2250 times) +[2024-06-10 19:07:35,868][46990] InferenceWorker_p0-w0: resuming experience collection (2250 times) +[2024-06-10 19:07:37,204][46990] Updated weights for policy 0, policy_version 9960 (0.0033) +[2024-06-10 19:07:38,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 163217408. Throughput: 0: 43656.5. Samples: 163330880. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:07:38,240][46753] Avg episode reward: [(0, '0.142')] +[2024-06-10 19:07:40,821][46990] Updated weights for policy 0, policy_version 9970 (0.0045) +[2024-06-10 19:07:43,239][46753] Fps is (10 sec: 37684.0, 60 sec: 43144.6, 300 sec: 43653.6). Total num frames: 163397632. Throughput: 0: 43728.9. Samples: 163599620. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:07:43,240][46753] Avg episode reward: [(0, '0.130')] +[2024-06-10 19:07:44,789][46990] Updated weights for policy 0, policy_version 9980 (0.0033) +[2024-06-10 19:07:48,119][46990] Updated weights for policy 0, policy_version 9990 (0.0042) +[2024-06-10 19:07:48,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 163676160. Throughput: 0: 43530.2. Samples: 163722760. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 19:07:48,240][46753] Avg episode reward: [(0, '0.135')] +[2024-06-10 19:07:52,307][46990] Updated weights for policy 0, policy_version 10000 (0.0036) +[2024-06-10 19:07:53,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.8, 300 sec: 43598.5). Total num frames: 163856384. Throughput: 0: 43678.8. Samples: 163988260. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 19:07:53,240][46753] Avg episode reward: [(0, '0.136')] +[2024-06-10 19:07:55,542][46990] Updated weights for policy 0, policy_version 10010 (0.0030) +[2024-06-10 19:07:58,239][46753] Fps is (10 sec: 37683.8, 60 sec: 43144.6, 300 sec: 43598.1). Total num frames: 164052992. Throughput: 0: 43630.2. Samples: 164253300. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 19:07:58,240][46753] Avg episode reward: [(0, '0.131')] +[2024-06-10 19:07:59,835][46990] Updated weights for policy 0, policy_version 10020 (0.0030) +[2024-06-10 19:08:03,122][46990] Updated weights for policy 0, policy_version 10030 (0.0039) +[2024-06-10 19:08:03,240][46753] Fps is (10 sec: 47512.8, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 164331520. Throughput: 0: 43611.5. Samples: 164376300. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 19:08:03,240][46753] Avg episode reward: [(0, '0.139')] +[2024-06-10 19:08:07,282][46990] Updated weights for policy 0, policy_version 10040 (0.0035) +[2024-06-10 19:08:08,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 164528128. Throughput: 0: 43543.7. Samples: 164638620. Policy #0 lag: (min: 0.0, avg: 12.0, max: 20.0) +[2024-06-10 19:08:08,240][46753] Avg episode reward: [(0, '0.144')] +[2024-06-10 19:08:08,299][46970] Saving new best policy, reward=0.144! +[2024-06-10 19:08:10,768][46990] Updated weights for policy 0, policy_version 10050 (0.0033) +[2024-06-10 19:08:13,240][46753] Fps is (10 sec: 36045.0, 60 sec: 43144.4, 300 sec: 43598.1). Total num frames: 164691968. Throughput: 0: 43605.2. Samples: 164905100. 
Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:08:13,240][46753] Avg episode reward: [(0, '0.127')] +[2024-06-10 19:08:14,791][46990] Updated weights for policy 0, policy_version 10060 (0.0030) +[2024-06-10 19:08:18,015][46990] Updated weights for policy 0, policy_version 10070 (0.0043) +[2024-06-10 19:08:18,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 164986880. Throughput: 0: 43546.3. Samples: 165028720. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:08:18,248][46753] Avg episode reward: [(0, '0.133')] +[2024-06-10 19:08:22,259][46990] Updated weights for policy 0, policy_version 10080 (0.0033) +[2024-06-10 19:08:23,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 165167104. Throughput: 0: 43681.3. Samples: 165296540. Policy #0 lag: (min: 0.0, avg: 7.1, max: 21.0) +[2024-06-10 19:08:23,240][46753] Avg episode reward: [(0, '0.130')] +[2024-06-10 19:08:23,352][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000010082_165183488.pth... +[2024-06-10 19:08:23,391][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000009441_154681344.pth +[2024-06-10 19:08:25,494][46990] Updated weights for policy 0, policy_version 10090 (0.0027) +[2024-06-10 19:08:28,239][46753] Fps is (10 sec: 37683.7, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 165363712. Throughput: 0: 43689.8. Samples: 165565660. Policy #0 lag: (min: 0.0, avg: 7.1, max: 21.0) +[2024-06-10 19:08:28,240][46753] Avg episode reward: [(0, '0.140')] +[2024-06-10 19:08:29,906][46990] Updated weights for policy 0, policy_version 10100 (0.0037) +[2024-06-10 19:08:33,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43417.8, 300 sec: 43764.7). Total num frames: 165625856. Throughput: 0: 43648.2. Samples: 165686920. Policy #0 lag: (min: 0.0, avg: 7.5, max: 20.0) +[2024-06-10 19:08:33,240][46753] Avg episode reward: [(0, '0.140')] +[2024-06-10 19:08:33,263][46990] Updated weights for policy 0, policy_version 10110 (0.0027) +[2024-06-10 19:08:37,369][46990] Updated weights for policy 0, policy_version 10120 (0.0050) +[2024-06-10 19:08:38,244][46753] Fps is (10 sec: 47492.0, 60 sec: 43687.4, 300 sec: 43653.0). Total num frames: 165838848. Throughput: 0: 43627.1. Samples: 165951680. Policy #0 lag: (min: 1.0, avg: 10.5, max: 23.0) +[2024-06-10 19:08:38,245][46753] Avg episode reward: [(0, '0.150')] +[2024-06-10 19:08:38,245][46970] Saving new best policy, reward=0.150! +[2024-06-10 19:08:40,614][46990] Updated weights for policy 0, policy_version 10130 (0.0034) +[2024-06-10 19:08:43,239][46753] Fps is (10 sec: 39321.1, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 166019072. Throughput: 0: 43702.6. Samples: 166219920. Policy #0 lag: (min: 1.0, avg: 10.5, max: 23.0) +[2024-06-10 19:08:43,240][46753] Avg episode reward: [(0, '0.136')] +[2024-06-10 19:08:43,700][46970] Signal inference workers to stop experience collection... (2300 times) +[2024-06-10 19:08:43,731][46990] InferenceWorker_p0-w0: stopping experience collection (2300 times) +[2024-06-10 19:08:43,757][46970] Signal inference workers to resume experience collection... 
(2300 times) +[2024-06-10 19:08:43,758][46990] InferenceWorker_p0-w0: resuming experience collection (2300 times) +[2024-06-10 19:08:44,691][46990] Updated weights for policy 0, policy_version 10140 (0.0035) +[2024-06-10 19:08:48,043][46990] Updated weights for policy 0, policy_version 10150 (0.0026) +[2024-06-10 19:08:48,239][46753] Fps is (10 sec: 45896.0, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 166297600. Throughput: 0: 43666.8. Samples: 166341300. Policy #0 lag: (min: 0.0, avg: 11.0, max: 25.0) +[2024-06-10 19:08:48,240][46753] Avg episode reward: [(0, '0.121')] +[2024-06-10 19:08:52,330][46990] Updated weights for policy 0, policy_version 10160 (0.0035) +[2024-06-10 19:08:53,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 166477824. Throughput: 0: 43843.6. Samples: 166611580. Policy #0 lag: (min: 0.0, avg: 11.0, max: 25.0) +[2024-06-10 19:08:53,240][46753] Avg episode reward: [(0, '0.130')] +[2024-06-10 19:08:55,594][46990] Updated weights for policy 0, policy_version 10170 (0.0027) +[2024-06-10 19:08:58,244][46753] Fps is (10 sec: 37666.1, 60 sec: 43687.4, 300 sec: 43597.5). Total num frames: 166674432. Throughput: 0: 43822.3. Samples: 166877300. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:08:58,245][46753] Avg episode reward: [(0, '0.137')] +[2024-06-10 19:08:59,707][46990] Updated weights for policy 0, policy_version 10180 (0.0029) +[2024-06-10 19:09:03,239][46753] Fps is (10 sec: 45874.5, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 166936576. Throughput: 0: 43832.9. Samples: 167001200. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:09:03,242][46753] Avg episode reward: [(0, '0.151')] +[2024-06-10 19:09:03,255][46970] Saving new best policy, reward=0.151! +[2024-06-10 19:09:03,451][46990] Updated weights for policy 0, policy_version 10190 (0.0041) +[2024-06-10 19:09:07,242][46990] Updated weights for policy 0, policy_version 10200 (0.0039) +[2024-06-10 19:09:08,239][46753] Fps is (10 sec: 49174.4, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 167165952. Throughput: 0: 43786.3. Samples: 167266920. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:09:08,240][46753] Avg episode reward: [(0, '0.133')] +[2024-06-10 19:09:10,601][46990] Updated weights for policy 0, policy_version 10210 (0.0027) +[2024-06-10 19:09:13,240][46753] Fps is (10 sec: 39321.5, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 167329792. Throughput: 0: 43772.3. Samples: 167535420. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 19:09:13,242][46753] Avg episode reward: [(0, '0.139')] +[2024-06-10 19:09:14,860][46990] Updated weights for policy 0, policy_version 10220 (0.0037) +[2024-06-10 19:09:17,970][46990] Updated weights for policy 0, policy_version 10230 (0.0043) +[2024-06-10 19:09:18,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 167608320. Throughput: 0: 43770.1. Samples: 167656580. Policy #0 lag: (min: 1.0, avg: 11.7, max: 20.0) +[2024-06-10 19:09:18,240][46753] Avg episode reward: [(0, '0.129')] +[2024-06-10 19:09:22,301][46990] Updated weights for policy 0, policy_version 10240 (0.0026) +[2024-06-10 19:09:23,239][46753] Fps is (10 sec: 49152.8, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 167821312. Throughput: 0: 43889.3. Samples: 167926500. 
Policy #0 lag: (min: 1.0, avg: 11.7, max: 20.0) +[2024-06-10 19:09:23,240][46753] Avg episode reward: [(0, '0.142')] +[2024-06-10 19:09:25,475][46990] Updated weights for policy 0, policy_version 10250 (0.0033) +[2024-06-10 19:09:28,239][46753] Fps is (10 sec: 37683.4, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 167985152. Throughput: 0: 43690.3. Samples: 168185980. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 19:09:28,240][46753] Avg episode reward: [(0, '0.141')] +[2024-06-10 19:09:29,976][46990] Updated weights for policy 0, policy_version 10260 (0.0045) +[2024-06-10 19:09:33,175][46990] Updated weights for policy 0, policy_version 10270 (0.0030) +[2024-06-10 19:09:33,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 168263680. Throughput: 0: 43718.1. Samples: 168308620. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 19:09:33,240][46753] Avg episode reward: [(0, '0.141')] +[2024-06-10 19:09:37,393][46990] Updated weights for policy 0, policy_version 10280 (0.0031) +[2024-06-10 19:09:38,239][46753] Fps is (10 sec: 49152.4, 60 sec: 43967.1, 300 sec: 43764.8). Total num frames: 168476672. Throughput: 0: 43596.9. Samples: 168573440. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:09:38,240][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:09:40,763][46990] Updated weights for policy 0, policy_version 10290 (0.0037) +[2024-06-10 19:09:43,240][46753] Fps is (10 sec: 37683.0, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 168640512. Throughput: 0: 43586.9. Samples: 168838520. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:09:43,240][46753] Avg episode reward: [(0, '0.132')] +[2024-06-10 19:09:44,253][46970] Signal inference workers to stop experience collection... (2350 times) +[2024-06-10 19:09:44,306][46970] Signal inference workers to resume experience collection... (2350 times) +[2024-06-10 19:09:44,307][46990] InferenceWorker_p0-w0: stopping experience collection (2350 times) +[2024-06-10 19:09:44,324][46990] InferenceWorker_p0-w0: resuming experience collection (2350 times) +[2024-06-10 19:09:45,077][46990] Updated weights for policy 0, policy_version 10300 (0.0035) +[2024-06-10 19:09:48,223][46990] Updated weights for policy 0, policy_version 10310 (0.0032) +[2024-06-10 19:09:48,239][46753] Fps is (10 sec: 44236.1, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 168919040. Throughput: 0: 43583.1. Samples: 168962440. Policy #0 lag: (min: 0.0, avg: 9.2, max: 22.0) +[2024-06-10 19:09:48,240][46753] Avg episode reward: [(0, '0.144')] +[2024-06-10 19:09:52,359][46990] Updated weights for policy 0, policy_version 10320 (0.0038) +[2024-06-10 19:09:53,239][46753] Fps is (10 sec: 49152.8, 60 sec: 44236.7, 300 sec: 43764.7). Total num frames: 169132032. Throughput: 0: 43778.6. Samples: 169236960. Policy #0 lag: (min: 0.0, avg: 8.9, max: 23.0) +[2024-06-10 19:09:53,240][46753] Avg episode reward: [(0, '0.139')] +[2024-06-10 19:09:55,782][46990] Updated weights for policy 0, policy_version 10330 (0.0029) +[2024-06-10 19:09:58,240][46753] Fps is (10 sec: 39321.3, 60 sec: 43966.9, 300 sec: 43653.6). Total num frames: 169312256. Throughput: 0: 43545.3. Samples: 169494960. 
Policy #0 lag: (min: 0.0, avg: 8.9, max: 23.0) +[2024-06-10 19:09:58,240][46753] Avg episode reward: [(0, '0.143')] +[2024-06-10 19:09:59,828][46990] Updated weights for policy 0, policy_version 10340 (0.0043) +[2024-06-10 19:10:03,184][46990] Updated weights for policy 0, policy_version 10350 (0.0038) +[2024-06-10 19:10:03,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 169574400. Throughput: 0: 43560.4. Samples: 169616800. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 19:10:03,240][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:10:03,245][46970] Saving new best policy, reward=0.152! +[2024-06-10 19:10:07,347][46990] Updated weights for policy 0, policy_version 10360 (0.0041) +[2024-06-10 19:10:08,239][46753] Fps is (10 sec: 47514.4, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 169787392. Throughput: 0: 43615.1. Samples: 169889180. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 19:10:08,240][46753] Avg episode reward: [(0, '0.143')] +[2024-06-10 19:10:10,942][46990] Updated weights for policy 0, policy_version 10370 (0.0028) +[2024-06-10 19:10:13,239][46753] Fps is (10 sec: 39322.1, 60 sec: 43963.9, 300 sec: 43709.2). Total num frames: 169967616. Throughput: 0: 43574.8. Samples: 170146840. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 19:10:13,240][46753] Avg episode reward: [(0, '0.132')] +[2024-06-10 19:10:14,829][46990] Updated weights for policy 0, policy_version 10380 (0.0040) +[2024-06-10 19:10:18,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.6, 300 sec: 43820.2). Total num frames: 170213376. Throughput: 0: 43655.2. Samples: 170273100. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:10:18,240][46753] Avg episode reward: [(0, '0.143')] +[2024-06-10 19:10:18,375][46990] Updated weights for policy 0, policy_version 10390 (0.0034) +[2024-06-10 19:10:22,442][46990] Updated weights for policy 0, policy_version 10400 (0.0042) +[2024-06-10 19:10:23,240][46753] Fps is (10 sec: 47512.5, 60 sec: 43690.5, 300 sec: 43820.2). Total num frames: 170442752. Throughput: 0: 43779.8. Samples: 170543540. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:10:23,240][46753] Avg episode reward: [(0, '0.150')] +[2024-06-10 19:10:23,262][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000010403_170442752.pth... +[2024-06-10 19:10:23,328][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000009761_159924224.pth +[2024-06-10 19:10:25,909][46990] Updated weights for policy 0, policy_version 10410 (0.0028) +[2024-06-10 19:10:28,243][46753] Fps is (10 sec: 42584.5, 60 sec: 44234.4, 300 sec: 43708.7). Total num frames: 170639360. Throughput: 0: 43580.1. Samples: 170799760. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:10:28,243][46753] Avg episode reward: [(0, '0.141')] +[2024-06-10 19:10:29,890][46990] Updated weights for policy 0, policy_version 10420 (0.0032) +[2024-06-10 19:10:33,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 170868736. Throughput: 0: 43676.6. Samples: 170927880. 
Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:10:33,240][46753] Avg episode reward: [(0, '0.146')] +[2024-06-10 19:10:33,255][46990] Updated weights for policy 0, policy_version 10430 (0.0036) +[2024-06-10 19:10:37,469][46990] Updated weights for policy 0, policy_version 10440 (0.0035) +[2024-06-10 19:10:38,239][46753] Fps is (10 sec: 45890.0, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 171098112. Throughput: 0: 43486.6. Samples: 171193860. Policy #0 lag: (min: 0.0, avg: 11.0, max: 24.0) +[2024-06-10 19:10:38,240][46753] Avg episode reward: [(0, '0.156')] +[2024-06-10 19:10:38,241][46970] Saving new best policy, reward=0.156! +[2024-06-10 19:10:41,211][46990] Updated weights for policy 0, policy_version 10450 (0.0033) +[2024-06-10 19:10:43,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43690.9, 300 sec: 43542.6). Total num frames: 171261952. Throughput: 0: 43465.1. Samples: 171450880. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:10:43,240][46753] Avg episode reward: [(0, '0.146')] +[2024-06-10 19:10:44,994][46990] Updated weights for policy 0, policy_version 10460 (0.0031) +[2024-06-10 19:10:48,239][46753] Fps is (10 sec: 39321.8, 60 sec: 42871.5, 300 sec: 43709.2). Total num frames: 171491328. Throughput: 0: 43452.9. Samples: 171572180. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:10:48,240][46753] Avg episode reward: [(0, '0.161')] +[2024-06-10 19:10:48,417][46970] Saving new best policy, reward=0.161! +[2024-06-10 19:10:48,643][46990] Updated weights for policy 0, policy_version 10470 (0.0041) +[2024-06-10 19:10:52,494][46990] Updated weights for policy 0, policy_version 10480 (0.0032) +[2024-06-10 19:10:53,239][46753] Fps is (10 sec: 49151.3, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 171753472. Throughput: 0: 43549.7. Samples: 171848920. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:10:53,240][46753] Avg episode reward: [(0, '0.138')] +[2024-06-10 19:10:55,287][46970] Signal inference workers to stop experience collection... (2400 times) +[2024-06-10 19:10:55,318][46990] InferenceWorker_p0-w0: stopping experience collection (2400 times) +[2024-06-10 19:10:55,344][46970] Signal inference workers to resume experience collection... (2400 times) +[2024-06-10 19:10:55,345][46990] InferenceWorker_p0-w0: resuming experience collection (2400 times) +[2024-06-10 19:10:55,862][46990] Updated weights for policy 0, policy_version 10490 (0.0038) +[2024-06-10 19:10:58,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 171950080. Throughput: 0: 43540.3. Samples: 172106160. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:10:58,240][46753] Avg episode reward: [(0, '0.142')] +[2024-06-10 19:10:59,879][46990] Updated weights for policy 0, policy_version 10500 (0.0040) +[2024-06-10 19:11:03,244][46753] Fps is (10 sec: 42579.6, 60 sec: 43414.4, 300 sec: 43708.5). Total num frames: 172179456. Throughput: 0: 43675.2. Samples: 172238680. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 19:11:03,244][46753] Avg episode reward: [(0, '0.144')] +[2024-06-10 19:11:03,360][46990] Updated weights for policy 0, policy_version 10510 (0.0039) +[2024-06-10 19:11:07,350][46990] Updated weights for policy 0, policy_version 10520 (0.0037) +[2024-06-10 19:11:08,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 172408832. Throughput: 0: 43550.3. Samples: 172503300. 
Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:11:08,240][46753] Avg episode reward: [(0, '0.133')] +[2024-06-10 19:11:11,118][46990] Updated weights for policy 0, policy_version 10530 (0.0038) +[2024-06-10 19:11:13,239][46753] Fps is (10 sec: 40978.6, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 172589056. Throughput: 0: 43719.7. Samples: 172767000. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:11:13,240][46753] Avg episode reward: [(0, '0.140')] +[2024-06-10 19:11:14,890][46990] Updated weights for policy 0, policy_version 10540 (0.0032) +[2024-06-10 19:11:18,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43144.5, 300 sec: 43654.3). Total num frames: 172802048. Throughput: 0: 43609.3. Samples: 172890300. Policy #0 lag: (min: 0.0, avg: 12.0, max: 24.0) +[2024-06-10 19:11:18,240][46753] Avg episode reward: [(0, '0.149')] +[2024-06-10 19:11:18,792][46990] Updated weights for policy 0, policy_version 10550 (0.0030) +[2024-06-10 19:11:22,579][46990] Updated weights for policy 0, policy_version 10560 (0.0025) +[2024-06-10 19:11:23,239][46753] Fps is (10 sec: 47512.8, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 173064192. Throughput: 0: 43708.9. Samples: 173160760. Policy #0 lag: (min: 0.0, avg: 12.0, max: 24.0) +[2024-06-10 19:11:23,240][46753] Avg episode reward: [(0, '0.155')] +[2024-06-10 19:11:26,190][46990] Updated weights for policy 0, policy_version 10570 (0.0028) +[2024-06-10 19:11:28,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43693.0, 300 sec: 43598.1). Total num frames: 173260800. Throughput: 0: 43660.8. Samples: 173415620. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 19:11:28,240][46753] Avg episode reward: [(0, '0.144')] +[2024-06-10 19:11:30,104][46990] Updated weights for policy 0, policy_version 10580 (0.0037) +[2024-06-10 19:11:33,244][46753] Fps is (10 sec: 39304.3, 60 sec: 43141.3, 300 sec: 43597.5). Total num frames: 173457408. Throughput: 0: 43888.1. Samples: 173547340. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 19:11:33,245][46753] Avg episode reward: [(0, '0.145')] +[2024-06-10 19:11:34,056][46990] Updated weights for policy 0, policy_version 10590 (0.0039) +[2024-06-10 19:11:37,312][46990] Updated weights for policy 0, policy_version 10600 (0.0041) +[2024-06-10 19:11:38,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 173703168. Throughput: 0: 43636.9. Samples: 173812580. Policy #0 lag: (min: 1.0, avg: 10.7, max: 23.0) +[2024-06-10 19:11:38,240][46753] Avg episode reward: [(0, '0.167')] +[2024-06-10 19:11:38,242][46970] Saving new best policy, reward=0.167! +[2024-06-10 19:11:41,382][46990] Updated weights for policy 0, policy_version 10610 (0.0034) +[2024-06-10 19:11:43,239][46753] Fps is (10 sec: 44256.4, 60 sec: 43963.6, 300 sec: 43542.6). Total num frames: 173899776. Throughput: 0: 43765.3. Samples: 174075600. Policy #0 lag: (min: 0.0, avg: 11.2, max: 24.0) +[2024-06-10 19:11:43,240][46753] Avg episode reward: [(0, '0.144')] +[2024-06-10 19:11:45,007][46990] Updated weights for policy 0, policy_version 10620 (0.0040) +[2024-06-10 19:11:48,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 174096384. Throughput: 0: 43676.4. Samples: 174203920. 
Policy #0 lag: (min: 0.0, avg: 11.2, max: 24.0) +[2024-06-10 19:11:48,240][46753] Avg episode reward: [(0, '0.145')] +[2024-06-10 19:11:48,836][46990] Updated weights for policy 0, policy_version 10630 (0.0045) +[2024-06-10 19:11:52,848][46990] Updated weights for policy 0, policy_version 10640 (0.0027) +[2024-06-10 19:11:53,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 174358528. Throughput: 0: 43663.1. Samples: 174468140. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 19:11:53,240][46753] Avg episode reward: [(0, '0.147')] +[2024-06-10 19:11:56,311][46990] Updated weights for policy 0, policy_version 10650 (0.0034) +[2024-06-10 19:11:58,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 174571520. Throughput: 0: 43617.8. Samples: 174729800. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 19:11:58,240][46753] Avg episode reward: [(0, '0.159')] +[2024-06-10 19:12:00,013][46990] Updated weights for policy 0, policy_version 10660 (0.0037) +[2024-06-10 19:12:00,178][46970] Signal inference workers to stop experience collection... (2450 times) +[2024-06-10 19:12:00,222][46990] InferenceWorker_p0-w0: stopping experience collection (2450 times) +[2024-06-10 19:12:00,229][46970] Signal inference workers to resume experience collection... (2450 times) +[2024-06-10 19:12:00,241][46990] InferenceWorker_p0-w0: resuming experience collection (2450 times) +[2024-06-10 19:12:03,240][46753] Fps is (10 sec: 39321.4, 60 sec: 42874.6, 300 sec: 43542.6). Total num frames: 174751744. Throughput: 0: 43880.4. Samples: 174864920. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 19:12:03,240][46753] Avg episode reward: [(0, '0.143')] +[2024-06-10 19:12:03,815][46990] Updated weights for policy 0, policy_version 10670 (0.0025) +[2024-06-10 19:12:07,109][46990] Updated weights for policy 0, policy_version 10680 (0.0042) +[2024-06-10 19:12:08,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 175030272. Throughput: 0: 43781.9. Samples: 175130940. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:12:08,240][46753] Avg episode reward: [(0, '0.137')] +[2024-06-10 19:12:11,458][46990] Updated weights for policy 0, policy_version 10690 (0.0028) +[2024-06-10 19:12:13,239][46753] Fps is (10 sec: 49152.4, 60 sec: 44236.7, 300 sec: 43653.6). Total num frames: 175243264. Throughput: 0: 43810.3. Samples: 175387080. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:12:13,240][46753] Avg episode reward: [(0, '0.162')] +[2024-06-10 19:12:15,006][46990] Updated weights for policy 0, policy_version 10700 (0.0027) +[2024-06-10 19:12:18,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 175423488. Throughput: 0: 43826.2. Samples: 175519320. Policy #0 lag: (min: 0.0, avg: 9.8, max: 20.0) +[2024-06-10 19:12:18,240][46753] Avg episode reward: [(0, '0.160')] +[2024-06-10 19:12:18,793][46990] Updated weights for policy 0, policy_version 10710 (0.0038) +[2024-06-10 19:12:22,701][46990] Updated weights for policy 0, policy_version 10720 (0.0040) +[2024-06-10 19:12:23,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 175669248. Throughput: 0: 43872.5. Samples: 175786840. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 20.0) +[2024-06-10 19:12:23,240][46753] Avg episode reward: [(0, '0.156')] +[2024-06-10 19:12:23,252][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000010722_175669248.pth... +[2024-06-10 19:12:23,303][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000010082_165183488.pth +[2024-06-10 19:12:26,385][46990] Updated weights for policy 0, policy_version 10730 (0.0035) +[2024-06-10 19:12:28,240][46753] Fps is (10 sec: 47512.5, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 175898624. Throughput: 0: 43681.7. Samples: 176041280. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 19:12:28,240][46753] Avg episode reward: [(0, '0.154')] +[2024-06-10 19:12:30,110][46990] Updated weights for policy 0, policy_version 10740 (0.0029) +[2024-06-10 19:12:33,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43420.9, 300 sec: 43542.6). Total num frames: 176062464. Throughput: 0: 43707.1. Samples: 176170740. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:12:33,240][46753] Avg episode reward: [(0, '0.161')] +[2024-06-10 19:12:34,072][46990] Updated weights for policy 0, policy_version 10750 (0.0032) +[2024-06-10 19:12:37,226][46990] Updated weights for policy 0, policy_version 10760 (0.0028) +[2024-06-10 19:12:38,239][46753] Fps is (10 sec: 40960.7, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 176308224. Throughput: 0: 43803.6. Samples: 176439300. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:12:38,240][46753] Avg episode reward: [(0, '0.156')] +[2024-06-10 19:12:41,530][46990] Updated weights for policy 0, policy_version 10770 (0.0035) +[2024-06-10 19:12:43,240][46753] Fps is (10 sec: 49151.2, 60 sec: 44236.8, 300 sec: 43653.6). Total num frames: 176553984. Throughput: 0: 43737.2. Samples: 176697980. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 19:12:43,240][46753] Avg episode reward: [(0, '0.146')] +[2024-06-10 19:12:44,921][46990] Updated weights for policy 0, policy_version 10780 (0.0039) +[2024-06-10 19:12:48,244][46753] Fps is (10 sec: 42579.2, 60 sec: 43960.4, 300 sec: 43653.0). Total num frames: 176734208. Throughput: 0: 43676.6. Samples: 176830560. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 19:12:48,245][46753] Avg episode reward: [(0, '0.161')] +[2024-06-10 19:12:48,778][46990] Updated weights for policy 0, policy_version 10790 (0.0031) +[2024-06-10 19:12:52,516][46990] Updated weights for policy 0, policy_version 10800 (0.0032) +[2024-06-10 19:12:53,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 176979968. Throughput: 0: 43700.2. Samples: 177097460. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 19:12:53,240][46753] Avg episode reward: [(0, '0.154')] +[2024-06-10 19:12:56,335][46990] Updated weights for policy 0, policy_version 10810 (0.0039) +[2024-06-10 19:12:58,239][46753] Fps is (10 sec: 47534.9, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 177209344. Throughput: 0: 43634.7. Samples: 177350640. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 19:12:58,240][46753] Avg episode reward: [(0, '0.158')] +[2024-06-10 19:12:59,898][46990] Updated weights for policy 0, policy_version 10820 (0.0028) +[2024-06-10 19:13:03,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 177389568. Throughput: 0: 43750.5. Samples: 177488100. 
Policy #0 lag: (min: 0.0, avg: 12.1, max: 21.0) +[2024-06-10 19:13:03,240][46753] Avg episode reward: [(0, '0.153')] +[2024-06-10 19:13:04,068][46990] Updated weights for policy 0, policy_version 10830 (0.0026) +[2024-06-10 19:13:07,222][46990] Updated weights for policy 0, policy_version 10840 (0.0038) +[2024-06-10 19:13:08,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43144.5, 300 sec: 43820.3). Total num frames: 177618944. Throughput: 0: 43508.9. Samples: 177744740. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:13:08,240][46753] Avg episode reward: [(0, '0.153')] +[2024-06-10 19:13:11,644][46990] Updated weights for policy 0, policy_version 10850 (0.0041) +[2024-06-10 19:13:13,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 177864704. Throughput: 0: 43602.9. Samples: 178003400. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:13:13,240][46753] Avg episode reward: [(0, '0.146')] +[2024-06-10 19:13:14,874][46990] Updated weights for policy 0, policy_version 10860 (0.0028) +[2024-06-10 19:13:18,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 178028544. Throughput: 0: 43695.1. Samples: 178137020. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:13:18,240][46753] Avg episode reward: [(0, '0.158')] +[2024-06-10 19:13:18,896][46990] Updated weights for policy 0, policy_version 10870 (0.0035) +[2024-06-10 19:13:22,375][46970] Signal inference workers to stop experience collection... (2500 times) +[2024-06-10 19:13:22,407][46990] InferenceWorker_p0-w0: stopping experience collection (2500 times) +[2024-06-10 19:13:22,430][46970] Signal inference workers to resume experience collection... (2500 times) +[2024-06-10 19:13:22,431][46990] InferenceWorker_p0-w0: resuming experience collection (2500 times) +[2024-06-10 19:13:22,569][46990] Updated weights for policy 0, policy_version 10880 (0.0025) +[2024-06-10 19:13:23,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 178290688. Throughput: 0: 43610.2. Samples: 178401760. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:13:23,240][46753] Avg episode reward: [(0, '0.164')] +[2024-06-10 19:13:26,647][46990] Updated weights for policy 0, policy_version 10890 (0.0036) +[2024-06-10 19:13:28,239][46753] Fps is (10 sec: 50790.1, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 178536448. Throughput: 0: 43625.0. Samples: 178661100. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 19:13:28,240][46753] Avg episode reward: [(0, '0.143')] +[2024-06-10 19:13:29,962][46990] Updated weights for policy 0, policy_version 10900 (0.0046) +[2024-06-10 19:13:33,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43963.6, 300 sec: 43598.8). Total num frames: 178700288. Throughput: 0: 43689.2. Samples: 178796380. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 19:13:33,240][46753] Avg episode reward: [(0, '0.161')] +[2024-06-10 19:13:34,042][46990] Updated weights for policy 0, policy_version 10910 (0.0035) +[2024-06-10 19:13:37,350][46990] Updated weights for policy 0, policy_version 10920 (0.0030) +[2024-06-10 19:13:38,240][46753] Fps is (10 sec: 39321.3, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 178929664. Throughput: 0: 43430.3. Samples: 179051820. 
Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 19:13:38,240][46753] Avg episode reward: [(0, '0.160')] +[2024-06-10 19:13:41,553][46990] Updated weights for policy 0, policy_version 10930 (0.0045) +[2024-06-10 19:13:43,240][46753] Fps is (10 sec: 47513.1, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 179175424. Throughput: 0: 43589.1. Samples: 179312160. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 19:13:43,240][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:13:44,650][46990] Updated weights for policy 0, policy_version 10940 (0.0048) +[2024-06-10 19:13:48,244][46753] Fps is (10 sec: 42579.5, 60 sec: 43690.6, 300 sec: 43653.0). Total num frames: 179355648. Throughput: 0: 43657.0. Samples: 179452860. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 19:13:48,245][46753] Avg episode reward: [(0, '0.143')] +[2024-06-10 19:13:48,859][46990] Updated weights for policy 0, policy_version 10950 (0.0032) +[2024-06-10 19:13:52,343][46990] Updated weights for policy 0, policy_version 10960 (0.0027) +[2024-06-10 19:13:53,239][46753] Fps is (10 sec: 42599.7, 60 sec: 43690.9, 300 sec: 43820.9). Total num frames: 179601408. Throughput: 0: 43968.1. Samples: 179723300. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:13:53,240][46753] Avg episode reward: [(0, '0.154')] +[2024-06-10 19:13:56,213][46990] Updated weights for policy 0, policy_version 10970 (0.0038) +[2024-06-10 19:13:58,239][46753] Fps is (10 sec: 47535.1, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 179830784. Throughput: 0: 43934.2. Samples: 179980440. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:13:58,240][46753] Avg episode reward: [(0, '0.150')] +[2024-06-10 19:13:59,600][46990] Updated weights for policy 0, policy_version 10980 (0.0038) +[2024-06-10 19:14:03,240][46753] Fps is (10 sec: 42597.3, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 180027392. Throughput: 0: 43981.2. Samples: 180116180. Policy #0 lag: (min: 0.0, avg: 12.0, max: 25.0) +[2024-06-10 19:14:03,240][46753] Avg episode reward: [(0, '0.167')] +[2024-06-10 19:14:03,540][46990] Updated weights for policy 0, policy_version 10990 (0.0035) +[2024-06-10 19:14:07,005][46990] Updated weights for policy 0, policy_version 11000 (0.0040) +[2024-06-10 19:14:08,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 180256768. Throughput: 0: 43867.6. Samples: 180375800. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:14:08,240][46753] Avg episode reward: [(0, '0.154')] +[2024-06-10 19:14:11,295][46990] Updated weights for policy 0, policy_version 11010 (0.0034) +[2024-06-10 19:14:13,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 180502528. Throughput: 0: 43839.1. Samples: 180633860. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:14:13,242][46753] Avg episode reward: [(0, '0.135')] +[2024-06-10 19:14:14,588][46990] Updated weights for policy 0, policy_version 11020 (0.0044) +[2024-06-10 19:14:18,240][46753] Fps is (10 sec: 42597.7, 60 sec: 44236.7, 300 sec: 43598.1). Total num frames: 180682752. Throughput: 0: 43868.8. Samples: 180770480. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:14:18,240][46753] Avg episode reward: [(0, '0.155')] +[2024-06-10 19:14:18,543][46990] Updated weights for policy 0, policy_version 11030 (0.0029) +[2024-06-10 19:14:22,357][46990] Updated weights for policy 0, policy_version 11040 (0.0037) +[2024-06-10 19:14:23,240][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 180928512. Throughput: 0: 44095.6. Samples: 181036120. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:14:23,251][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:14:23,260][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000011043_180928512.pth... +[2024-06-10 19:14:23,316][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000010403_170442752.pth +[2024-06-10 19:14:26,297][46990] Updated weights for policy 0, policy_version 11050 (0.0036) +[2024-06-10 19:14:28,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 181157888. Throughput: 0: 44008.1. Samples: 181292520. Policy #0 lag: (min: 0.0, avg: 9.7, max: 24.0) +[2024-06-10 19:14:28,242][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:14:29,695][46990] Updated weights for policy 0, policy_version 11060 (0.0033) +[2024-06-10 19:14:33,240][46753] Fps is (10 sec: 39321.6, 60 sec: 43690.7, 300 sec: 43542.5). Total num frames: 181321728. Throughput: 0: 43819.4. Samples: 181424540. Policy #0 lag: (min: 1.0, avg: 8.8, max: 21.0) +[2024-06-10 19:14:33,241][46753] Avg episode reward: [(0, '0.157')] +[2024-06-10 19:14:33,740][46990] Updated weights for policy 0, policy_version 11070 (0.0031) +[2024-06-10 19:14:37,015][46990] Updated weights for policy 0, policy_version 11080 (0.0034) +[2024-06-10 19:14:38,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 181567488. Throughput: 0: 43627.0. Samples: 181686520. Policy #0 lag: (min: 1.0, avg: 8.8, max: 21.0) +[2024-06-10 19:14:38,240][46753] Avg episode reward: [(0, '0.151')] +[2024-06-10 19:14:41,446][46990] Updated weights for policy 0, policy_version 11090 (0.0042) +[2024-06-10 19:14:43,239][46753] Fps is (10 sec: 49152.1, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 181813248. Throughput: 0: 43590.6. Samples: 181942020. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 19:14:43,241][46753] Avg episode reward: [(0, '0.157')] +[2024-06-10 19:14:44,564][46990] Updated weights for policy 0, policy_version 11100 (0.0039) +[2024-06-10 19:14:46,211][46970] Signal inference workers to stop experience collection... (2550 times) +[2024-06-10 19:14:46,255][46990] InferenceWorker_p0-w0: stopping experience collection (2550 times) +[2024-06-10 19:14:46,263][46970] Signal inference workers to resume experience collection... (2550 times) +[2024-06-10 19:14:46,274][46990] InferenceWorker_p0-w0: resuming experience collection (2550 times) +[2024-06-10 19:14:48,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43967.0, 300 sec: 43598.1). Total num frames: 181993472. Throughput: 0: 43800.1. Samples: 182087180. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 19:14:48,240][46753] Avg episode reward: [(0, '0.147')] +[2024-06-10 19:14:48,521][46990] Updated weights for policy 0, policy_version 11110 (0.0032) +[2024-06-10 19:14:52,265][46990] Updated weights for policy 0, policy_version 11120 (0.0033) +[2024-06-10 19:14:53,244][46753] Fps is (10 sec: 42579.6, 60 sec: 43960.4, 300 sec: 43819.6). 
Total num frames: 182239232. Throughput: 0: 43901.4. Samples: 182351560. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 19:14:53,245][46753] Avg episode reward: [(0, '0.157')] +[2024-06-10 19:14:56,173][46990] Updated weights for policy 0, policy_version 11130 (0.0041) +[2024-06-10 19:14:58,239][46753] Fps is (10 sec: 49152.0, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 182484992. Throughput: 0: 43817.0. Samples: 182605620. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 19:14:58,240][46753] Avg episode reward: [(0, '0.158')] +[2024-06-10 19:14:59,602][46990] Updated weights for policy 0, policy_version 11140 (0.0032) +[2024-06-10 19:15:03,239][46753] Fps is (10 sec: 39339.4, 60 sec: 43417.7, 300 sec: 43542.6). Total num frames: 182632448. Throughput: 0: 43806.8. Samples: 182741780. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 19:15:03,240][46753] Avg episode reward: [(0, '0.149')] +[2024-06-10 19:15:03,669][46990] Updated weights for policy 0, policy_version 11150 (0.0032) +[2024-06-10 19:15:06,762][46990] Updated weights for policy 0, policy_version 11160 (0.0022) +[2024-06-10 19:15:08,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 182894592. Throughput: 0: 43596.1. Samples: 182997940. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 19:15:08,240][46753] Avg episode reward: [(0, '0.157')] +[2024-06-10 19:15:11,471][46990] Updated weights for policy 0, policy_version 11170 (0.0043) +[2024-06-10 19:15:13,239][46753] Fps is (10 sec: 49151.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 183123968. Throughput: 0: 43744.4. Samples: 183261020. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 19:15:13,240][46753] Avg episode reward: [(0, '0.162')] +[2024-06-10 19:15:14,728][46990] Updated weights for policy 0, policy_version 11180 (0.0029) +[2024-06-10 19:15:18,239][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 183304192. Throughput: 0: 43869.4. Samples: 183398660. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:15:18,240][46753] Avg episode reward: [(0, '0.182')] +[2024-06-10 19:15:18,241][46970] Saving new best policy, reward=0.182! +[2024-06-10 19:15:18,634][46990] Updated weights for policy 0, policy_version 11190 (0.0028) +[2024-06-10 19:15:22,387][46990] Updated weights for policy 0, policy_version 11200 (0.0034) +[2024-06-10 19:15:23,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.8, 300 sec: 43765.2). Total num frames: 183549952. Throughput: 0: 43825.4. Samples: 183658660. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:15:23,240][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:15:26,014][46990] Updated weights for policy 0, policy_version 11210 (0.0034) +[2024-06-10 19:15:28,239][46753] Fps is (10 sec: 47514.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 183779328. Throughput: 0: 43886.3. Samples: 183916900. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:15:28,240][46753] Avg episode reward: [(0, '0.162')] +[2024-06-10 19:15:29,730][46990] Updated weights for policy 0, policy_version 11220 (0.0039) +[2024-06-10 19:15:33,240][46753] Fps is (10 sec: 40958.7, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 183959552. Throughput: 0: 43702.9. Samples: 184053820. 
Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:15:33,244][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:15:33,780][46990] Updated weights for policy 0, policy_version 11230 (0.0038) +[2024-06-10 19:15:36,909][46990] Updated weights for policy 0, policy_version 11240 (0.0041) +[2024-06-10 19:15:38,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 184205312. Throughput: 0: 43575.8. Samples: 184312280. Policy #0 lag: (min: 1.0, avg: 9.7, max: 24.0) +[2024-06-10 19:15:38,240][46753] Avg episode reward: [(0, '0.155')] +[2024-06-10 19:15:41,224][46990] Updated weights for policy 0, policy_version 11250 (0.0032) +[2024-06-10 19:15:43,239][46753] Fps is (10 sec: 47514.7, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 184434688. Throughput: 0: 43604.4. Samples: 184567820. Policy #0 lag: (min: 0.0, avg: 13.0, max: 26.0) +[2024-06-10 19:15:43,240][46753] Avg episode reward: [(0, '0.161')] +[2024-06-10 19:15:44,600][46990] Updated weights for policy 0, policy_version 11260 (0.0043) +[2024-06-10 19:15:48,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 184614912. Throughput: 0: 43626.7. Samples: 184704980. Policy #0 lag: (min: 0.0, avg: 13.0, max: 26.0) +[2024-06-10 19:15:48,240][46753] Avg episode reward: [(0, '0.152')] +[2024-06-10 19:15:48,811][46990] Updated weights for policy 0, policy_version 11270 (0.0044) +[2024-06-10 19:15:52,214][46990] Updated weights for policy 0, policy_version 11280 (0.0033) +[2024-06-10 19:15:53,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43693.9, 300 sec: 43764.7). Total num frames: 184860672. Throughput: 0: 43831.0. Samples: 184970340. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:15:53,240][46753] Avg episode reward: [(0, '0.142')] +[2024-06-10 19:15:56,283][46990] Updated weights for policy 0, policy_version 11290 (0.0032) +[2024-06-10 19:15:56,970][46970] Signal inference workers to stop experience collection... (2600 times) +[2024-06-10 19:15:56,976][46970] Signal inference workers to resume experience collection... (2600 times) +[2024-06-10 19:15:57,008][46990] InferenceWorker_p0-w0: stopping experience collection (2600 times) +[2024-06-10 19:15:57,008][46990] InferenceWorker_p0-w0: resuming experience collection (2600 times) +[2024-06-10 19:15:58,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43417.6, 300 sec: 43765.4). Total num frames: 185090048. Throughput: 0: 43724.5. Samples: 185228620. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:15:58,240][46753] Avg episode reward: [(0, '0.162')] +[2024-06-10 19:15:59,720][46990] Updated weights for policy 0, policy_version 11300 (0.0048) +[2024-06-10 19:16:03,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 185270272. Throughput: 0: 43554.1. Samples: 185358600. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 19:16:03,248][46753] Avg episode reward: [(0, '0.153')] +[2024-06-10 19:16:03,808][46990] Updated weights for policy 0, policy_version 11310 (0.0029) +[2024-06-10 19:16:06,975][46990] Updated weights for policy 0, policy_version 11320 (0.0032) +[2024-06-10 19:16:08,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 185499648. Throughput: 0: 43661.4. Samples: 185623420. 
Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 19:16:08,240][46753] Avg episode reward: [(0, '0.148')] +[2024-06-10 19:16:11,227][46990] Updated weights for policy 0, policy_version 11330 (0.0035) +[2024-06-10 19:16:13,239][46753] Fps is (10 sec: 47514.2, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 185745408. Throughput: 0: 43684.4. Samples: 185882700. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:16:13,240][46753] Avg episode reward: [(0, '0.157')] +[2024-06-10 19:16:14,685][46990] Updated weights for policy 0, policy_version 11340 (0.0039) +[2024-06-10 19:16:18,244][46753] Fps is (10 sec: 42578.9, 60 sec: 43687.4, 300 sec: 43597.5). Total num frames: 185925632. Throughput: 0: 43642.1. Samples: 186017900. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:16:18,244][46753] Avg episode reward: [(0, '0.154')] +[2024-06-10 19:16:18,598][46990] Updated weights for policy 0, policy_version 11350 (0.0035) +[2024-06-10 19:16:22,084][46990] Updated weights for policy 0, policy_version 11360 (0.0030) +[2024-06-10 19:16:23,241][46753] Fps is (10 sec: 42590.8, 60 sec: 43689.3, 300 sec: 43764.5). Total num frames: 186171392. Throughput: 0: 43775.6. Samples: 186282260. Policy #0 lag: (min: 0.0, avg: 9.0, max: 20.0) +[2024-06-10 19:16:23,242][46753] Avg episode reward: [(0, '0.157')] +[2024-06-10 19:16:23,260][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000011363_186171392.pth... +[2024-06-10 19:16:23,337][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000010722_175669248.pth +[2024-06-10 19:16:26,356][46990] Updated weights for policy 0, policy_version 11370 (0.0038) +[2024-06-10 19:16:28,239][46753] Fps is (10 sec: 47534.6, 60 sec: 43690.6, 300 sec: 43876.5). Total num frames: 186400768. Throughput: 0: 43714.6. Samples: 186534980. Policy #0 lag: (min: 1.0, avg: 12.5, max: 24.0) +[2024-06-10 19:16:28,240][46753] Avg episode reward: [(0, '0.171')] +[2024-06-10 19:16:29,869][46990] Updated weights for policy 0, policy_version 11380 (0.0037) +[2024-06-10 19:16:33,239][46753] Fps is (10 sec: 40967.5, 60 sec: 43690.9, 300 sec: 43653.7). Total num frames: 186580992. Throughput: 0: 43491.1. Samples: 186662080. Policy #0 lag: (min: 1.0, avg: 12.5, max: 24.0) +[2024-06-10 19:16:33,240][46753] Avg episode reward: [(0, '0.159')] +[2024-06-10 19:16:33,617][46990] Updated weights for policy 0, policy_version 11390 (0.0044) +[2024-06-10 19:16:37,442][46990] Updated weights for policy 0, policy_version 11400 (0.0026) +[2024-06-10 19:16:38,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 186810368. Throughput: 0: 43590.7. Samples: 186931920. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 19:16:38,240][46753] Avg episode reward: [(0, '0.169')] +[2024-06-10 19:16:41,293][46990] Updated weights for policy 0, policy_version 11410 (0.0029) +[2024-06-10 19:16:43,240][46753] Fps is (10 sec: 45873.8, 60 sec: 43417.4, 300 sec: 43875.7). Total num frames: 187039744. Throughput: 0: 43497.0. Samples: 187186000. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 19:16:43,240][46753] Avg episode reward: [(0, '0.155')] +[2024-06-10 19:16:45,222][46990] Updated weights for policy 0, policy_version 11420 (0.0037) +[2024-06-10 19:16:48,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 187236352. Throughput: 0: 43573.1. Samples: 187319380. 
Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 19:16:48,240][46753] Avg episode reward: [(0, '0.154')] +[2024-06-10 19:16:48,851][46990] Updated weights for policy 0, policy_version 11430 (0.0040) +[2024-06-10 19:16:52,807][46990] Updated weights for policy 0, policy_version 11440 (0.0039) +[2024-06-10 19:16:53,239][46753] Fps is (10 sec: 42599.7, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 187465728. Throughput: 0: 43514.6. Samples: 187581580. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 19:16:53,240][46753] Avg episode reward: [(0, '0.158')] +[2024-06-10 19:16:56,420][46990] Updated weights for policy 0, policy_version 11450 (0.0039) +[2024-06-10 19:16:58,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43144.6, 300 sec: 43820.3). Total num frames: 187678720. Throughput: 0: 43557.8. Samples: 187842800. Policy #0 lag: (min: 0.0, avg: 11.5, max: 23.0) +[2024-06-10 19:16:58,240][46753] Avg episode reward: [(0, '0.163')] +[2024-06-10 19:17:00,208][46990] Updated weights for policy 0, policy_version 11460 (0.0042) +[2024-06-10 19:17:03,240][46753] Fps is (10 sec: 42596.7, 60 sec: 43690.5, 300 sec: 43598.0). Total num frames: 187891712. Throughput: 0: 43318.6. Samples: 187967060. Policy #0 lag: (min: 0.0, avg: 11.5, max: 23.0) +[2024-06-10 19:17:03,240][46753] Avg episode reward: [(0, '0.179')] +[2024-06-10 19:17:03,808][46990] Updated weights for policy 0, policy_version 11470 (0.0030) +[2024-06-10 19:17:07,915][46990] Updated weights for policy 0, policy_version 11480 (0.0034) +[2024-06-10 19:17:08,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 188104704. Throughput: 0: 43282.1. Samples: 188229880. Policy #0 lag: (min: 1.0, avg: 9.3, max: 20.0) +[2024-06-10 19:17:08,240][46753] Avg episode reward: [(0, '0.164')] +[2024-06-10 19:17:11,000][46990] Updated weights for policy 0, policy_version 11490 (0.0032) +[2024-06-10 19:17:13,016][46970] Signal inference workers to stop experience collection... (2650 times) +[2024-06-10 19:17:13,016][46970] Signal inference workers to resume experience collection... (2650 times) +[2024-06-10 19:17:13,042][46990] InferenceWorker_p0-w0: stopping experience collection (2650 times) +[2024-06-10 19:17:13,043][46990] InferenceWorker_p0-w0: resuming experience collection (2650 times) +[2024-06-10 19:17:13,239][46753] Fps is (10 sec: 45877.3, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 188350464. Throughput: 0: 43608.6. Samples: 188497360. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:17:13,240][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:17:15,418][46990] Updated weights for policy 0, policy_version 11500 (0.0039) +[2024-06-10 19:17:18,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43693.9, 300 sec: 43653.6). Total num frames: 188547072. Throughput: 0: 43730.2. Samples: 188629940. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:17:18,240][46753] Avg episode reward: [(0, '0.158')] +[2024-06-10 19:17:18,672][46990] Updated weights for policy 0, policy_version 11510 (0.0037) +[2024-06-10 19:17:22,612][46990] Updated weights for policy 0, policy_version 11520 (0.0041) +[2024-06-10 19:17:23,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43418.9, 300 sec: 43653.7). Total num frames: 188776448. Throughput: 0: 43561.2. Samples: 188892180. 
Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:17:23,240][46753] Avg episode reward: [(0, '0.173')] +[2024-06-10 19:17:26,335][46990] Updated weights for policy 0, policy_version 11530 (0.0028) +[2024-06-10 19:17:28,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43144.6, 300 sec: 43820.2). Total num frames: 188989440. Throughput: 0: 43839.8. Samples: 189158780. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:17:28,240][46753] Avg episode reward: [(0, '0.166')] +[2024-06-10 19:17:29,899][46990] Updated weights for policy 0, policy_version 11540 (0.0030) +[2024-06-10 19:17:33,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 189202432. Throughput: 0: 43621.2. Samples: 189282340. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 19:17:33,242][46753] Avg episode reward: [(0, '0.150')] +[2024-06-10 19:17:33,411][46990] Updated weights for policy 0, policy_version 11550 (0.0041) +[2024-06-10 19:17:37,698][46990] Updated weights for policy 0, policy_version 11560 (0.0028) +[2024-06-10 19:17:38,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 189415424. Throughput: 0: 43716.1. Samples: 189548800. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 19:17:38,240][46753] Avg episode reward: [(0, '0.177')] +[2024-06-10 19:17:41,081][46990] Updated weights for policy 0, policy_version 11570 (0.0030) +[2024-06-10 19:17:43,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43417.8, 300 sec: 43765.4). Total num frames: 189644800. Throughput: 0: 43716.4. Samples: 189810040. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 19:17:43,240][46753] Avg episode reward: [(0, '0.169')] +[2024-06-10 19:17:45,448][46990] Updated weights for policy 0, policy_version 11580 (0.0032) +[2024-06-10 19:17:48,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 189857792. Throughput: 0: 43859.5. Samples: 189940720. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:17:48,240][46753] Avg episode reward: [(0, '0.173')] +[2024-06-10 19:17:48,685][46990] Updated weights for policy 0, policy_version 11590 (0.0035) +[2024-06-10 19:17:52,727][46990] Updated weights for policy 0, policy_version 11600 (0.0040) +[2024-06-10 19:17:53,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 190087168. Throughput: 0: 43861.9. Samples: 190203660. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:17:53,240][46753] Avg episode reward: [(0, '0.156')] +[2024-06-10 19:17:56,209][46990] Updated weights for policy 0, policy_version 11610 (0.0037) +[2024-06-10 19:17:58,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43144.5, 300 sec: 43653.6). Total num frames: 190267392. Throughput: 0: 43714.1. Samples: 190464500. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:17:58,240][46753] Avg episode reward: [(0, '0.161')] +[2024-06-10 19:17:59,980][46990] Updated weights for policy 0, policy_version 11620 (0.0039) +[2024-06-10 19:18:03,244][46753] Fps is (10 sec: 42578.9, 60 sec: 43687.6, 300 sec: 43708.5). Total num frames: 190513152. Throughput: 0: 43532.9. Samples: 190589120. 
Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:18:03,245][46753] Avg episode reward: [(0, '0.158')] +[2024-06-10 19:18:03,542][46990] Updated weights for policy 0, policy_version 11630 (0.0031) +[2024-06-10 19:18:07,714][46990] Updated weights for policy 0, policy_version 11640 (0.0028) +[2024-06-10 19:18:08,240][46753] Fps is (10 sec: 47513.1, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 190742528. Throughput: 0: 43463.0. Samples: 190848020. Policy #0 lag: (min: 0.0, avg: 9.2, max: 19.0) +[2024-06-10 19:18:08,240][46753] Avg episode reward: [(0, '0.172')] +[2024-06-10 19:18:11,298][46990] Updated weights for policy 0, policy_version 11650 (0.0036) +[2024-06-10 19:18:13,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43417.5, 300 sec: 43820.2). Total num frames: 190955520. Throughput: 0: 43548.4. Samples: 191118460. Policy #0 lag: (min: 0.0, avg: 9.2, max: 19.0) +[2024-06-10 19:18:13,240][46753] Avg episode reward: [(0, '0.167')] +[2024-06-10 19:18:15,013][46990] Updated weights for policy 0, policy_version 11660 (0.0034) +[2024-06-10 19:18:18,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 191168512. Throughput: 0: 43682.7. Samples: 191248060. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:18:18,240][46753] Avg episode reward: [(0, '0.153')] +[2024-06-10 19:18:18,619][46990] Updated weights for policy 0, policy_version 11670 (0.0032) +[2024-06-10 19:18:22,322][46990] Updated weights for policy 0, policy_version 11680 (0.0031) +[2024-06-10 19:18:23,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 191397888. Throughput: 0: 43665.1. Samples: 191513740. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:18:23,240][46753] Avg episode reward: [(0, '0.170')] +[2024-06-10 19:18:23,377][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000011683_191414272.pth... +[2024-06-10 19:18:23,439][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000011043_180928512.pth +[2024-06-10 19:18:26,265][46990] Updated weights for policy 0, policy_version 11690 (0.0045) +[2024-06-10 19:18:28,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43144.6, 300 sec: 43653.7). Total num frames: 191578112. Throughput: 0: 43664.1. Samples: 191774920. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 19:18:28,240][46753] Avg episode reward: [(0, '0.165')] +[2024-06-10 19:18:29,916][46990] Updated weights for policy 0, policy_version 11700 (0.0025) +[2024-06-10 19:18:33,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 191823872. Throughput: 0: 43573.3. Samples: 191901520. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 19:18:33,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:18:33,349][46970] Saving new best policy, reward=0.184! +[2024-06-10 19:18:33,554][46990] Updated weights for policy 0, policy_version 11710 (0.0030) +[2024-06-10 19:18:37,479][46990] Updated weights for policy 0, policy_version 11720 (0.0043) +[2024-06-10 19:18:38,239][46753] Fps is (10 sec: 49151.5, 60 sec: 44236.7, 300 sec: 43709.2). Total num frames: 192069632. Throughput: 0: 43626.2. Samples: 192166840. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:18:38,240][46753] Avg episode reward: [(0, '0.168')] +[2024-06-10 19:18:41,225][46990] Updated weights for policy 0, policy_version 11730 (0.0030) +[2024-06-10 19:18:43,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43690.7, 300 sec: 43765.4). Total num frames: 192266240. Throughput: 0: 43814.3. Samples: 192436140. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:18:43,240][46753] Avg episode reward: [(0, '0.171')] +[2024-06-10 19:18:43,244][46970] Signal inference workers to stop experience collection... (2700 times) +[2024-06-10 19:18:43,244][46970] Signal inference workers to resume experience collection... (2700 times) +[2024-06-10 19:18:43,261][46990] InferenceWorker_p0-w0: stopping experience collection (2700 times) +[2024-06-10 19:18:43,261][46990] InferenceWorker_p0-w0: resuming experience collection (2700 times) +[2024-06-10 19:18:44,747][46990] Updated weights for policy 0, policy_version 11740 (0.0031) +[2024-06-10 19:18:48,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 192479232. Throughput: 0: 43902.6. Samples: 192564540. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:18:48,240][46753] Avg episode reward: [(0, '0.167')] +[2024-06-10 19:18:48,585][46990] Updated weights for policy 0, policy_version 11750 (0.0037) +[2024-06-10 19:18:52,348][46990] Updated weights for policy 0, policy_version 11760 (0.0050) +[2024-06-10 19:18:53,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 192724992. Throughput: 0: 43879.7. Samples: 192822600. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 19:18:53,240][46753] Avg episode reward: [(0, '0.165')] +[2024-06-10 19:18:56,097][46990] Updated weights for policy 0, policy_version 11770 (0.0041) +[2024-06-10 19:18:58,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 192888832. Throughput: 0: 43734.3. Samples: 193086500. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 19:18:58,240][46753] Avg episode reward: [(0, '0.166')] +[2024-06-10 19:18:59,768][46990] Updated weights for policy 0, policy_version 11780 (0.0047) +[2024-06-10 19:19:03,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43967.0, 300 sec: 43709.2). Total num frames: 193150976. Throughput: 0: 43624.0. Samples: 193211140. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:19:03,242][46753] Avg episode reward: [(0, '0.178')] +[2024-06-10 19:19:03,591][46990] Updated weights for policy 0, policy_version 11790 (0.0034) +[2024-06-10 19:19:07,105][46990] Updated weights for policy 0, policy_version 11800 (0.0034) +[2024-06-10 19:19:08,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 193363968. Throughput: 0: 43571.7. Samples: 193474460. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:19:08,240][46753] Avg episode reward: [(0, '0.164')] +[2024-06-10 19:19:11,383][46990] Updated weights for policy 0, policy_version 11810 (0.0036) +[2024-06-10 19:19:13,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 193576960. Throughput: 0: 43820.8. Samples: 193746860. 
Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:19:13,240][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:19:14,739][46990] Updated weights for policy 0, policy_version 11820 (0.0036) +[2024-06-10 19:19:18,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 193789952. Throughput: 0: 43915.6. Samples: 193877720. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:19:18,240][46753] Avg episode reward: [(0, '0.163')] +[2024-06-10 19:19:19,028][46990] Updated weights for policy 0, policy_version 11830 (0.0033) +[2024-06-10 19:19:22,251][46990] Updated weights for policy 0, policy_version 11840 (0.0022) +[2024-06-10 19:19:23,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 194035712. Throughput: 0: 43696.4. Samples: 194133180. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 19:19:23,240][46753] Avg episode reward: [(0, '0.169')] +[2024-06-10 19:19:26,267][46990] Updated weights for policy 0, policy_version 11850 (0.0039) +[2024-06-10 19:19:28,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 194215936. Throughput: 0: 43544.8. Samples: 194395660. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 19:19:28,240][46753] Avg episode reward: [(0, '0.172')] +[2024-06-10 19:19:29,907][46990] Updated weights for policy 0, policy_version 11860 (0.0032) +[2024-06-10 19:19:33,244][46753] Fps is (10 sec: 40942.0, 60 sec: 43687.5, 300 sec: 43653.0). Total num frames: 194445312. Throughput: 0: 43445.1. Samples: 194519760. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 19:19:33,245][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:19:34,047][46990] Updated weights for policy 0, policy_version 11870 (0.0049) +[2024-06-10 19:19:37,151][46990] Updated weights for policy 0, policy_version 11880 (0.0028) +[2024-06-10 19:19:38,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 194674688. Throughput: 0: 43678.6. Samples: 194788140. Policy #0 lag: (min: 0.0, avg: 11.3, max: 24.0) +[2024-06-10 19:19:38,240][46753] Avg episode reward: [(0, '0.177')] +[2024-06-10 19:19:41,662][46990] Updated weights for policy 0, policy_version 11890 (0.0029) +[2024-06-10 19:19:43,239][46753] Fps is (10 sec: 42617.8, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 194871296. Throughput: 0: 43714.3. Samples: 195053640. Policy #0 lag: (min: 0.0, avg: 11.3, max: 24.0) +[2024-06-10 19:19:43,240][46753] Avg episode reward: [(0, '0.173')] +[2024-06-10 19:19:44,654][46990] Updated weights for policy 0, policy_version 11900 (0.0029) +[2024-06-10 19:19:48,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.7, 300 sec: 43598.8). Total num frames: 195100672. Throughput: 0: 43916.5. Samples: 195187380. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 19:19:48,240][46753] Avg episode reward: [(0, '0.182')] +[2024-06-10 19:19:48,974][46990] Updated weights for policy 0, policy_version 11910 (0.0042) +[2024-06-10 19:19:52,320][46990] Updated weights for policy 0, policy_version 11920 (0.0034) +[2024-06-10 19:19:53,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 195330048. Throughput: 0: 43815.1. Samples: 195446140. 
Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 19:19:53,240][46753] Avg episode reward: [(0, '0.173')] +[2024-06-10 19:19:56,233][46990] Updated weights for policy 0, policy_version 11930 (0.0037) +[2024-06-10 19:19:58,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 195526656. Throughput: 0: 43502.8. Samples: 195704480. Policy #0 lag: (min: 0.0, avg: 10.1, max: 23.0) +[2024-06-10 19:19:58,240][46753] Avg episode reward: [(0, '0.176')] +[2024-06-10 19:19:59,924][46990] Updated weights for policy 0, policy_version 11940 (0.0023) +[2024-06-10 19:20:03,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 195756032. Throughput: 0: 43411.6. Samples: 195831240. Policy #0 lag: (min: 0.0, avg: 10.1, max: 23.0) +[2024-06-10 19:20:03,240][46753] Avg episode reward: [(0, '0.180')] +[2024-06-10 19:20:03,859][46990] Updated weights for policy 0, policy_version 11950 (0.0036) +[2024-06-10 19:20:07,353][46990] Updated weights for policy 0, policy_version 11960 (0.0029) +[2024-06-10 19:20:08,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 195985408. Throughput: 0: 43728.2. Samples: 196100940. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 19:20:08,240][46753] Avg episode reward: [(0, '0.172')] +[2024-06-10 19:20:08,539][46970] Signal inference workers to stop experience collection... (2750 times) +[2024-06-10 19:20:08,540][46970] Signal inference workers to resume experience collection... (2750 times) +[2024-06-10 19:20:08,582][46990] InferenceWorker_p0-w0: stopping experience collection (2750 times) +[2024-06-10 19:20:08,582][46990] InferenceWorker_p0-w0: resuming experience collection (2750 times) +[2024-06-10 19:20:11,486][46990] Updated weights for policy 0, policy_version 11970 (0.0032) +[2024-06-10 19:20:13,244][46753] Fps is (10 sec: 45854.4, 60 sec: 43960.5, 300 sec: 43764.1). Total num frames: 196214784. Throughput: 0: 43720.5. Samples: 196363280. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 19:20:13,245][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:20:14,776][46990] Updated weights for policy 0, policy_version 11980 (0.0036) +[2024-06-10 19:20:18,243][46753] Fps is (10 sec: 42582.9, 60 sec: 43688.1, 300 sec: 43597.6). Total num frames: 196411392. Throughput: 0: 43978.7. Samples: 196498760. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:20:18,243][46753] Avg episode reward: [(0, '0.168')] +[2024-06-10 19:20:18,654][46990] Updated weights for policy 0, policy_version 11990 (0.0035) +[2024-06-10 19:20:22,413][46990] Updated weights for policy 0, policy_version 12000 (0.0040) +[2024-06-10 19:20:23,240][46753] Fps is (10 sec: 42616.9, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 196640768. Throughput: 0: 43832.8. Samples: 196760620. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:20:23,240][46753] Avg episode reward: [(0, '0.179')] +[2024-06-10 19:20:23,372][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012003_196657152.pth... +[2024-06-10 19:20:23,436][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000011363_186171392.pth +[2024-06-10 19:20:26,391][46990] Updated weights for policy 0, policy_version 12010 (0.0040) +[2024-06-10 19:20:28,239][46753] Fps is (10 sec: 44252.8, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 196853760. Throughput: 0: 43634.6. Samples: 197017200. 
Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 19:20:28,240][46753] Avg episode reward: [(0, '0.179')] +[2024-06-10 19:20:29,789][46990] Updated weights for policy 0, policy_version 12020 (0.0042) +[2024-06-10 19:20:33,240][46753] Fps is (10 sec: 42598.6, 60 sec: 43693.8, 300 sec: 43598.1). Total num frames: 197066752. Throughput: 0: 43524.8. Samples: 197146000. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 19:20:33,240][46753] Avg episode reward: [(0, '0.170')] +[2024-06-10 19:20:33,985][46990] Updated weights for policy 0, policy_version 12030 (0.0032) +[2024-06-10 19:20:37,220][46990] Updated weights for policy 0, policy_version 12040 (0.0035) +[2024-06-10 19:20:38,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 197296128. Throughput: 0: 43707.1. Samples: 197412960. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 19:20:38,240][46753] Avg episode reward: [(0, '0.173')] +[2024-06-10 19:20:41,405][46990] Updated weights for policy 0, policy_version 12050 (0.0040) +[2024-06-10 19:20:43,240][46753] Fps is (10 sec: 45875.2, 60 sec: 44236.7, 300 sec: 43764.7). Total num frames: 197525504. Throughput: 0: 43752.3. Samples: 197673340. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 19:20:43,240][46753] Avg episode reward: [(0, '0.194')] +[2024-06-10 19:20:43,252][46970] Saving new best policy, reward=0.194! +[2024-06-10 19:20:45,019][46990] Updated weights for policy 0, policy_version 12060 (0.0038) +[2024-06-10 19:20:48,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 197722112. Throughput: 0: 43832.7. Samples: 197803720. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 19:20:48,240][46753] Avg episode reward: [(0, '0.178')] +[2024-06-10 19:20:48,861][46990] Updated weights for policy 0, policy_version 12070 (0.0030) +[2024-06-10 19:20:52,644][46990] Updated weights for policy 0, policy_version 12080 (0.0042) +[2024-06-10 19:20:53,240][46753] Fps is (10 sec: 44236.8, 60 sec: 43963.6, 300 sec: 43653.6). Total num frames: 197967872. Throughput: 0: 43693.1. Samples: 198067140. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:20:53,240][46753] Avg episode reward: [(0, '0.177')] +[2024-06-10 19:20:56,733][46990] Updated weights for policy 0, policy_version 12090 (0.0039) +[2024-06-10 19:20:58,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 198164480. Throughput: 0: 43489.2. Samples: 198320100. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:20:58,240][46753] Avg episode reward: [(0, '0.190')] +[2024-06-10 19:21:00,079][46990] Updated weights for policy 0, policy_version 12100 (0.0032) +[2024-06-10 19:21:03,240][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 198377472. Throughput: 0: 43434.0. Samples: 198453140. Policy #0 lag: (min: 1.0, avg: 11.7, max: 20.0) +[2024-06-10 19:21:03,240][46753] Avg episode reward: [(0, '0.166')] +[2024-06-10 19:21:03,963][46990] Updated weights for policy 0, policy_version 12110 (0.0045) +[2024-06-10 19:21:07,573][46990] Updated weights for policy 0, policy_version 12120 (0.0034) +[2024-06-10 19:21:08,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 198623232. Throughput: 0: 43534.8. Samples: 198719680. 
Policy #0 lag: (min: 1.0, avg: 11.7, max: 20.0) +[2024-06-10 19:21:08,240][46753] Avg episode reward: [(0, '0.176')] +[2024-06-10 19:21:11,245][46990] Updated weights for policy 0, policy_version 12130 (0.0035) +[2024-06-10 19:21:13,240][46753] Fps is (10 sec: 45875.2, 60 sec: 43693.9, 300 sec: 43765.4). Total num frames: 198836224. Throughput: 0: 43755.8. Samples: 198986220. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 19:21:13,240][46753] Avg episode reward: [(0, '0.165')] +[2024-06-10 19:21:14,808][46990] Updated weights for policy 0, policy_version 12140 (0.0032) +[2024-06-10 19:21:18,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43693.3, 300 sec: 43598.4). Total num frames: 199032832. Throughput: 0: 43831.2. Samples: 199118400. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 19:21:18,248][46753] Avg episode reward: [(0, '0.164')] +[2024-06-10 19:21:18,996][46990] Updated weights for policy 0, policy_version 12150 (0.0043) +[2024-06-10 19:21:22,609][46970] Signal inference workers to stop experience collection... (2800 times) +[2024-06-10 19:21:22,638][46990] InferenceWorker_p0-w0: stopping experience collection (2800 times) +[2024-06-10 19:21:22,660][46970] Signal inference workers to resume experience collection... (2800 times) +[2024-06-10 19:21:22,661][46990] InferenceWorker_p0-w0: resuming experience collection (2800 times) +[2024-06-10 19:21:22,667][46990] Updated weights for policy 0, policy_version 12160 (0.0040) +[2024-06-10 19:21:23,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43963.9, 300 sec: 43653.7). Total num frames: 199278592. Throughput: 0: 43691.6. Samples: 199379080. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:21:23,240][46753] Avg episode reward: [(0, '0.165')] +[2024-06-10 19:21:26,687][46990] Updated weights for policy 0, policy_version 12170 (0.0032) +[2024-06-10 19:21:28,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 199491584. Throughput: 0: 43556.6. Samples: 199633380. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:21:28,240][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:21:29,978][46990] Updated weights for policy 0, policy_version 12180 (0.0024) +[2024-06-10 19:21:33,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 199688192. Throughput: 0: 43652.5. Samples: 199768080. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 19:21:33,240][46753] Avg episode reward: [(0, '0.186')] +[2024-06-10 19:21:33,815][46990] Updated weights for policy 0, policy_version 12190 (0.0045) +[2024-06-10 19:21:37,368][46990] Updated weights for policy 0, policy_version 12200 (0.0042) +[2024-06-10 19:21:38,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 199917568. Throughput: 0: 43696.6. Samples: 200033480. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 19:21:38,240][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:21:41,388][46990] Updated weights for policy 0, policy_version 12210 (0.0037) +[2024-06-10 19:21:43,239][46753] Fps is (10 sec: 45875.9, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 200146944. Throughput: 0: 43766.8. Samples: 200289600. 
Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 19:21:43,240][46753] Avg episode reward: [(0, '0.180')] +[2024-06-10 19:21:45,008][46990] Updated weights for policy 0, policy_version 12220 (0.0038) +[2024-06-10 19:21:48,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 200327168. Throughput: 0: 43721.9. Samples: 200420620. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 19:21:48,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:21:49,120][46990] Updated weights for policy 0, policy_version 12230 (0.0040) +[2024-06-10 19:21:52,365][46990] Updated weights for policy 0, policy_version 12240 (0.0049) +[2024-06-10 19:21:53,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 200572928. Throughput: 0: 43664.9. Samples: 200684600. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:21:53,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:21:56,770][46990] Updated weights for policy 0, policy_version 12250 (0.0024) +[2024-06-10 19:21:58,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 200785920. Throughput: 0: 43422.3. Samples: 200940220. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 19:21:58,240][46753] Avg episode reward: [(0, '0.172')] +[2024-06-10 19:22:00,150][46990] Updated weights for policy 0, policy_version 12260 (0.0029) +[2024-06-10 19:22:03,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43144.7, 300 sec: 43598.1). Total num frames: 200966144. Throughput: 0: 43452.9. Samples: 201073780. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 19:22:03,240][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:22:03,915][46990] Updated weights for policy 0, policy_version 12270 (0.0034) +[2024-06-10 19:22:07,308][46990] Updated weights for policy 0, policy_version 12280 (0.0030) +[2024-06-10 19:22:08,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43144.5, 300 sec: 43598.1). Total num frames: 201211904. Throughput: 0: 43499.5. Samples: 201336560. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 19:22:08,240][46753] Avg episode reward: [(0, '0.180')] +[2024-06-10 19:22:11,621][46990] Updated weights for policy 0, policy_version 12290 (0.0032) +[2024-06-10 19:22:13,240][46753] Fps is (10 sec: 47512.6, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 201441280. Throughput: 0: 43855.4. Samples: 201606880. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 19:22:13,240][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:22:14,557][46990] Updated weights for policy 0, policy_version 12300 (0.0031) +[2024-06-10 19:22:18,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 201637888. Throughput: 0: 43599.3. Samples: 201730040. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 19:22:18,240][46753] Avg episode reward: [(0, '0.186')] +[2024-06-10 19:22:19,369][46990] Updated weights for policy 0, policy_version 12310 (0.0037) +[2024-06-10 19:22:22,118][46990] Updated weights for policy 0, policy_version 12320 (0.0038) +[2024-06-10 19:22:23,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43144.5, 300 sec: 43653.6). Total num frames: 201867264. Throughput: 0: 43517.8. Samples: 201991780. 
Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 19:22:23,240][46753] Avg episode reward: [(0, '0.189')] +[2024-06-10 19:22:23,248][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012322_201883648.pth... +[2024-06-10 19:22:23,310][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000011683_191414272.pth +[2024-06-10 19:22:26,602][46990] Updated weights for policy 0, policy_version 12330 (0.0032) +[2024-06-10 19:22:28,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 202096640. Throughput: 0: 43778.7. Samples: 202259640. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:22:28,240][46753] Avg episode reward: [(0, '0.177')] +[2024-06-10 19:22:29,781][46990] Updated weights for policy 0, policy_version 12340 (0.0050) +[2024-06-10 19:22:33,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 202309632. Throughput: 0: 43808.9. Samples: 202392020. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:22:33,240][46753] Avg episode reward: [(0, '0.167')] +[2024-06-10 19:22:33,682][46990] Updated weights for policy 0, policy_version 12350 (0.0039) +[2024-06-10 19:22:36,981][46990] Updated weights for policy 0, policy_version 12360 (0.0033) +[2024-06-10 19:22:38,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 202522624. Throughput: 0: 43780.8. Samples: 202654740. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 19:22:38,240][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:22:41,315][46990] Updated weights for policy 0, policy_version 12370 (0.0035) +[2024-06-10 19:22:43,070][46970] Signal inference workers to stop experience collection... (2850 times) +[2024-06-10 19:22:43,070][46970] Signal inference workers to resume experience collection... (2850 times) +[2024-06-10 19:22:43,082][46990] InferenceWorker_p0-w0: stopping experience collection (2850 times) +[2024-06-10 19:22:43,082][46990] InferenceWorker_p0-w0: resuming experience collection (2850 times) +[2024-06-10 19:22:43,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 202752000. Throughput: 0: 44030.6. Samples: 202921600. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 19:22:43,240][46753] Avg episode reward: [(0, '0.169')] +[2024-06-10 19:22:44,440][46990] Updated weights for policy 0, policy_version 12380 (0.0041) +[2024-06-10 19:22:48,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 202964992. Throughput: 0: 43856.0. Samples: 203047300. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 19:22:48,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:22:48,981][46990] Updated weights for policy 0, policy_version 12390 (0.0037) +[2024-06-10 19:22:51,918][46990] Updated weights for policy 0, policy_version 12400 (0.0031) +[2024-06-10 19:22:53,244][46753] Fps is (10 sec: 42579.6, 60 sec: 43414.3, 300 sec: 43764.1). Total num frames: 203177984. Throughput: 0: 43754.7. Samples: 203305720. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 19:22:53,244][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:22:56,614][46990] Updated weights for policy 0, policy_version 12410 (0.0027) +[2024-06-10 19:22:58,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43709.9). Total num frames: 203407360. Throughput: 0: 43667.3. Samples: 203571900. 
Policy #0 lag: (min: 0.0, avg: 12.3, max: 23.0) +[2024-06-10 19:22:58,240][46753] Avg episode reward: [(0, '0.169')] +[2024-06-10 19:22:59,369][46990] Updated weights for policy 0, policy_version 12420 (0.0032) +[2024-06-10 19:23:03,239][46753] Fps is (10 sec: 44257.0, 60 sec: 44236.8, 300 sec: 43653.7). Total num frames: 203620352. Throughput: 0: 43921.8. Samples: 203706520. Policy #0 lag: (min: 0.0, avg: 12.3, max: 23.0) +[2024-06-10 19:23:03,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:23:03,708][46990] Updated weights for policy 0, policy_version 12430 (0.0044) +[2024-06-10 19:23:06,623][46990] Updated weights for policy 0, policy_version 12440 (0.0022) +[2024-06-10 19:23:08,240][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 203833344. Throughput: 0: 43803.0. Samples: 203962920. Policy #0 lag: (min: 0.0, avg: 8.6, max: 19.0) +[2024-06-10 19:23:08,244][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:23:10,882][46990] Updated weights for policy 0, policy_version 12450 (0.0037) +[2024-06-10 19:23:13,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43417.8, 300 sec: 43653.7). Total num frames: 204046336. Throughput: 0: 43871.5. Samples: 204233860. Policy #0 lag: (min: 0.0, avg: 8.6, max: 19.0) +[2024-06-10 19:23:13,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:23:14,265][46990] Updated weights for policy 0, policy_version 12460 (0.0030) +[2024-06-10 19:23:18,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 204275712. Throughput: 0: 43772.5. Samples: 204361780. Policy #0 lag: (min: 2.0, avg: 12.0, max: 24.0) +[2024-06-10 19:23:18,240][46753] Avg episode reward: [(0, '0.176')] +[2024-06-10 19:23:18,757][46990] Updated weights for policy 0, policy_version 12470 (0.0029) +[2024-06-10 19:23:22,082][46990] Updated weights for policy 0, policy_version 12480 (0.0032) +[2024-06-10 19:23:23,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 204488704. Throughput: 0: 43577.7. Samples: 204615740. Policy #0 lag: (min: 0.0, avg: 11.7, max: 22.0) +[2024-06-10 19:23:23,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:23:26,609][46990] Updated weights for policy 0, policy_version 12490 (0.0033) +[2024-06-10 19:23:28,244][46753] Fps is (10 sec: 44216.6, 60 sec: 43687.3, 300 sec: 43708.5). Total num frames: 204718080. Throughput: 0: 43451.7. Samples: 204877120. Policy #0 lag: (min: 0.0, avg: 11.7, max: 22.0) +[2024-06-10 19:23:28,245][46753] Avg episode reward: [(0, '0.171')] +[2024-06-10 19:23:29,315][46990] Updated weights for policy 0, policy_version 12500 (0.0028) +[2024-06-10 19:23:33,244][46753] Fps is (10 sec: 44217.3, 60 sec: 43687.4, 300 sec: 43597.4). Total num frames: 204931072. Throughput: 0: 43609.8. Samples: 205009940. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 19:23:33,244][46753] Avg episode reward: [(0, '0.188')] +[2024-06-10 19:23:33,753][46990] Updated weights for policy 0, policy_version 12510 (0.0043) +[2024-06-10 19:23:36,827][46990] Updated weights for policy 0, policy_version 12520 (0.0035) +[2024-06-10 19:23:38,239][46753] Fps is (10 sec: 40978.7, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 205127680. Throughput: 0: 43513.7. Samples: 205263640. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 19:23:38,240][46753] Avg episode reward: [(0, '0.186')] +[2024-06-10 19:23:40,878][46990] Updated weights for policy 0, policy_version 12530 (0.0034) +[2024-06-10 19:23:43,239][46753] Fps is (10 sec: 44257.0, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 205373440. Throughput: 0: 43547.2. Samples: 205531520. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:23:43,240][46753] Avg episode reward: [(0, '0.188')] +[2024-06-10 19:23:43,950][46970] Signal inference workers to stop experience collection... (2900 times) +[2024-06-10 19:23:43,991][46990] InferenceWorker_p0-w0: stopping experience collection (2900 times) +[2024-06-10 19:23:43,997][46970] Signal inference workers to resume experience collection... (2900 times) +[2024-06-10 19:23:44,009][46990] InferenceWorker_p0-w0: resuming experience collection (2900 times) +[2024-06-10 19:23:44,308][46990] Updated weights for policy 0, policy_version 12540 (0.0034) +[2024-06-10 19:23:48,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 205586432. Throughput: 0: 43536.0. Samples: 205665640. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:23:48,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:23:48,553][46990] Updated weights for policy 0, policy_version 12550 (0.0032) +[2024-06-10 19:23:52,203][46990] Updated weights for policy 0, policy_version 12560 (0.0033) +[2024-06-10 19:23:53,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43693.9, 300 sec: 43764.7). Total num frames: 205799424. Throughput: 0: 43458.7. Samples: 205918560. Policy #0 lag: (min: 0.0, avg: 12.6, max: 23.0) +[2024-06-10 19:23:53,240][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:23:56,377][46990] Updated weights for policy 0, policy_version 12570 (0.0041) +[2024-06-10 19:23:58,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 206012416. Throughput: 0: 43427.0. Samples: 206188080. Policy #0 lag: (min: 0.0, avg: 12.6, max: 23.0) +[2024-06-10 19:23:58,240][46753] Avg episode reward: [(0, '0.187')] +[2024-06-10 19:23:59,323][46990] Updated weights for policy 0, policy_version 12580 (0.0040) +[2024-06-10 19:24:03,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 206241792. Throughput: 0: 43484.4. Samples: 206318580. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 19:24:03,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:24:03,787][46990] Updated weights for policy 0, policy_version 12590 (0.0028) +[2024-06-10 19:24:06,810][46990] Updated weights for policy 0, policy_version 12600 (0.0036) +[2024-06-10 19:24:08,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 206454784. Throughput: 0: 43519.2. Samples: 206574100. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 19:24:08,240][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:24:11,089][46990] Updated weights for policy 0, policy_version 12610 (0.0044) +[2024-06-10 19:24:13,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 206667776. Throughput: 0: 43643.0. Samples: 206840860. 
Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 19:24:13,240][46753] Avg episode reward: [(0, '0.195')] +[2024-06-10 19:24:14,294][46990] Updated weights for policy 0, policy_version 12620 (0.0028) +[2024-06-10 19:24:18,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 206880768. Throughput: 0: 43566.2. Samples: 206970220. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 19:24:18,240][46753] Avg episode reward: [(0, '0.178')] +[2024-06-10 19:24:18,664][46990] Updated weights for policy 0, policy_version 12630 (0.0040) +[2024-06-10 19:24:22,216][46990] Updated weights for policy 0, policy_version 12640 (0.0039) +[2024-06-10 19:24:23,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 207110144. Throughput: 0: 43543.1. Samples: 207223080. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 19:24:23,240][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:24:23,262][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012641_207110144.pth... +[2024-06-10 19:24:23,326][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012003_196657152.pth +[2024-06-10 19:24:26,337][46990] Updated weights for policy 0, policy_version 12650 (0.0035) +[2024-06-10 19:24:28,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43420.9, 300 sec: 43654.3). Total num frames: 207323136. Throughput: 0: 43541.8. Samples: 207490900. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 19:24:28,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:24:29,533][46990] Updated weights for policy 0, policy_version 12660 (0.0042) +[2024-06-10 19:24:33,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43693.9, 300 sec: 43653.6). Total num frames: 207552512. Throughput: 0: 43469.2. Samples: 207621760. Policy #0 lag: (min: 0.0, avg: 11.6, max: 20.0) +[2024-06-10 19:24:33,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:24:33,898][46990] Updated weights for policy 0, policy_version 12670 (0.0044) +[2024-06-10 19:24:37,118][46990] Updated weights for policy 0, policy_version 12680 (0.0036) +[2024-06-10 19:24:38,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 207749120. Throughput: 0: 43655.2. Samples: 207883040. Policy #0 lag: (min: 0.0, avg: 11.6, max: 20.0) +[2024-06-10 19:24:38,240][46753] Avg episode reward: [(0, '0.173')] +[2024-06-10 19:24:41,119][46990] Updated weights for policy 0, policy_version 12690 (0.0047) +[2024-06-10 19:24:43,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 207994880. Throughput: 0: 43514.2. Samples: 208146220. Policy #0 lag: (min: 1.0, avg: 7.6, max: 20.0) +[2024-06-10 19:24:43,240][46753] Avg episode reward: [(0, '0.188')] +[2024-06-10 19:24:44,595][46990] Updated weights for policy 0, policy_version 12700 (0.0025) +[2024-06-10 19:24:48,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 208207872. Throughput: 0: 43644.5. Samples: 208282580. Policy #0 lag: (min: 1.0, avg: 7.6, max: 20.0) +[2024-06-10 19:24:48,240][46753] Avg episode reward: [(0, '0.171')] +[2024-06-10 19:24:48,475][46990] Updated weights for policy 0, policy_version 12710 (0.0037) +[2024-06-10 19:24:51,997][46990] Updated weights for policy 0, policy_version 12720 (0.0044) +[2024-06-10 19:24:53,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 208420864. 
Throughput: 0: 43545.8. Samples: 208533660. Policy #0 lag: (min: 1.0, avg: 11.0, max: 21.0) +[2024-06-10 19:24:53,240][46753] Avg episode reward: [(0, '0.189')] +[2024-06-10 19:24:55,813][46970] Signal inference workers to stop experience collection... (2950 times) +[2024-06-10 19:24:55,814][46970] Signal inference workers to resume experience collection... (2950 times) +[2024-06-10 19:24:55,836][46990] InferenceWorker_p0-w0: stopping experience collection (2950 times) +[2024-06-10 19:24:55,836][46990] InferenceWorker_p0-w0: resuming experience collection (2950 times) +[2024-06-10 19:24:56,242][46990] Updated weights for policy 0, policy_version 12730 (0.0041) +[2024-06-10 19:24:58,239][46753] Fps is (10 sec: 45875.2, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 208666624. Throughput: 0: 43601.9. Samples: 208802940. Policy #0 lag: (min: 1.0, avg: 11.0, max: 21.0) +[2024-06-10 19:24:58,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:24:59,772][46990] Updated weights for policy 0, policy_version 12740 (0.0034) +[2024-06-10 19:25:03,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 208846848. Throughput: 0: 43676.8. Samples: 208935680. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:25:03,240][46753] Avg episode reward: [(0, '0.169')] +[2024-06-10 19:25:03,822][46990] Updated weights for policy 0, policy_version 12750 (0.0034) +[2024-06-10 19:25:07,507][46990] Updated weights for policy 0, policy_version 12760 (0.0025) +[2024-06-10 19:25:08,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43598.8). Total num frames: 209076224. Throughput: 0: 43801.8. Samples: 209194160. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:25:08,240][46753] Avg episode reward: [(0, '0.179')] +[2024-06-10 19:25:11,077][46990] Updated weights for policy 0, policy_version 12770 (0.0043) +[2024-06-10 19:25:13,240][46753] Fps is (10 sec: 47513.1, 60 sec: 44236.8, 300 sec: 43765.2). Total num frames: 209321984. Throughput: 0: 43682.5. Samples: 209456620. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 19:25:13,251][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:25:14,823][46990] Updated weights for policy 0, policy_version 12780 (0.0040) +[2024-06-10 19:25:18,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 209518592. Throughput: 0: 43773.8. Samples: 209591580. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 19:25:18,249][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:25:18,689][46990] Updated weights for policy 0, policy_version 12790 (0.0033) +[2024-06-10 19:25:22,332][46990] Updated weights for policy 0, policy_version 12800 (0.0046) +[2024-06-10 19:25:23,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 209731584. Throughput: 0: 43654.7. Samples: 209847500. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:25:23,240][46753] Avg episode reward: [(0, '0.190')] +[2024-06-10 19:25:26,621][46990] Updated weights for policy 0, policy_version 12810 (0.0038) +[2024-06-10 19:25:28,239][46753] Fps is (10 sec: 45875.5, 60 sec: 44236.7, 300 sec: 43764.7). Total num frames: 209977344. Throughput: 0: 43521.8. Samples: 210104700. 
Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:25:28,240][46753] Avg episode reward: [(0, '0.186')] +[2024-06-10 19:25:29,929][46990] Updated weights for policy 0, policy_version 12820 (0.0037) +[2024-06-10 19:25:33,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 210157568. Throughput: 0: 43528.4. Samples: 210241360. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:25:33,250][46753] Avg episode reward: [(0, '0.187')] +[2024-06-10 19:25:33,932][46990] Updated weights for policy 0, policy_version 12830 (0.0031) +[2024-06-10 19:25:37,974][46990] Updated weights for policy 0, policy_version 12840 (0.0034) +[2024-06-10 19:25:38,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 210386944. Throughput: 0: 43902.3. Samples: 210509260. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:25:38,240][46753] Avg episode reward: [(0, '0.179')] +[2024-06-10 19:25:41,181][46990] Updated weights for policy 0, policy_version 12850 (0.0035) +[2024-06-10 19:25:43,240][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 210616320. Throughput: 0: 43724.8. Samples: 210770560. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:25:43,240][46753] Avg episode reward: [(0, '0.190')] +[2024-06-10 19:25:45,192][46990] Updated weights for policy 0, policy_version 12860 (0.0028) +[2024-06-10 19:25:48,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 210829312. Throughput: 0: 43764.8. Samples: 210905100. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:25:48,240][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:25:48,831][46990] Updated weights for policy 0, policy_version 12870 (0.0034) +[2024-06-10 19:25:52,708][46990] Updated weights for policy 0, policy_version 12880 (0.0034) +[2024-06-10 19:25:53,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 211042304. Throughput: 0: 43712.5. Samples: 211161220. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:25:53,240][46753] Avg episode reward: [(0, '0.176')] +[2024-06-10 19:25:56,102][46990] Updated weights for policy 0, policy_version 12890 (0.0029) +[2024-06-10 19:25:58,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 211271680. Throughput: 0: 43681.0. Samples: 211422260. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:25:58,240][46753] Avg episode reward: [(0, '0.206')] +[2024-06-10 19:25:58,240][46970] Saving new best policy, reward=0.206! +[2024-06-10 19:26:00,394][46990] Updated weights for policy 0, policy_version 12900 (0.0025) +[2024-06-10 19:26:03,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 211484672. Throughput: 0: 43636.0. Samples: 211555200. Policy #0 lag: (min: 0.0, avg: 11.6, max: 20.0) +[2024-06-10 19:26:03,240][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:26:03,645][46990] Updated weights for policy 0, policy_version 12910 (0.0037) +[2024-06-10 19:26:08,091][46990] Updated weights for policy 0, policy_version 12920 (0.0036) +[2024-06-10 19:26:08,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 211681280. Throughput: 0: 43755.6. Samples: 211816500. 
Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 19:26:08,240][46753] Avg episode reward: [(0, '0.183')] +[2024-06-10 19:26:11,205][46970] Signal inference workers to stop experience collection... (3000 times) +[2024-06-10 19:26:11,250][46990] InferenceWorker_p0-w0: stopping experience collection (3000 times) +[2024-06-10 19:26:11,256][46970] Signal inference workers to resume experience collection... (3000 times) +[2024-06-10 19:26:11,269][46990] InferenceWorker_p0-w0: resuming experience collection (3000 times) +[2024-06-10 19:26:11,271][46990] Updated weights for policy 0, policy_version 12930 (0.0036) +[2024-06-10 19:26:13,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 211927040. Throughput: 0: 43785.8. Samples: 212075060. Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 19:26:13,240][46753] Avg episode reward: [(0, '0.194')] +[2024-06-10 19:26:15,665][46990] Updated weights for policy 0, policy_version 12940 (0.0045) +[2024-06-10 19:26:18,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43417.7, 300 sec: 43542.6). Total num frames: 212123648. Throughput: 0: 43659.7. Samples: 212206040. Policy #0 lag: (min: 1.0, avg: 8.8, max: 22.0) +[2024-06-10 19:26:18,240][46753] Avg episode reward: [(0, '0.194')] +[2024-06-10 19:26:18,928][46990] Updated weights for policy 0, policy_version 12950 (0.0028) +[2024-06-10 19:26:23,014][46990] Updated weights for policy 0, policy_version 12960 (0.0039) +[2024-06-10 19:26:23,244][46753] Fps is (10 sec: 40941.2, 60 sec: 43414.3, 300 sec: 43541.9). Total num frames: 212336640. Throughput: 0: 43579.2. Samples: 212470520. Policy #0 lag: (min: 1.0, avg: 8.8, max: 22.0) +[2024-06-10 19:26:23,244][46753] Avg episode reward: [(0, '0.174')] +[2024-06-10 19:26:23,381][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012961_212353024.pth... +[2024-06-10 19:26:23,438][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012322_201883648.pth +[2024-06-10 19:26:26,226][46990] Updated weights for policy 0, policy_version 12970 (0.0032) +[2024-06-10 19:26:28,239][46753] Fps is (10 sec: 45874.5, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 212582400. Throughput: 0: 43423.1. Samples: 212724600. Policy #0 lag: (min: 0.0, avg: 11.9, max: 24.0) +[2024-06-10 19:26:28,240][46753] Avg episode reward: [(0, '0.178')] +[2024-06-10 19:26:30,447][46990] Updated weights for policy 0, policy_version 12980 (0.0039) +[2024-06-10 19:26:33,240][46753] Fps is (10 sec: 44252.8, 60 sec: 43690.0, 300 sec: 43598.0). Total num frames: 212779008. Throughput: 0: 43609.9. Samples: 212867580. Policy #0 lag: (min: 0.0, avg: 11.9, max: 24.0) +[2024-06-10 19:26:33,241][46753] Avg episode reward: [(0, '0.187')] +[2024-06-10 19:26:33,602][46990] Updated weights for policy 0, policy_version 12990 (0.0034) +[2024-06-10 19:26:38,102][46990] Updated weights for policy 0, policy_version 13000 (0.0031) +[2024-06-10 19:26:38,240][46753] Fps is (10 sec: 40959.9, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 212992000. Throughput: 0: 43589.2. Samples: 213122740. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 19:26:38,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:26:41,189][46990] Updated weights for policy 0, policy_version 13010 (0.0036) +[2024-06-10 19:26:43,239][46753] Fps is (10 sec: 45879.0, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 213237760. Throughput: 0: 43512.8. Samples: 213380340. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 19:26:43,240][46753] Avg episode reward: [(0, '0.181')] +[2024-06-10 19:26:45,646][46990] Updated weights for policy 0, policy_version 13020 (0.0039) +[2024-06-10 19:26:48,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 213434368. Throughput: 0: 43651.9. Samples: 213519540. Policy #0 lag: (min: 1.0, avg: 8.0, max: 21.0) +[2024-06-10 19:26:48,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:26:48,863][46990] Updated weights for policy 0, policy_version 13030 (0.0045) +[2024-06-10 19:26:52,893][46990] Updated weights for policy 0, policy_version 13040 (0.0037) +[2024-06-10 19:26:53,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 213647360. Throughput: 0: 43595.4. Samples: 213778300. Policy #0 lag: (min: 1.0, avg: 8.0, max: 21.0) +[2024-06-10 19:26:53,240][46753] Avg episode reward: [(0, '0.196')] +[2024-06-10 19:26:56,174][46990] Updated weights for policy 0, policy_version 13050 (0.0040) +[2024-06-10 19:26:58,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 213893120. Throughput: 0: 43469.7. Samples: 214031200. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 19:26:58,240][46753] Avg episode reward: [(0, '0.184')] +[2024-06-10 19:27:00,149][46990] Updated weights for policy 0, policy_version 13060 (0.0033) +[2024-06-10 19:27:03,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 214106112. Throughput: 0: 43756.0. Samples: 214175060. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 19:27:03,240][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:27:03,469][46990] Updated weights for policy 0, policy_version 13070 (0.0025) +[2024-06-10 19:27:07,823][46990] Updated weights for policy 0, policy_version 13080 (0.0036) +[2024-06-10 19:27:08,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 214302720. Throughput: 0: 43658.6. Samples: 214434960. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 19:27:08,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:27:11,100][46990] Updated weights for policy 0, policy_version 13090 (0.0037) +[2024-06-10 19:27:12,299][46970] Signal inference workers to stop experience collection... (3050 times) +[2024-06-10 19:27:12,330][46990] InferenceWorker_p0-w0: stopping experience collection (3050 times) +[2024-06-10 19:27:12,410][46970] Signal inference workers to resume experience collection... (3050 times) +[2024-06-10 19:27:12,410][46990] InferenceWorker_p0-w0: resuming experience collection (3050 times) +[2024-06-10 19:27:13,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 214548480. Throughput: 0: 43605.0. Samples: 214686820. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 19:27:13,240][46753] Avg episode reward: [(0, '0.197')] +[2024-06-10 19:27:15,302][46990] Updated weights for policy 0, policy_version 13100 (0.0035) +[2024-06-10 19:27:18,244][46753] Fps is (10 sec: 44216.6, 60 sec: 43687.3, 300 sec: 43653.0). Total num frames: 214745088. Throughput: 0: 43535.6. Samples: 214826840. 
Policy #0 lag: (min: 0.0, avg: 7.7, max: 20.0) +[2024-06-10 19:27:18,245][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:27:18,692][46990] Updated weights for policy 0, policy_version 13110 (0.0037) +[2024-06-10 19:27:22,707][46990] Updated weights for policy 0, policy_version 13120 (0.0043) +[2024-06-10 19:27:23,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43693.9, 300 sec: 43598.1). Total num frames: 214958080. Throughput: 0: 43672.0. Samples: 215087980. Policy #0 lag: (min: 0.0, avg: 7.7, max: 20.0) +[2024-06-10 19:27:23,240][46753] Avg episode reward: [(0, '0.197')] +[2024-06-10 19:27:26,026][46990] Updated weights for policy 0, policy_version 13130 (0.0046) +[2024-06-10 19:27:28,239][46753] Fps is (10 sec: 45895.9, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 215203840. Throughput: 0: 43811.1. Samples: 215351840. Policy #0 lag: (min: 0.0, avg: 12.8, max: 26.0) +[2024-06-10 19:27:28,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:27:28,241][46970] Saving new best policy, reward=0.220! +[2024-06-10 19:27:29,851][46990] Updated weights for policy 0, policy_version 13140 (0.0032) +[2024-06-10 19:27:33,239][46753] Fps is (10 sec: 47514.0, 60 sec: 44237.4, 300 sec: 43764.7). Total num frames: 215433216. Throughput: 0: 43774.7. Samples: 215489400. Policy #0 lag: (min: 0.0, avg: 12.8, max: 26.0) +[2024-06-10 19:27:33,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:27:33,378][46990] Updated weights for policy 0, policy_version 13150 (0.0038) +[2024-06-10 19:27:37,720][46990] Updated weights for policy 0, policy_version 13160 (0.0036) +[2024-06-10 19:27:38,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 215613440. Throughput: 0: 43756.1. Samples: 215747320. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:27:38,240][46753] Avg episode reward: [(0, '0.196')] +[2024-06-10 19:27:40,950][46990] Updated weights for policy 0, policy_version 13170 (0.0036) +[2024-06-10 19:27:43,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 215859200. Throughput: 0: 43853.6. Samples: 216004620. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:27:43,240][46753] Avg episode reward: [(0, '0.187')] +[2024-06-10 19:27:45,179][46990] Updated weights for policy 0, policy_version 13180 (0.0033) +[2024-06-10 19:27:48,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.8, 300 sec: 43654.3). Total num frames: 216055808. Throughput: 0: 43627.6. Samples: 216138300. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:27:48,240][46753] Avg episode reward: [(0, '0.197')] +[2024-06-10 19:27:48,554][46990] Updated weights for policy 0, policy_version 13190 (0.0037) +[2024-06-10 19:27:52,651][46990] Updated weights for policy 0, policy_version 13200 (0.0031) +[2024-06-10 19:27:53,239][46753] Fps is (10 sec: 40960.7, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 216268800. Throughput: 0: 43720.5. Samples: 216402380. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:27:53,240][46753] Avg episode reward: [(0, '0.206')] +[2024-06-10 19:27:55,923][46990] Updated weights for policy 0, policy_version 13210 (0.0036) +[2024-06-10 19:27:58,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 216514560. Throughput: 0: 44013.3. Samples: 216667420. 
Policy #0 lag: (min: 0.0, avg: 12.7, max: 26.0) +[2024-06-10 19:27:58,240][46753] Avg episode reward: [(0, '0.191')] +[2024-06-10 19:27:59,817][46990] Updated weights for policy 0, policy_version 13220 (0.0037) +[2024-06-10 19:28:03,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 216727552. Throughput: 0: 43860.5. Samples: 216800360. Policy #0 lag: (min: 0.0, avg: 12.7, max: 26.0) +[2024-06-10 19:28:03,240][46753] Avg episode reward: [(0, '0.189')] +[2024-06-10 19:28:03,411][46990] Updated weights for policy 0, policy_version 13230 (0.0034) +[2024-06-10 19:28:07,796][46990] Updated weights for policy 0, policy_version 13240 (0.0040) +[2024-06-10 19:28:08,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 216940544. Throughput: 0: 43784.1. Samples: 217058260. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 19:28:08,240][46753] Avg episode reward: [(0, '0.202')] +[2024-06-10 19:28:11,139][46990] Updated weights for policy 0, policy_version 13250 (0.0031) +[2024-06-10 19:28:13,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 217169920. Throughput: 0: 43656.1. Samples: 217316360. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 19:28:13,240][46753] Avg episode reward: [(0, '0.196')] +[2024-06-10 19:28:15,366][46990] Updated weights for policy 0, policy_version 13260 (0.0035) +[2024-06-10 19:28:17,536][46970] Signal inference workers to stop experience collection... (3100 times) +[2024-06-10 19:28:17,537][46970] Signal inference workers to resume experience collection... (3100 times) +[2024-06-10 19:28:17,580][46990] InferenceWorker_p0-w0: stopping experience collection (3100 times) +[2024-06-10 19:28:17,580][46990] InferenceWorker_p0-w0: resuming experience collection (3100 times) +[2024-06-10 19:28:18,240][46753] Fps is (10 sec: 44236.2, 60 sec: 43967.0, 300 sec: 43709.2). Total num frames: 217382912. Throughput: 0: 43545.2. Samples: 217448940. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:28:18,240][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:28:18,615][46990] Updated weights for policy 0, policy_version 13270 (0.0040) +[2024-06-10 19:28:22,883][46990] Updated weights for policy 0, policy_version 13280 (0.0039) +[2024-06-10 19:28:23,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.7, 300 sec: 43598.8). Total num frames: 217579520. Throughput: 0: 43671.0. Samples: 217712520. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:28:23,240][46753] Avg episode reward: [(0, '0.179')] +[2024-06-10 19:28:23,249][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000013280_217579520.pth... +[2024-06-10 19:28:23,310][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012641_207110144.pth +[2024-06-10 19:28:26,141][46990] Updated weights for policy 0, policy_version 13290 (0.0032) +[2024-06-10 19:28:28,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43417.7, 300 sec: 43654.3). Total num frames: 217808896. Throughput: 0: 43851.7. Samples: 217977940. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 19:28:28,240][46753] Avg episode reward: [(0, '0.202')] +[2024-06-10 19:28:30,644][46990] Updated weights for policy 0, policy_version 13300 (0.0026) +[2024-06-10 19:28:33,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 218038272. Throughput: 0: 43915.5. Samples: 218114500. 
Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 19:28:33,240][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:28:33,627][46990] Updated weights for policy 0, policy_version 13310 (0.0024) +[2024-06-10 19:28:37,801][46990] Updated weights for policy 0, policy_version 13320 (0.0042) +[2024-06-10 19:28:38,244][46753] Fps is (10 sec: 42579.1, 60 sec: 43687.4, 300 sec: 43597.4). Total num frames: 218234880. Throughput: 0: 43727.6. Samples: 218370320. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:28:38,244][46753] Avg episode reward: [(0, '0.190')] +[2024-06-10 19:28:41,229][46990] Updated weights for policy 0, policy_version 13330 (0.0043) +[2024-06-10 19:28:43,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43417.8, 300 sec: 43653.6). Total num frames: 218464256. Throughput: 0: 43620.2. Samples: 218630320. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:28:43,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:28:45,423][46990] Updated weights for policy 0, policy_version 13340 (0.0050) +[2024-06-10 19:28:48,239][46753] Fps is (10 sec: 45896.1, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 218693632. Throughput: 0: 43587.1. Samples: 218761780. Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 19:28:48,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:28:48,689][46990] Updated weights for policy 0, policy_version 13350 (0.0029) +[2024-06-10 19:28:52,972][46990] Updated weights for policy 0, policy_version 13360 (0.0041) +[2024-06-10 19:28:53,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 218890240. Throughput: 0: 43665.9. Samples: 219023220. Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 19:28:53,240][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:28:56,101][46990] Updated weights for policy 0, policy_version 13370 (0.0034) +[2024-06-10 19:28:58,239][46753] Fps is (10 sec: 40959.6, 60 sec: 43144.6, 300 sec: 43598.1). Total num frames: 219103232. Throughput: 0: 43879.5. Samples: 219290940. Policy #0 lag: (min: 0.0, avg: 12.7, max: 26.0) +[2024-06-10 19:28:58,240][46753] Avg episode reward: [(0, '0.192')] +[2024-06-10 19:29:00,302][46990] Updated weights for policy 0, policy_version 13380 (0.0033) +[2024-06-10 19:29:03,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 219348992. Throughput: 0: 43846.9. Samples: 219422040. Policy #0 lag: (min: 0.0, avg: 12.7, max: 26.0) +[2024-06-10 19:29:03,240][46753] Avg episode reward: [(0, '0.197')] +[2024-06-10 19:29:03,585][46990] Updated weights for policy 0, policy_version 13390 (0.0025) +[2024-06-10 19:29:07,725][46990] Updated weights for policy 0, policy_version 13400 (0.0049) +[2024-06-10 19:29:08,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 219561984. Throughput: 0: 43760.5. Samples: 219681740. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:29:08,240][46753] Avg episode reward: [(0, '0.203')] +[2024-06-10 19:29:11,312][46990] Updated weights for policy 0, policy_version 13410 (0.0037) +[2024-06-10 19:29:13,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 219774976. Throughput: 0: 43578.6. Samples: 219938980. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:29:13,240][46753] Avg episode reward: [(0, '0.194')] +[2024-06-10 19:29:15,314][46990] Updated weights for policy 0, policy_version 13420 (0.0030) +[2024-06-10 19:29:18,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.7, 300 sec: 43653.6). Total num frames: 219987968. Throughput: 0: 43432.0. Samples: 220068940. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 19:29:18,242][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:29:18,626][46990] Updated weights for policy 0, policy_version 13430 (0.0029) +[2024-06-10 19:29:22,768][46990] Updated weights for policy 0, policy_version 13440 (0.0041) +[2024-06-10 19:29:23,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 220217344. Throughput: 0: 43658.9. Samples: 220334780. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 19:29:23,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:29:26,352][46990] Updated weights for policy 0, policy_version 13450 (0.0038) +[2024-06-10 19:29:28,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 220413952. Throughput: 0: 43706.5. Samples: 220597120. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 19:29:28,240][46753] Avg episode reward: [(0, '0.200')] +[2024-06-10 19:29:30,211][46990] Updated weights for policy 0, policy_version 13460 (0.0030) +[2024-06-10 19:29:33,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 220659712. Throughput: 0: 43696.8. Samples: 220728140. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 19:29:33,240][46753] Avg episode reward: [(0, '0.205')] +[2024-06-10 19:29:34,150][46990] Updated weights for policy 0, policy_version 13470 (0.0036) +[2024-06-10 19:29:37,778][46990] Updated weights for policy 0, policy_version 13480 (0.0035) +[2024-06-10 19:29:38,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43967.0, 300 sec: 43653.6). Total num frames: 220872704. Throughput: 0: 43704.0. Samples: 220989900. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 19:29:38,240][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:29:41,378][46990] Updated weights for policy 0, policy_version 13490 (0.0036) +[2024-06-10 19:29:43,244][46753] Fps is (10 sec: 42579.1, 60 sec: 43687.3, 300 sec: 43653.0). Total num frames: 221085696. Throughput: 0: 43476.1. Samples: 221247560. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 19:29:43,245][46753] Avg episode reward: [(0, '0.195')] +[2024-06-10 19:29:45,259][46990] Updated weights for policy 0, policy_version 13500 (0.0030) +[2024-06-10 19:29:48,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 221298688. Throughput: 0: 43600.4. Samples: 221384060. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:29:48,240][46753] Avg episode reward: [(0, '0.198')] +[2024-06-10 19:29:48,769][46990] Updated weights for policy 0, policy_version 13510 (0.0037) +[2024-06-10 19:29:52,667][46990] Updated weights for policy 0, policy_version 13520 (0.0031) +[2024-06-10 19:29:53,240][46753] Fps is (10 sec: 44256.3, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 221528064. Throughput: 0: 43699.9. Samples: 221648240. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:29:53,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:29:53,362][46970] Signal inference workers to stop experience collection... 
(3150 times) +[2024-06-10 19:29:53,408][46990] InferenceWorker_p0-w0: stopping experience collection (3150 times) +[2024-06-10 19:29:53,418][46970] Signal inference workers to resume experience collection... (3150 times) +[2024-06-10 19:29:53,428][46990] InferenceWorker_p0-w0: resuming experience collection (3150 times) +[2024-06-10 19:29:56,572][46990] Updated weights for policy 0, policy_version 13530 (0.0034) +[2024-06-10 19:29:58,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 221741056. Throughput: 0: 43872.0. Samples: 221913220. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 19:29:58,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:30:00,006][46990] Updated weights for policy 0, policy_version 13540 (0.0036) +[2024-06-10 19:30:03,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.5, 300 sec: 43709.2). Total num frames: 221970432. Throughput: 0: 43750.6. Samples: 222037720. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 19:30:03,240][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:30:04,205][46990] Updated weights for policy 0, policy_version 13550 (0.0037) +[2024-06-10 19:30:07,656][46990] Updated weights for policy 0, policy_version 13560 (0.0039) +[2024-06-10 19:30:08,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 222183424. Throughput: 0: 43817.9. Samples: 222306580. Policy #0 lag: (min: 0.0, avg: 10.1, max: 22.0) +[2024-06-10 19:30:08,240][46753] Avg episode reward: [(0, '0.187')] +[2024-06-10 19:30:11,686][46990] Updated weights for policy 0, policy_version 13570 (0.0038) +[2024-06-10 19:30:13,240][46753] Fps is (10 sec: 44236.9, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 222412800. Throughput: 0: 43625.7. Samples: 222560280. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:30:13,240][46753] Avg episode reward: [(0, '0.194')] +[2024-06-10 19:30:15,166][46990] Updated weights for policy 0, policy_version 13580 (0.0030) +[2024-06-10 19:30:18,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 222625792. Throughput: 0: 43751.1. Samples: 222696940. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:30:18,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:30:18,898][46990] Updated weights for policy 0, policy_version 13590 (0.0029) +[2024-06-10 19:30:22,459][46990] Updated weights for policy 0, policy_version 13600 (0.0039) +[2024-06-10 19:30:23,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 222855168. Throughput: 0: 43841.8. Samples: 222962780. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:30:23,240][46753] Avg episode reward: [(0, '0.199')] +[2024-06-10 19:30:23,269][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000013603_222871552.pth... +[2024-06-10 19:30:23,314][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000012961_212353024.pth +[2024-06-10 19:30:26,554][46990] Updated weights for policy 0, policy_version 13610 (0.0034) +[2024-06-10 19:30:28,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 223035392. Throughput: 0: 44088.9. Samples: 223231360. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:30:28,240][46753] Avg episode reward: [(0, '0.208')] +[2024-06-10 19:30:29,685][46990] Updated weights for policy 0, policy_version 13620 (0.0032) +[2024-06-10 19:30:33,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43690.5, 300 sec: 43709.2). Total num frames: 223281152. Throughput: 0: 43813.1. Samples: 223355660. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 19:30:33,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:30:34,104][46990] Updated weights for policy 0, policy_version 13630 (0.0039) +[2024-06-10 19:30:37,455][46990] Updated weights for policy 0, policy_version 13640 (0.0027) +[2024-06-10 19:30:38,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 223510528. Throughput: 0: 43787.2. Samples: 223618660. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 19:30:38,240][46753] Avg episode reward: [(0, '0.208')] +[2024-06-10 19:30:41,580][46990] Updated weights for policy 0, policy_version 13650 (0.0026) +[2024-06-10 19:30:43,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43967.0, 300 sec: 43709.2). Total num frames: 223723520. Throughput: 0: 43628.8. Samples: 223876520. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 19:30:43,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:30:45,246][46990] Updated weights for policy 0, policy_version 13660 (0.0041) +[2024-06-10 19:30:48,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 223920128. Throughput: 0: 43812.1. Samples: 224009260. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 19:30:48,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:30:48,716][46990] Updated weights for policy 0, policy_version 13670 (0.0033) +[2024-06-10 19:30:52,360][46990] Updated weights for policy 0, policy_version 13680 (0.0032) +[2024-06-10 19:30:53,240][46753] Fps is (10 sec: 45874.7, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 224182272. Throughput: 0: 43818.9. Samples: 224278440. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:30:53,240][46753] Avg episode reward: [(0, '0.205')] +[2024-06-10 19:30:56,349][46990] Updated weights for policy 0, policy_version 13690 (0.0028) +[2024-06-10 19:30:58,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 224362496. Throughput: 0: 44040.1. Samples: 224542080. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 19:30:58,240][46753] Avg episode reward: [(0, '0.209')] +[2024-06-10 19:30:59,640][46990] Updated weights for policy 0, policy_version 13700 (0.0029) +[2024-06-10 19:31:03,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 224591872. Throughput: 0: 43825.7. Samples: 224669100. Policy #0 lag: (min: 1.0, avg: 10.7, max: 23.0) +[2024-06-10 19:31:03,240][46753] Avg episode reward: [(0, '0.199')] +[2024-06-10 19:31:03,998][46990] Updated weights for policy 0, policy_version 13710 (0.0027) +[2024-06-10 19:31:07,391][46990] Updated weights for policy 0, policy_version 13720 (0.0034) +[2024-06-10 19:31:08,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 224821248. Throughput: 0: 43745.8. Samples: 224931340. 
Policy #0 lag: (min: 1.0, avg: 10.7, max: 23.0) +[2024-06-10 19:31:08,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:31:11,178][46990] Updated weights for policy 0, policy_version 13730 (0.0039) +[2024-06-10 19:31:13,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 225034240. Throughput: 0: 43654.3. Samples: 225195800. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 19:31:13,240][46753] Avg episode reward: [(0, '0.199')] +[2024-06-10 19:31:14,925][46990] Updated weights for policy 0, policy_version 13740 (0.0026) +[2024-06-10 19:31:18,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.6, 300 sec: 43820.9). Total num frames: 225263616. Throughput: 0: 43818.7. Samples: 225327500. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 19:31:18,240][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:31:18,738][46990] Updated weights for policy 0, policy_version 13750 (0.0045) +[2024-06-10 19:31:22,165][46990] Updated weights for policy 0, policy_version 13760 (0.0038) +[2024-06-10 19:31:23,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 225492992. Throughput: 0: 43966.7. Samples: 225597160. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 19:31:23,240][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:31:25,903][46990] Updated weights for policy 0, policy_version 13770 (0.0047) +[2024-06-10 19:31:28,240][46753] Fps is (10 sec: 42598.2, 60 sec: 44236.6, 300 sec: 43764.8). Total num frames: 225689600. Throughput: 0: 44027.9. Samples: 225857780. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 19:31:28,240][46753] Avg episode reward: [(0, '0.206')] +[2024-06-10 19:31:29,530][46990] Updated weights for policy 0, policy_version 13780 (0.0033) +[2024-06-10 19:31:33,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 225918976. Throughput: 0: 43856.4. Samples: 225982800. Policy #0 lag: (min: 0.0, avg: 12.6, max: 23.0) +[2024-06-10 19:31:33,240][46753] Avg episode reward: [(0, '0.196')] +[2024-06-10 19:31:33,491][46990] Updated weights for policy 0, policy_version 13790 (0.0036) +[2024-06-10 19:31:34,654][46970] Signal inference workers to stop experience collection... (3200 times) +[2024-06-10 19:31:34,654][46970] Signal inference workers to resume experience collection... (3200 times) +[2024-06-10 19:31:34,677][46990] InferenceWorker_p0-w0: stopping experience collection (3200 times) +[2024-06-10 19:31:34,677][46990] InferenceWorker_p0-w0: resuming experience collection (3200 times) +[2024-06-10 19:31:37,443][46990] Updated weights for policy 0, policy_version 13800 (0.0031) +[2024-06-10 19:31:38,240][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 226148352. Throughput: 0: 43698.2. Samples: 226244860. Policy #0 lag: (min: 0.0, avg: 12.6, max: 23.0) +[2024-06-10 19:31:38,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:31:41,149][46990] Updated weights for policy 0, policy_version 13810 (0.0038) +[2024-06-10 19:31:43,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 226328576. Throughput: 0: 43627.2. Samples: 226505300. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:31:43,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:31:43,251][46970] Saving new best policy, reward=0.224! 
+[2024-06-10 19:31:44,752][46990] Updated weights for policy 0, policy_version 13820 (0.0037) +[2024-06-10 19:31:48,239][46753] Fps is (10 sec: 42598.6, 60 sec: 44236.7, 300 sec: 43820.3). Total num frames: 226574336. Throughput: 0: 43687.0. Samples: 226635020. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:31:48,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:31:48,924][46990] Updated weights for policy 0, policy_version 13830 (0.0040) +[2024-06-10 19:31:52,199][46990] Updated weights for policy 0, policy_version 13840 (0.0032) +[2024-06-10 19:31:53,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 226803712. Throughput: 0: 43752.5. Samples: 226900200. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:31:53,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:31:55,963][46990] Updated weights for policy 0, policy_version 13850 (0.0036) +[2024-06-10 19:31:58,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 227000320. Throughput: 0: 43821.8. Samples: 227167780. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:31:58,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:31:59,478][46990] Updated weights for policy 0, policy_version 13860 (0.0033) +[2024-06-10 19:32:03,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 227229696. Throughput: 0: 43801.1. Samples: 227298540. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 19:32:03,240][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:32:03,261][46990] Updated weights for policy 0, policy_version 13870 (0.0036) +[2024-06-10 19:32:07,145][46990] Updated weights for policy 0, policy_version 13880 (0.0044) +[2024-06-10 19:32:08,241][46753] Fps is (10 sec: 45869.4, 60 sec: 43962.9, 300 sec: 43764.5). Total num frames: 227459072. Throughput: 0: 43629.0. Samples: 227560520. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 19:32:08,241][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:32:10,827][46990] Updated weights for policy 0, policy_version 13890 (0.0033) +[2024-06-10 19:32:13,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43765.4). Total num frames: 227655680. Throughput: 0: 43689.5. Samples: 227823800. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 19:32:13,240][46753] Avg episode reward: [(0, '0.192')] +[2024-06-10 19:32:14,665][46990] Updated weights for policy 0, policy_version 13900 (0.0041) +[2024-06-10 19:32:18,239][46753] Fps is (10 sec: 42603.3, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 227885056. Throughput: 0: 43782.7. Samples: 227953020. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 19:32:18,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:32:18,429][46990] Updated weights for policy 0, policy_version 13910 (0.0037) +[2024-06-10 19:32:22,102][46990] Updated weights for policy 0, policy_version 13920 (0.0040) +[2024-06-10 19:32:23,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43963.6, 300 sec: 43820.3). Total num frames: 228130816. Throughput: 0: 44018.3. Samples: 228225680. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:32:23,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:32:23,247][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000013924_228130816.pth... 
+[2024-06-10 19:32:23,300][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000013280_217579520.pth +[2024-06-10 19:32:25,761][46990] Updated weights for policy 0, policy_version 13930 (0.0043) +[2024-06-10 19:32:28,244][46753] Fps is (10 sec: 44216.6, 60 sec: 43960.5, 300 sec: 43708.5). Total num frames: 228327424. Throughput: 0: 43990.1. Samples: 228485060. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:32:28,245][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:32:29,433][46990] Updated weights for policy 0, policy_version 13940 (0.0042) +[2024-06-10 19:32:33,213][46990] Updated weights for policy 0, policy_version 13950 (0.0044) +[2024-06-10 19:32:33,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 228556800. Throughput: 0: 43979.6. Samples: 228614100. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:32:33,240][46753] Avg episode reward: [(0, '0.197')] +[2024-06-10 19:32:37,196][46990] Updated weights for policy 0, policy_version 13960 (0.0042) +[2024-06-10 19:32:38,240][46753] Fps is (10 sec: 44256.5, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 228769792. Throughput: 0: 43897.2. Samples: 228875580. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:32:38,240][46753] Avg episode reward: [(0, '0.193')] +[2024-06-10 19:32:41,076][46990] Updated weights for policy 0, policy_version 13970 (0.0038) +[2024-06-10 19:32:43,239][46753] Fps is (10 sec: 42598.2, 60 sec: 44236.7, 300 sec: 43820.2). Total num frames: 228982784. Throughput: 0: 43732.8. Samples: 229135760. Policy #0 lag: (min: 2.0, avg: 10.6, max: 22.0) +[2024-06-10 19:32:43,243][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:32:44,500][46970] Signal inference workers to stop experience collection... (3250 times) +[2024-06-10 19:32:44,555][46990] InferenceWorker_p0-w0: stopping experience collection (3250 times) +[2024-06-10 19:32:44,613][46970] Signal inference workers to resume experience collection... (3250 times) +[2024-06-10 19:32:44,614][46990] InferenceWorker_p0-w0: resuming experience collection (3250 times) +[2024-06-10 19:32:44,774][46990] Updated weights for policy 0, policy_version 13980 (0.0037) +[2024-06-10 19:32:48,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 229195776. Throughput: 0: 43658.4. Samples: 229263180. Policy #0 lag: (min: 2.0, avg: 10.6, max: 22.0) +[2024-06-10 19:32:48,240][46753] Avg episode reward: [(0, '0.194')] +[2024-06-10 19:32:48,392][46990] Updated weights for policy 0, policy_version 13990 (0.0033) +[2024-06-10 19:32:51,960][46990] Updated weights for policy 0, policy_version 14000 (0.0029) +[2024-06-10 19:32:53,240][46753] Fps is (10 sec: 45874.9, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 229441536. Throughput: 0: 43833.9. Samples: 229533000. Policy #0 lag: (min: 0.0, avg: 11.6, max: 25.0) +[2024-06-10 19:32:53,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:32:56,119][46990] Updated weights for policy 0, policy_version 14010 (0.0037) +[2024-06-10 19:32:58,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 229621760. Throughput: 0: 43812.4. Samples: 229795360. 
Policy #0 lag: (min: 0.0, avg: 11.6, max: 25.0) +[2024-06-10 19:32:58,240][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:32:59,592][46990] Updated weights for policy 0, policy_version 14020 (0.0029) +[2024-06-10 19:33:03,239][46753] Fps is (10 sec: 40960.8, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 229851136. Throughput: 0: 43785.5. Samples: 229923360. Policy #0 lag: (min: 0.0, avg: 11.6, max: 25.0) +[2024-06-10 19:33:03,240][46753] Avg episode reward: [(0, '0.205')] +[2024-06-10 19:33:03,288][46990] Updated weights for policy 0, policy_version 14030 (0.0038) +[2024-06-10 19:33:06,981][46990] Updated weights for policy 0, policy_version 14040 (0.0041) +[2024-06-10 19:33:08,240][46753] Fps is (10 sec: 47512.6, 60 sec: 43964.5, 300 sec: 43820.2). Total num frames: 230096896. Throughput: 0: 43732.4. Samples: 230193640. Policy #0 lag: (min: 1.0, avg: 9.6, max: 22.0) +[2024-06-10 19:33:08,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:33:10,863][46990] Updated weights for policy 0, policy_version 14050 (0.0033) +[2024-06-10 19:33:13,244][46753] Fps is (10 sec: 45854.1, 60 sec: 44233.5, 300 sec: 43819.6). Total num frames: 230309888. Throughput: 0: 43854.8. Samples: 230458520. Policy #0 lag: (min: 1.0, avg: 9.6, max: 22.0) +[2024-06-10 19:33:13,244][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:33:14,539][46990] Updated weights for policy 0, policy_version 14060 (0.0029) +[2024-06-10 19:33:18,239][46753] Fps is (10 sec: 40960.9, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 230506496. Throughput: 0: 43836.5. Samples: 230586740. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:33:18,240][46753] Avg episode reward: [(0, '0.209')] +[2024-06-10 19:33:18,466][46990] Updated weights for policy 0, policy_version 14070 (0.0028) +[2024-06-10 19:33:21,759][46990] Updated weights for policy 0, policy_version 14080 (0.0049) +[2024-06-10 19:33:23,239][46753] Fps is (10 sec: 44256.7, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 230752256. Throughput: 0: 43892.6. Samples: 230850740. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:33:23,240][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:33:26,209][46990] Updated weights for policy 0, policy_version 14090 (0.0040) +[2024-06-10 19:33:28,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43694.0, 300 sec: 43764.7). Total num frames: 230948864. Throughput: 0: 44000.0. Samples: 231115760. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 19:33:28,240][46753] Avg episode reward: [(0, '0.202')] +[2024-06-10 19:33:29,119][46990] Updated weights for policy 0, policy_version 14100 (0.0037) +[2024-06-10 19:33:33,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43144.6, 300 sec: 43765.4). Total num frames: 231145472. Throughput: 0: 43997.5. Samples: 231243060. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 19:33:33,240][46753] Avg episode reward: [(0, '0.199')] +[2024-06-10 19:33:33,454][46990] Updated weights for policy 0, policy_version 14110 (0.0041) +[2024-06-10 19:33:36,767][46990] Updated weights for policy 0, policy_version 14120 (0.0023) +[2024-06-10 19:33:38,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 231407616. Throughput: 0: 43869.4. Samples: 231507120. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 23.0) +[2024-06-10 19:33:38,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:33:40,788][46990] Updated weights for policy 0, policy_version 14130 (0.0034) +[2024-06-10 19:33:43,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 231620608. Throughput: 0: 44003.2. Samples: 231775500. Policy #0 lag: (min: 0.0, avg: 9.9, max: 23.0) +[2024-06-10 19:33:43,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:33:44,054][46990] Updated weights for policy 0, policy_version 14140 (0.0038) +[2024-06-10 19:33:48,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 231817216. Throughput: 0: 44035.8. Samples: 231904980. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 19:33:48,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:33:48,340][46990] Updated weights for policy 0, policy_version 14150 (0.0044) +[2024-06-10 19:33:51,404][46990] Updated weights for policy 0, policy_version 14160 (0.0033) +[2024-06-10 19:33:53,240][46753] Fps is (10 sec: 45874.2, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 232079360. Throughput: 0: 43926.3. Samples: 232170320. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 19:33:53,249][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:33:55,922][46990] Updated weights for policy 0, policy_version 14170 (0.0025) +[2024-06-10 19:33:58,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 232259584. Throughput: 0: 43902.6. Samples: 232433940. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 19:33:58,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:33:59,222][46990] Updated weights for policy 0, policy_version 14180 (0.0042) +[2024-06-10 19:34:03,239][46753] Fps is (10 sec: 39322.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 232472576. Throughput: 0: 43912.1. Samples: 232562780. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 19:34:03,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:34:03,257][46990] Updated weights for policy 0, policy_version 14190 (0.0034) +[2024-06-10 19:34:06,590][46990] Updated weights for policy 0, policy_version 14200 (0.0036) +[2024-06-10 19:34:08,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43963.9, 300 sec: 43931.3). Total num frames: 232734720. Throughput: 0: 43997.8. Samples: 232830640. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:34:08,240][46753] Avg episode reward: [(0, '0.200')] +[2024-06-10 19:34:10,971][46990] Updated weights for policy 0, policy_version 14210 (0.0041) +[2024-06-10 19:34:13,217][46970] Signal inference workers to stop experience collection... (3300 times) +[2024-06-10 19:34:13,220][46970] Signal inference workers to resume experience collection... (3300 times) +[2024-06-10 19:34:13,236][46990] InferenceWorker_p0-w0: stopping experience collection (3300 times) +[2024-06-10 19:34:13,237][46990] InferenceWorker_p0-w0: resuming experience collection (3300 times) +[2024-06-10 19:34:13,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43693.9, 300 sec: 43875.8). Total num frames: 232931328. Throughput: 0: 44075.1. Samples: 233099140. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:34:13,240][46753] Avg episode reward: [(0, '0.195')] +[2024-06-10 19:34:13,979][46990] Updated weights for policy 0, policy_version 14220 (0.0042) +[2024-06-10 19:34:18,218][46990] Updated weights for policy 0, policy_version 14230 (0.0037) +[2024-06-10 19:34:18,244][46753] Fps is (10 sec: 40941.6, 60 sec: 43960.4, 300 sec: 43819.6). Total num frames: 233144320. Throughput: 0: 44174.6. Samples: 233231120. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 19:34:18,245][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:34:21,363][46990] Updated weights for policy 0, policy_version 14240 (0.0035) +[2024-06-10 19:34:23,244][46753] Fps is (10 sec: 45854.5, 60 sec: 43960.4, 300 sec: 43986.2). Total num frames: 233390080. Throughput: 0: 44062.8. Samples: 233490140. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 19:34:23,244][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:34:23,251][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000014245_233390080.pth... +[2024-06-10 19:34:23,300][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000013603_222871552.pth +[2024-06-10 19:34:25,643][46990] Updated weights for policy 0, policy_version 14250 (0.0036) +[2024-06-10 19:34:28,239][46753] Fps is (10 sec: 42617.3, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 233570304. Throughput: 0: 43972.7. Samples: 233754280. Policy #0 lag: (min: 1.0, avg: 11.0, max: 23.0) +[2024-06-10 19:34:28,243][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:34:29,010][46990] Updated weights for policy 0, policy_version 14260 (0.0038) +[2024-06-10 19:34:33,174][46990] Updated weights for policy 0, policy_version 14270 (0.0036) +[2024-06-10 19:34:33,244][46753] Fps is (10 sec: 40960.0, 60 sec: 44233.4, 300 sec: 43819.6). Total num frames: 233799680. Throughput: 0: 43917.9. Samples: 233881480. Policy #0 lag: (min: 1.0, avg: 11.0, max: 23.0) +[2024-06-10 19:34:33,244][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:34:36,547][46990] Updated weights for policy 0, policy_version 14280 (0.0033) +[2024-06-10 19:34:38,240][46753] Fps is (10 sec: 47513.4, 60 sec: 43963.7, 300 sec: 43932.0). Total num frames: 234045440. Throughput: 0: 43801.4. Samples: 234141380. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 19:34:38,240][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:34:40,738][46990] Updated weights for policy 0, policy_version 14290 (0.0033) +[2024-06-10 19:34:43,239][46753] Fps is (10 sec: 42617.6, 60 sec: 43417.5, 300 sec: 43820.2). Total num frames: 234225664. Throughput: 0: 43969.4. Samples: 234412560. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 19:34:43,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:34:43,842][46990] Updated weights for policy 0, policy_version 14300 (0.0022) +[2024-06-10 19:34:47,995][46990] Updated weights for policy 0, policy_version 14310 (0.0042) +[2024-06-10 19:34:48,239][46753] Fps is (10 sec: 40960.7, 60 sec: 43963.9, 300 sec: 43820.3). Total num frames: 234455040. Throughput: 0: 43969.7. Samples: 234541420. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:34:48,240][46753] Avg episode reward: [(0, '0.198')] +[2024-06-10 19:34:51,263][46990] Updated weights for policy 0, policy_version 14320 (0.0036) +[2024-06-10 19:34:53,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 234700800. 
Throughput: 0: 43877.3. Samples: 234805120. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:34:53,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:34:55,119][46990] Updated weights for policy 0, policy_version 14330 (0.0044) +[2024-06-10 19:34:58,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 234881024. Throughput: 0: 43813.7. Samples: 235070760. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:34:58,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:34:59,193][46990] Updated weights for policy 0, policy_version 14340 (0.0035) +[2024-06-10 19:35:03,004][46990] Updated weights for policy 0, policy_version 14350 (0.0046) +[2024-06-10 19:35:03,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 235110400. Throughput: 0: 43711.6. Samples: 235197940. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 19:35:03,240][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:35:06,517][46990] Updated weights for policy 0, policy_version 14360 (0.0051) +[2024-06-10 19:35:08,240][46753] Fps is (10 sec: 47513.2, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 235356160. Throughput: 0: 43851.8. Samples: 235463280. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 19:35:08,240][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:35:10,393][46990] Updated weights for policy 0, policy_version 14370 (0.0042) +[2024-06-10 19:35:13,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 235552768. Throughput: 0: 44031.1. Samples: 235735680. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:35:13,240][46753] Avg episode reward: [(0, '0.196')] +[2024-06-10 19:35:13,659][46990] Updated weights for policy 0, policy_version 14380 (0.0038) +[2024-06-10 19:35:17,472][46990] Updated weights for policy 0, policy_version 14390 (0.0028) +[2024-06-10 19:35:18,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43694.0, 300 sec: 43764.7). Total num frames: 235765760. Throughput: 0: 44034.2. Samples: 235862820. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:35:18,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:35:21,059][46990] Updated weights for policy 0, policy_version 14400 (0.0032) +[2024-06-10 19:35:23,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43694.0, 300 sec: 43986.9). Total num frames: 236011520. Throughput: 0: 44074.0. Samples: 236124700. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:35:23,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:35:25,343][46990] Updated weights for policy 0, policy_version 14410 (0.0029) +[2024-06-10 19:35:28,239][46753] Fps is (10 sec: 45875.2, 60 sec: 44236.9, 300 sec: 43875.8). Total num frames: 236224512. Throughput: 0: 43904.9. Samples: 236388280. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:35:28,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:35:28,802][46990] Updated weights for policy 0, policy_version 14420 (0.0043) +[2024-06-10 19:35:32,975][46990] Updated weights for policy 0, policy_version 14430 (0.0032) +[2024-06-10 19:35:33,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43967.0, 300 sec: 43820.3). Total num frames: 236437504. Throughput: 0: 43828.4. Samples: 236513700. 
Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 19:35:33,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:35:36,277][46990] Updated weights for policy 0, policy_version 14440 (0.0037) +[2024-06-10 19:35:37,422][46970] Signal inference workers to stop experience collection... (3350 times) +[2024-06-10 19:35:37,422][46970] Signal inference workers to resume experience collection... (3350 times) +[2024-06-10 19:35:37,475][46990] InferenceWorker_p0-w0: stopping experience collection (3350 times) +[2024-06-10 19:35:37,476][46990] InferenceWorker_p0-w0: resuming experience collection (3350 times) +[2024-06-10 19:35:38,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 236666880. Throughput: 0: 43902.8. Samples: 236780740. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 19:35:38,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:35:38,240][46970] Saving new best policy, reward=0.228! +[2024-06-10 19:35:40,233][46990] Updated weights for policy 0, policy_version 14450 (0.0030) +[2024-06-10 19:35:43,241][46753] Fps is (10 sec: 45869.6, 60 sec: 44508.9, 300 sec: 43986.7). Total num frames: 236896256. Throughput: 0: 43885.0. Samples: 237045640. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 19:35:43,241][46753] Avg episode reward: [(0, '0.209')] +[2024-06-10 19:35:43,454][46990] Updated weights for policy 0, policy_version 14460 (0.0041) +[2024-06-10 19:35:47,378][46990] Updated weights for policy 0, policy_version 14470 (0.0044) +[2024-06-10 19:35:48,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 237092864. Throughput: 0: 43970.6. Samples: 237176620. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 19:35:48,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:35:50,950][46990] Updated weights for policy 0, policy_version 14480 (0.0037) +[2024-06-10 19:35:53,239][46753] Fps is (10 sec: 42603.6, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 237322240. Throughput: 0: 43924.1. Samples: 237439860. Policy #0 lag: (min: 0.0, avg: 11.5, max: 23.0) +[2024-06-10 19:35:53,240][46753] Avg episode reward: [(0, '0.203')] +[2024-06-10 19:35:54,876][46990] Updated weights for policy 0, policy_version 14490 (0.0033) +[2024-06-10 19:35:58,239][46753] Fps is (10 sec: 44236.4, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 237535232. Throughput: 0: 43705.8. Samples: 237702440. Policy #0 lag: (min: 0.0, avg: 11.5, max: 23.0) +[2024-06-10 19:35:58,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:35:58,537][46990] Updated weights for policy 0, policy_version 14500 (0.0050) +[2024-06-10 19:36:02,548][46990] Updated weights for policy 0, policy_version 14510 (0.0036) +[2024-06-10 19:36:03,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 237748224. Throughput: 0: 43694.7. Samples: 237829080. Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 19:36:03,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:36:06,162][46990] Updated weights for policy 0, policy_version 14520 (0.0031) +[2024-06-10 19:36:08,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 237977600. Throughput: 0: 43863.5. Samples: 238098560. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 19:36:08,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:36:09,991][46990] Updated weights for policy 0, policy_version 14530 (0.0029) +[2024-06-10 19:36:13,239][46753] Fps is (10 sec: 45874.8, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 238206976. Throughput: 0: 43828.4. Samples: 238360560. Policy #0 lag: (min: 0.0, avg: 10.0, max: 20.0) +[2024-06-10 19:36:13,249][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:36:13,429][46990] Updated weights for policy 0, policy_version 14540 (0.0044) +[2024-06-10 19:36:17,187][46990] Updated weights for policy 0, policy_version 14550 (0.0029) +[2024-06-10 19:36:18,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 238387200. Throughput: 0: 43865.8. Samples: 238487660. Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 19:36:18,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:36:21,050][46990] Updated weights for policy 0, policy_version 14560 (0.0029) +[2024-06-10 19:36:23,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.5, 300 sec: 43875.8). Total num frames: 238632960. Throughput: 0: 43724.7. Samples: 238748360. Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 19:36:23,240][46753] Avg episode reward: [(0, '0.209')] +[2024-06-10 19:36:23,256][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000014565_238632960.pth... +[2024-06-10 19:36:23,356][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000013924_228130816.pth +[2024-06-10 19:36:24,770][46990] Updated weights for policy 0, policy_version 14570 (0.0035) +[2024-06-10 19:36:28,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 238845952. Throughput: 0: 43666.5. Samples: 239010580. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 19:36:28,240][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:36:28,466][46990] Updated weights for policy 0, policy_version 14580 (0.0038) +[2024-06-10 19:36:32,552][46990] Updated weights for policy 0, policy_version 14590 (0.0030) +[2024-06-10 19:36:33,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 239058944. Throughput: 0: 43545.2. Samples: 239136160. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 19:36:33,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:36:36,271][46990] Updated weights for policy 0, policy_version 14600 (0.0027) +[2024-06-10 19:36:38,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 239288320. Throughput: 0: 43670.7. Samples: 239405040. Policy #0 lag: (min: 0.0, avg: 9.9, max: 24.0) +[2024-06-10 19:36:38,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:36:40,348][46990] Updated weights for policy 0, policy_version 14610 (0.0033) +[2024-06-10 19:36:43,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43418.5, 300 sec: 43820.3). Total num frames: 239501312. Throughput: 0: 43603.1. Samples: 239664580. Policy #0 lag: (min: 0.0, avg: 9.9, max: 24.0) +[2024-06-10 19:36:43,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:36:43,578][46990] Updated weights for policy 0, policy_version 14620 (0.0024) +[2024-06-10 19:36:47,478][46990] Updated weights for policy 0, policy_version 14630 (0.0031) +[2024-06-10 19:36:48,243][46753] Fps is (10 sec: 42582.4, 60 sec: 43687.9, 300 sec: 43764.2). Total num frames: 239714304. 
Throughput: 0: 43681.2. Samples: 239794900. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:36:48,244][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:36:50,920][46990] Updated weights for policy 0, policy_version 14640 (0.0027) +[2024-06-10 19:36:53,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 239943680. Throughput: 0: 43604.5. Samples: 240060760. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:36:53,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:36:54,829][46990] Updated weights for policy 0, policy_version 14650 (0.0028) +[2024-06-10 19:36:58,239][46753] Fps is (10 sec: 44253.8, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 240156672. Throughput: 0: 43606.3. Samples: 240322840. Policy #0 lag: (min: 1.0, avg: 9.9, max: 22.0) +[2024-06-10 19:36:58,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:36:58,474][46990] Updated weights for policy 0, policy_version 14660 (0.0039) +[2024-06-10 19:37:02,542][46990] Updated weights for policy 0, policy_version 14670 (0.0028) +[2024-06-10 19:37:03,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.6, 300 sec: 43764.9). Total num frames: 240369664. Throughput: 0: 43642.6. Samples: 240451580. Policy #0 lag: (min: 1.0, avg: 9.9, max: 22.0) +[2024-06-10 19:37:03,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:37:03,256][46970] Saving new best policy, reward=0.231! +[2024-06-10 19:37:05,801][46990] Updated weights for policy 0, policy_version 14680 (0.0040) +[2024-06-10 19:37:08,035][46970] Signal inference workers to stop experience collection... (3400 times) +[2024-06-10 19:37:08,040][46970] Signal inference workers to resume experience collection... (3400 times) +[2024-06-10 19:37:08,054][46990] InferenceWorker_p0-w0: stopping experience collection (3400 times) +[2024-06-10 19:37:08,054][46990] InferenceWorker_p0-w0: resuming experience collection (3400 times) +[2024-06-10 19:37:08,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 240615424. Throughput: 0: 43843.8. Samples: 240721320. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:37:08,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:37:09,725][46990] Updated weights for policy 0, policy_version 14690 (0.0031) +[2024-06-10 19:37:13,211][46990] Updated weights for policy 0, policy_version 14700 (0.0021) +[2024-06-10 19:37:13,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 240844800. Throughput: 0: 43930.7. Samples: 240987460. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:37:13,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:37:17,295][46990] Updated weights for policy 0, policy_version 14710 (0.0029) +[2024-06-10 19:37:18,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 241025024. Throughput: 0: 43950.7. Samples: 241113940. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:37:18,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:37:20,696][46990] Updated weights for policy 0, policy_version 14720 (0.0036) +[2024-06-10 19:37:23,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.8, 300 sec: 43820.9). Total num frames: 241254400. Throughput: 0: 43950.8. Samples: 241382820. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 19:37:23,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:37:24,797][46990] Updated weights for policy 0, policy_version 14730 (0.0039) +[2024-06-10 19:37:28,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 241467392. Throughput: 0: 43892.6. Samples: 241639740. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 19:37:28,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:37:28,535][46990] Updated weights for policy 0, policy_version 14740 (0.0041) +[2024-06-10 19:37:32,752][46990] Updated weights for policy 0, policy_version 14750 (0.0030) +[2024-06-10 19:37:33,241][46753] Fps is (10 sec: 44228.2, 60 sec: 43962.4, 300 sec: 43820.0). Total num frames: 241696768. Throughput: 0: 43788.1. Samples: 241765280. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 19:37:33,242][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:37:35,819][46990] Updated weights for policy 0, policy_version 14760 (0.0033) +[2024-06-10 19:37:38,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 241926144. Throughput: 0: 43909.6. Samples: 242036700. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 19:37:38,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:37:39,875][46990] Updated weights for policy 0, policy_version 14770 (0.0039) +[2024-06-10 19:37:43,092][46990] Updated weights for policy 0, policy_version 14780 (0.0044) +[2024-06-10 19:37:43,239][46753] Fps is (10 sec: 45884.1, 60 sec: 44236.9, 300 sec: 43931.4). Total num frames: 242155520. Throughput: 0: 43932.4. Samples: 242299800. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 19:37:43,244][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:37:47,278][46990] Updated weights for policy 0, policy_version 14790 (0.0038) +[2024-06-10 19:37:48,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43693.5, 300 sec: 43709.2). Total num frames: 242335744. Throughput: 0: 43898.8. Samples: 242427020. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 19:37:48,240][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:37:50,718][46990] Updated weights for policy 0, policy_version 14800 (0.0028) +[2024-06-10 19:37:53,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43963.6, 300 sec: 43931.3). Total num frames: 242581504. Throughput: 0: 43674.9. Samples: 242686700. Policy #0 lag: (min: 1.0, avg: 10.8, max: 22.0) +[2024-06-10 19:37:53,240][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:37:54,967][46990] Updated weights for policy 0, policy_version 14810 (0.0053) +[2024-06-10 19:37:58,240][46753] Fps is (10 sec: 45874.1, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 242794496. Throughput: 0: 43553.2. Samples: 242947360. Policy #0 lag: (min: 1.0, avg: 10.8, max: 22.0) +[2024-06-10 19:37:58,240][46753] Avg episode reward: [(0, '0.209')] +[2024-06-10 19:37:58,303][46990] Updated weights for policy 0, policy_version 14820 (0.0036) +[2024-06-10 19:38:02,482][46990] Updated weights for policy 0, policy_version 14830 (0.0047) +[2024-06-10 19:38:03,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43963.8, 300 sec: 43764.8). Total num frames: 243007488. Throughput: 0: 43731.2. Samples: 243081840. 
Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 19:38:03,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:38:05,452][46990] Updated weights for policy 0, policy_version 14840 (0.0035) +[2024-06-10 19:38:08,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43963.7, 300 sec: 43876.5). Total num frames: 243253248. Throughput: 0: 43767.5. Samples: 243352360. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 19:38:08,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:38:08,241][46970] Saving new best policy, reward=0.232! +[2024-06-10 19:38:09,786][46990] Updated weights for policy 0, policy_version 14850 (0.0020) +[2024-06-10 19:38:13,218][46990] Updated weights for policy 0, policy_version 14860 (0.0033) +[2024-06-10 19:38:13,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 243466240. Throughput: 0: 43842.1. Samples: 243612640. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 19:38:13,240][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:38:17,311][46990] Updated weights for policy 0, policy_version 14870 (0.0044) +[2024-06-10 19:38:18,244][46753] Fps is (10 sec: 40942.4, 60 sec: 43960.6, 300 sec: 43764.1). Total num frames: 243662848. Throughput: 0: 43947.4. Samples: 243743020. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 19:38:18,244][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:38:20,659][46990] Updated weights for policy 0, policy_version 14880 (0.0032) +[2024-06-10 19:38:23,239][46753] Fps is (10 sec: 44237.1, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 243908608. Throughput: 0: 43677.8. Samples: 244002200. Policy #0 lag: (min: 0.0, avg: 11.3, max: 20.0) +[2024-06-10 19:38:23,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:38:23,252][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000014887_243908608.pth... +[2024-06-10 19:38:23,311][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000014245_233390080.pth +[2024-06-10 19:38:25,099][46990] Updated weights for policy 0, policy_version 14890 (0.0032) +[2024-06-10 19:38:25,525][46970] Signal inference workers to stop experience collection... (3450 times) +[2024-06-10 19:38:25,525][46970] Signal inference workers to resume experience collection... (3450 times) +[2024-06-10 19:38:25,559][46990] InferenceWorker_p0-w0: stopping experience collection (3450 times) +[2024-06-10 19:38:25,559][46990] InferenceWorker_p0-w0: resuming experience collection (3450 times) +[2024-06-10 19:38:28,239][46753] Fps is (10 sec: 44256.2, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 244105216. Throughput: 0: 43675.1. Samples: 244265180. Policy #0 lag: (min: 0.0, avg: 11.3, max: 20.0) +[2024-06-10 19:38:28,240][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:38:28,270][46990] Updated weights for policy 0, policy_version 14900 (0.0044) +[2024-06-10 19:38:32,634][46990] Updated weights for policy 0, policy_version 14910 (0.0040) +[2024-06-10 19:38:33,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43419.0, 300 sec: 43709.2). Total num frames: 244301824. Throughput: 0: 43738.2. Samples: 244395240. Policy #0 lag: (min: 0.0, avg: 11.3, max: 20.0) +[2024-06-10 19:38:33,240][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:38:35,891][46990] Updated weights for policy 0, policy_version 14920 (0.0026) +[2024-06-10 19:38:38,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 244547584. 
Throughput: 0: 43792.5. Samples: 244657360. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 19:38:38,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:38:39,785][46990] Updated weights for policy 0, policy_version 14930 (0.0050) +[2024-06-10 19:38:43,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43417.5, 300 sec: 43875.8). Total num frames: 244760576. Throughput: 0: 43958.3. Samples: 244925480. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 19:38:43,240][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:38:43,383][46990] Updated weights for policy 0, policy_version 14940 (0.0038) +[2024-06-10 19:38:47,675][46990] Updated weights for policy 0, policy_version 14950 (0.0039) +[2024-06-10 19:38:48,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 244957184. Throughput: 0: 43930.6. Samples: 245058720. Policy #0 lag: (min: 1.0, avg: 11.8, max: 22.0) +[2024-06-10 19:38:48,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:38:50,759][46990] Updated weights for policy 0, policy_version 14960 (0.0041) +[2024-06-10 19:38:53,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 245202944. Throughput: 0: 43579.6. Samples: 245313440. Policy #0 lag: (min: 1.0, avg: 11.8, max: 22.0) +[2024-06-10 19:38:53,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:38:55,083][46990] Updated weights for policy 0, policy_version 14970 (0.0023) +[2024-06-10 19:38:58,230][46990] Updated weights for policy 0, policy_version 14980 (0.0028) +[2024-06-10 19:38:58,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 245432320. Throughput: 0: 43709.8. Samples: 245579580. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:38:58,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:39:02,625][46990] Updated weights for policy 0, policy_version 14990 (0.0042) +[2024-06-10 19:39:03,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 245628928. Throughput: 0: 43756.7. Samples: 245711880. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 19:39:03,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:39:05,889][46990] Updated weights for policy 0, policy_version 15000 (0.0032) +[2024-06-10 19:39:08,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 43820.2). Total num frames: 245858304. Throughput: 0: 43751.0. Samples: 245971000. Policy #0 lag: (min: 0.0, avg: 11.9, max: 23.0) +[2024-06-10 19:39:08,244][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:39:09,791][46990] Updated weights for policy 0, policy_version 15010 (0.0029) +[2024-06-10 19:39:13,066][46990] Updated weights for policy 0, policy_version 15020 (0.0026) +[2024-06-10 19:39:13,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43963.7, 300 sec: 43932.0). Total num frames: 246104064. Throughput: 0: 43984.4. Samples: 246244480. Policy #0 lag: (min: 0.0, avg: 11.9, max: 23.0) +[2024-06-10 19:39:13,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:39:17,466][46990] Updated weights for policy 0, policy_version 15030 (0.0043) +[2024-06-10 19:39:18,244][46753] Fps is (10 sec: 40941.8, 60 sec: 43417.5, 300 sec: 43653.6). Total num frames: 246267904. Throughput: 0: 44052.0. Samples: 246377780. 
Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:39:18,244][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:39:20,507][46990] Updated weights for policy 0, policy_version 15040 (0.0043) +[2024-06-10 19:39:23,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43417.6, 300 sec: 43875.8). Total num frames: 246513664. Throughput: 0: 43948.6. Samples: 246635040. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:39:23,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:39:24,768][46990] Updated weights for policy 0, policy_version 15050 (0.0029) +[2024-06-10 19:39:27,912][46990] Updated weights for policy 0, policy_version 15060 (0.0033) +[2024-06-10 19:39:28,239][46753] Fps is (10 sec: 47535.4, 60 sec: 43963.8, 300 sec: 43876.5). Total num frames: 246743040. Throughput: 0: 43767.3. Samples: 246895000. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:39:28,240][46753] Avg episode reward: [(0, '0.203')] +[2024-06-10 19:39:31,966][46990] Updated weights for policy 0, policy_version 15070 (0.0037) +[2024-06-10 19:39:33,239][46753] Fps is (10 sec: 42597.7, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 246939648. Throughput: 0: 43969.7. Samples: 247037360. Policy #0 lag: (min: 1.0, avg: 12.1, max: 23.0) +[2024-06-10 19:39:33,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:39:35,321][46990] Updated weights for policy 0, policy_version 15080 (0.0042) +[2024-06-10 19:39:38,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 247185408. Throughput: 0: 44004.4. Samples: 247293640. Policy #0 lag: (min: 1.0, avg: 12.1, max: 23.0) +[2024-06-10 19:39:38,240][46753] Avg episode reward: [(0, '0.208')] +[2024-06-10 19:39:39,541][46990] Updated weights for policy 0, policy_version 15090 (0.0039) +[2024-06-10 19:39:42,742][46990] Updated weights for policy 0, policy_version 15100 (0.0024) +[2024-06-10 19:39:43,240][46753] Fps is (10 sec: 49151.8, 60 sec: 44509.8, 300 sec: 43986.8). Total num frames: 247431168. Throughput: 0: 44025.7. Samples: 247560740. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 19:39:43,240][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:39:43,861][46970] Signal inference workers to stop experience collection... (3500 times) +[2024-06-10 19:39:43,895][46990] InferenceWorker_p0-w0: stopping experience collection (3500 times) +[2024-06-10 19:39:43,919][46970] Signal inference workers to resume experience collection... (3500 times) +[2024-06-10 19:39:43,920][46990] InferenceWorker_p0-w0: resuming experience collection (3500 times) +[2024-06-10 19:39:47,568][46990] Updated weights for policy 0, policy_version 15110 (0.0029) +[2024-06-10 19:39:48,244][46753] Fps is (10 sec: 40941.5, 60 sec: 43960.4, 300 sec: 43708.5). Total num frames: 247595008. Throughput: 0: 43996.9. Samples: 247691940. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 19:39:48,245][46753] Avg episode reward: [(0, '0.201')] +[2024-06-10 19:39:50,429][46990] Updated weights for policy 0, policy_version 15120 (0.0043) +[2024-06-10 19:39:53,240][46753] Fps is (10 sec: 39321.7, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 247824384. Throughput: 0: 43943.5. Samples: 247948460. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:39:53,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:39:54,766][46990] Updated weights for policy 0, policy_version 15130 (0.0039) +[2024-06-10 19:39:57,750][46990] Updated weights for policy 0, policy_version 15140 (0.0042) +[2024-06-10 19:39:58,243][46753] Fps is (10 sec: 47516.9, 60 sec: 43961.0, 300 sec: 43930.8). Total num frames: 248070144. Throughput: 0: 43610.1. Samples: 248207100. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:39:58,244][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:40:01,908][46990] Updated weights for policy 0, policy_version 15150 (0.0045) +[2024-06-10 19:40:03,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 248250368. Throughput: 0: 43761.7. Samples: 248346860. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:40:03,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:40:05,509][46990] Updated weights for policy 0, policy_version 15160 (0.0036) +[2024-06-10 19:40:08,239][46753] Fps is (10 sec: 40975.9, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 248479744. Throughput: 0: 43691.1. Samples: 248601140. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:40:08,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:40:09,877][46990] Updated weights for policy 0, policy_version 15170 (0.0029) +[2024-06-10 19:40:12,762][46990] Updated weights for policy 0, policy_version 15180 (0.0033) +[2024-06-10 19:40:13,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 248725504. Throughput: 0: 43827.0. Samples: 248867220. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 19:40:13,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:40:17,671][46990] Updated weights for policy 0, policy_version 15190 (0.0030) +[2024-06-10 19:40:18,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43693.9, 300 sec: 43653.6). Total num frames: 248889344. Throughput: 0: 43702.7. Samples: 249003980. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 19:40:18,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:40:20,419][46990] Updated weights for policy 0, policy_version 15200 (0.0043) +[2024-06-10 19:40:23,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 249135104. Throughput: 0: 43658.1. Samples: 249258260. Policy #0 lag: (min: 0.0, avg: 10.6, max: 22.0) +[2024-06-10 19:40:23,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:40:23,273][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000015207_249151488.pth... +[2024-06-10 19:40:23,332][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000014565_238632960.pth +[2024-06-10 19:40:24,880][46990] Updated weights for policy 0, policy_version 15210 (0.0030) +[2024-06-10 19:40:27,708][46990] Updated weights for policy 0, policy_version 15220 (0.0042) +[2024-06-10 19:40:28,239][46753] Fps is (10 sec: 50790.3, 60 sec: 44236.7, 300 sec: 43931.3). Total num frames: 249397248. Throughput: 0: 43569.4. Samples: 249521360. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:40:28,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:40:32,021][46990] Updated weights for policy 0, policy_version 15230 (0.0038) +[2024-06-10 19:40:33,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 249561088. 
Throughput: 0: 43702.6. Samples: 249658360. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:40:33,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:40:35,240][46990] Updated weights for policy 0, policy_version 15240 (0.0031) +[2024-06-10 19:40:38,240][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.6, 300 sec: 43764.9). Total num frames: 249806848. Throughput: 0: 43735.6. Samples: 249916560. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 19:40:38,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:40:39,891][46990] Updated weights for policy 0, policy_version 15250 (0.0031) +[2024-06-10 19:40:42,600][46990] Updated weights for policy 0, policy_version 15260 (0.0030) +[2024-06-10 19:40:43,239][46753] Fps is (10 sec: 47514.1, 60 sec: 43417.8, 300 sec: 43875.8). Total num frames: 250036224. Throughput: 0: 43871.8. Samples: 250181160. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 19:40:43,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:40:47,772][46990] Updated weights for policy 0, policy_version 15270 (0.0039) +[2024-06-10 19:40:48,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43420.8, 300 sec: 43653.6). Total num frames: 250200064. Throughput: 0: 43682.2. Samples: 250312560. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:40:48,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:40:50,216][46970] Signal inference workers to stop experience collection... (3550 times) +[2024-06-10 19:40:50,267][46970] Signal inference workers to resume experience collection... (3550 times) +[2024-06-10 19:40:50,281][46990] InferenceWorker_p0-w0: stopping experience collection (3550 times) +[2024-06-10 19:40:50,317][46990] InferenceWorker_p0-w0: resuming experience collection (3550 times) +[2024-06-10 19:40:50,396][46990] Updated weights for policy 0, policy_version 15280 (0.0042) +[2024-06-10 19:40:53,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 250462208. Throughput: 0: 43743.1. Samples: 250569580. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:40:53,240][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:40:54,949][46990] Updated weights for policy 0, policy_version 15290 (0.0037) +[2024-06-10 19:40:57,637][46990] Updated weights for policy 0, policy_version 15300 (0.0027) +[2024-06-10 19:40:58,239][46753] Fps is (10 sec: 50790.4, 60 sec: 43966.5, 300 sec: 43931.3). Total num frames: 250707968. Throughput: 0: 43636.9. Samples: 250830880. Policy #0 lag: (min: 0.0, avg: 8.1, max: 22.0) +[2024-06-10 19:40:58,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:41:02,378][46990] Updated weights for policy 0, policy_version 15310 (0.0030) +[2024-06-10 19:41:03,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 250871808. Throughput: 0: 43604.0. Samples: 250966160. Policy #0 lag: (min: 0.0, avg: 8.1, max: 22.0) +[2024-06-10 19:41:03,240][46753] Avg episode reward: [(0, '0.208')] +[2024-06-10 19:41:05,066][46990] Updated weights for policy 0, policy_version 15320 (0.0027) +[2024-06-10 19:41:08,239][46753] Fps is (10 sec: 42598.6, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 251133952. Throughput: 0: 43899.7. Samples: 251233740. 
Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 19:41:08,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:41:09,697][46990] Updated weights for policy 0, policy_version 15330 (0.0032) +[2024-06-10 19:41:12,459][46990] Updated weights for policy 0, policy_version 15340 (0.0034) +[2024-06-10 19:41:13,239][46753] Fps is (10 sec: 49151.6, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 251363328. Throughput: 0: 43837.7. Samples: 251494060. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 19:41:13,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:41:17,086][46990] Updated weights for policy 0, policy_version 15350 (0.0037) +[2024-06-10 19:41:18,239][46753] Fps is (10 sec: 39321.2, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 251527168. Throughput: 0: 43806.1. Samples: 251629640. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 19:41:18,240][46753] Avg episode reward: [(0, '0.207')] +[2024-06-10 19:41:19,939][46990] Updated weights for policy 0, policy_version 15360 (0.0029) +[2024-06-10 19:41:23,239][46753] Fps is (10 sec: 42599.0, 60 sec: 44236.9, 300 sec: 43875.8). Total num frames: 251789312. Throughput: 0: 43900.6. Samples: 251892080. Policy #0 lag: (min: 1.0, avg: 9.6, max: 23.0) +[2024-06-10 19:41:23,240][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:41:24,726][46990] Updated weights for policy 0, policy_version 15370 (0.0031) +[2024-06-10 19:41:27,374][46990] Updated weights for policy 0, policy_version 15380 (0.0040) +[2024-06-10 19:41:28,239][46753] Fps is (10 sec: 49152.5, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 252018688. Throughput: 0: 43805.7. Samples: 252152420. Policy #0 lag: (min: 1.0, avg: 9.6, max: 23.0) +[2024-06-10 19:41:28,240][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:41:31,919][46990] Updated weights for policy 0, policy_version 15390 (0.0032) +[2024-06-10 19:41:33,240][46753] Fps is (10 sec: 39321.0, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 252182528. Throughput: 0: 44003.5. Samples: 252292720. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:41:33,240][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:41:34,881][46990] Updated weights for policy 0, policy_version 15400 (0.0042) +[2024-06-10 19:41:38,240][46753] Fps is (10 sec: 42596.3, 60 sec: 43963.5, 300 sec: 43875.7). Total num frames: 252444672. Throughput: 0: 44027.5. Samples: 252550840. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:41:38,240][46753] Avg episode reward: [(0, '0.218')] +[2024-06-10 19:41:39,690][46990] Updated weights for policy 0, policy_version 15410 (0.0035) +[2024-06-10 19:41:42,307][46990] Updated weights for policy 0, policy_version 15420 (0.0037) +[2024-06-10 19:41:43,239][46753] Fps is (10 sec: 50790.9, 60 sec: 44236.7, 300 sec: 43987.4). Total num frames: 252690432. Throughput: 0: 43976.9. Samples: 252809840. Policy #0 lag: (min: 1.0, avg: 9.2, max: 20.0) +[2024-06-10 19:41:43,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:41:46,985][46990] Updated weights for policy 0, policy_version 15430 (0.0028) +[2024-06-10 19:41:48,239][46753] Fps is (10 sec: 39323.7, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 252837888. Throughput: 0: 43979.6. Samples: 252945240. 
Policy #0 lag: (min: 1.0, avg: 9.2, max: 20.0) +[2024-06-10 19:41:48,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:41:49,631][46990] Updated weights for policy 0, policy_version 15440 (0.0025) +[2024-06-10 19:41:53,239][46753] Fps is (10 sec: 39321.4, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 253083648. Throughput: 0: 43759.5. Samples: 253202920. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 19:41:53,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:41:54,674][46990] Updated weights for policy 0, policy_version 15450 (0.0030) +[2024-06-10 19:41:56,764][46970] Signal inference workers to stop experience collection... (3600 times) +[2024-06-10 19:41:56,793][46990] InferenceWorker_p0-w0: stopping experience collection (3600 times) +[2024-06-10 19:41:56,818][46970] Signal inference workers to resume experience collection... (3600 times) +[2024-06-10 19:41:56,819][46990] InferenceWorker_p0-w0: resuming experience collection (3600 times) +[2024-06-10 19:41:57,246][46990] Updated weights for policy 0, policy_version 15460 (0.0026) +[2024-06-10 19:41:58,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 253329408. Throughput: 0: 43771.7. Samples: 253463780. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 19:41:58,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:42:02,142][46990] Updated weights for policy 0, policy_version 15470 (0.0036) +[2024-06-10 19:42:03,239][46753] Fps is (10 sec: 44237.0, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 253526016. Throughput: 0: 43857.4. Samples: 253603220. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 19:42:03,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:42:04,993][46990] Updated weights for policy 0, policy_version 15480 (0.0032) +[2024-06-10 19:42:08,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 253755392. Throughput: 0: 43834.1. Samples: 253864620. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 19:42:08,240][46753] Avg episode reward: [(0, '0.209')] +[2024-06-10 19:42:09,713][46990] Updated weights for policy 0, policy_version 15490 (0.0047) +[2024-06-10 19:42:12,364][46990] Updated weights for policy 0, policy_version 15500 (0.0033) +[2024-06-10 19:42:13,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 253984768. Throughput: 0: 43873.3. Samples: 254126720. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 19:42:13,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:42:17,174][46990] Updated weights for policy 0, policy_version 15510 (0.0033) +[2024-06-10 19:42:18,239][46753] Fps is (10 sec: 42599.0, 60 sec: 44236.9, 300 sec: 43820.3). Total num frames: 254181376. Throughput: 0: 43770.0. Samples: 254262360. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 19:42:18,240][46753] Avg episode reward: [(0, '0.204')] +[2024-06-10 19:42:19,511][46990] Updated weights for policy 0, policy_version 15520 (0.0027) +[2024-06-10 19:42:23,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43417.6, 300 sec: 43820.2). Total num frames: 254394368. Throughput: 0: 43778.6. Samples: 254520860. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 19:42:23,240][46753] Avg episode reward: [(0, '0.209')] +[2024-06-10 19:42:23,268][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000015527_254394368.pth... 
+[2024-06-10 19:42:23,320][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000014887_243908608.pth +[2024-06-10 19:42:24,501][46990] Updated weights for policy 0, policy_version 15530 (0.0040) +[2024-06-10 19:42:27,307][46990] Updated weights for policy 0, policy_version 15540 (0.0034) +[2024-06-10 19:42:28,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.6, 300 sec: 43876.1). Total num frames: 254640128. Throughput: 0: 43872.0. Samples: 254784080. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:42:28,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:42:31,863][46990] Updated weights for policy 0, policy_version 15550 (0.0040) +[2024-06-10 19:42:33,244][46753] Fps is (10 sec: 45854.6, 60 sec: 44506.6, 300 sec: 43819.6). Total num frames: 254853120. Throughput: 0: 43985.3. Samples: 254924780. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 19:42:33,244][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:42:34,946][46990] Updated weights for policy 0, policy_version 15560 (0.0030) +[2024-06-10 19:42:38,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43417.9, 300 sec: 43709.2). Total num frames: 255049728. Throughput: 0: 43976.5. Samples: 255181860. Policy #0 lag: (min: 0.0, avg: 12.1, max: 21.0) +[2024-06-10 19:42:38,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:42:39,483][46990] Updated weights for policy 0, policy_version 15570 (0.0024) +[2024-06-10 19:42:42,249][46990] Updated weights for policy 0, policy_version 15580 (0.0038) +[2024-06-10 19:42:43,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43417.6, 300 sec: 43931.3). Total num frames: 255295488. Throughput: 0: 43898.7. Samples: 255439220. Policy #0 lag: (min: 0.0, avg: 12.1, max: 21.0) +[2024-06-10 19:42:43,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:42:47,097][46990] Updated weights for policy 0, policy_version 15590 (0.0043) +[2024-06-10 19:42:48,244][46753] Fps is (10 sec: 45854.6, 60 sec: 44506.5, 300 sec: 43819.6). Total num frames: 255508480. Throughput: 0: 43783.6. Samples: 255573680. Policy #0 lag: (min: 0.0, avg: 12.1, max: 21.0) +[2024-06-10 19:42:48,245][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:42:49,659][46990] Updated weights for policy 0, policy_version 15600 (0.0033) +[2024-06-10 19:42:53,244][46753] Fps is (10 sec: 40941.5, 60 sec: 43687.4, 300 sec: 43764.1). Total num frames: 255705088. Throughput: 0: 43557.5. Samples: 255824900. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:42:53,244][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:42:54,598][46990] Updated weights for policy 0, policy_version 15610 (0.0030) +[2024-06-10 19:42:57,611][46990] Updated weights for policy 0, policy_version 15620 (0.0043) +[2024-06-10 19:42:58,239][46753] Fps is (10 sec: 44256.6, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 255950848. Throughput: 0: 43697.3. Samples: 256093100. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:42:58,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:43:02,092][46990] Updated weights for policy 0, policy_version 15630 (0.0034) +[2024-06-10 19:43:03,240][46753] Fps is (10 sec: 45895.2, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 256163840. Throughput: 0: 43664.2. Samples: 256227260. 
Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 19:43:03,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:43:05,143][46990] Updated weights for policy 0, policy_version 15640 (0.0028) +[2024-06-10 19:43:08,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43144.6, 300 sec: 43653.7). Total num frames: 256344064. Throughput: 0: 43718.7. Samples: 256488200. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 19:43:08,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:43:09,598][46990] Updated weights for policy 0, policy_version 15650 (0.0055) +[2024-06-10 19:43:12,536][46990] Updated weights for policy 0, policy_version 15660 (0.0037) +[2024-06-10 19:43:13,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43417.6, 300 sec: 43820.9). Total num frames: 256589824. Throughput: 0: 43724.1. Samples: 256751660. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:43:13,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:43:17,196][46990] Updated weights for policy 0, policy_version 15670 (0.0025) +[2024-06-10 19:43:18,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 256819200. Throughput: 0: 43621.7. Samples: 256887560. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:43:18,240][46753] Avg episode reward: [(0, '0.206')] +[2024-06-10 19:43:19,999][46990] Updated weights for policy 0, policy_version 15680 (0.0028) +[2024-06-10 19:43:23,240][46753] Fps is (10 sec: 40959.3, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 256999424. Throughput: 0: 43451.9. Samples: 257137200. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 19:43:23,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:43:24,434][46990] Updated weights for policy 0, policy_version 15690 (0.0031) +[2024-06-10 19:43:24,908][46970] Signal inference workers to stop experience collection... (3650 times) +[2024-06-10 19:43:24,908][46970] Signal inference workers to resume experience collection... (3650 times) +[2024-06-10 19:43:24,947][46990] InferenceWorker_p0-w0: stopping experience collection (3650 times) +[2024-06-10 19:43:24,947][46990] InferenceWorker_p0-w0: resuming experience collection (3650 times) +[2024-06-10 19:43:27,644][46990] Updated weights for policy 0, policy_version 15700 (0.0036) +[2024-06-10 19:43:28,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 257261568. Throughput: 0: 43760.0. Samples: 257408420. Policy #0 lag: (min: 0.0, avg: 7.8, max: 21.0) +[2024-06-10 19:43:28,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:43:31,675][46990] Updated weights for policy 0, policy_version 15710 (0.0027) +[2024-06-10 19:43:33,239][46753] Fps is (10 sec: 47514.3, 60 sec: 43694.0, 300 sec: 43820.3). Total num frames: 257474560. Throughput: 0: 43769.3. Samples: 257543100. Policy #0 lag: (min: 0.0, avg: 7.8, max: 21.0) +[2024-06-10 19:43:33,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 19:43:33,252][46970] Saving new best policy, reward=0.234! +[2024-06-10 19:43:35,236][46990] Updated weights for policy 0, policy_version 15720 (0.0039) +[2024-06-10 19:43:38,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 257671168. Throughput: 0: 43786.2. Samples: 257795080. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 19:43:38,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:43:39,503][46990] Updated weights for policy 0, policy_version 15730 (0.0041) +[2024-06-10 19:43:42,795][46990] Updated weights for policy 0, policy_version 15740 (0.0038) +[2024-06-10 19:43:43,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43417.6, 300 sec: 43875.8). Total num frames: 257900544. Throughput: 0: 43674.3. Samples: 258058440. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 19:43:43,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:43:47,135][46990] Updated weights for policy 0, policy_version 15750 (0.0050) +[2024-06-10 19:43:48,240][46753] Fps is (10 sec: 45874.0, 60 sec: 43693.8, 300 sec: 43820.2). Total num frames: 258129920. Throughput: 0: 43585.3. Samples: 258188600. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 19:43:48,241][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:43:50,241][46990] Updated weights for policy 0, policy_version 15760 (0.0056) +[2024-06-10 19:43:53,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43693.9, 300 sec: 43709.2). Total num frames: 258326528. Throughput: 0: 43487.1. Samples: 258445120. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 19:43:53,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:43:54,595][46990] Updated weights for policy 0, policy_version 15770 (0.0041) +[2024-06-10 19:43:57,926][46990] Updated weights for policy 0, policy_version 15780 (0.0042) +[2024-06-10 19:43:58,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.6, 300 sec: 43820.2). Total num frames: 258555904. Throughput: 0: 43557.7. Samples: 258711760. Policy #0 lag: (min: 1.0, avg: 9.5, max: 20.0) +[2024-06-10 19:43:58,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:44:01,878][46990] Updated weights for policy 0, policy_version 15790 (0.0039) +[2024-06-10 19:44:03,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 258785280. Throughput: 0: 43567.9. Samples: 258848120. Policy #0 lag: (min: 1.0, avg: 9.5, max: 20.0) +[2024-06-10 19:44:03,248][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:44:05,315][46990] Updated weights for policy 0, policy_version 15800 (0.0033) +[2024-06-10 19:44:08,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 258981888. Throughput: 0: 43782.3. Samples: 259107400. Policy #0 lag: (min: 1.0, avg: 9.5, max: 20.0) +[2024-06-10 19:44:08,240][46753] Avg episode reward: [(0, '0.203')] +[2024-06-10 19:44:09,357][46990] Updated weights for policy 0, policy_version 15810 (0.0034) +[2024-06-10 19:44:12,721][46990] Updated weights for policy 0, policy_version 15820 (0.0048) +[2024-06-10 19:44:13,240][46753] Fps is (10 sec: 44236.8, 60 sec: 43963.6, 300 sec: 43932.0). Total num frames: 259227648. Throughput: 0: 43718.5. Samples: 259375760. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 19:44:13,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:44:16,903][46990] Updated weights for policy 0, policy_version 15830 (0.0041) +[2024-06-10 19:44:18,239][46753] Fps is (10 sec: 45875.9, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 259440640. Throughput: 0: 43710.3. Samples: 259510060. 
Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 19:44:18,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:44:20,001][46990] Updated weights for policy 0, policy_version 15840 (0.0047) +[2024-06-10 19:44:23,240][46753] Fps is (10 sec: 40960.2, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 259637248. Throughput: 0: 43726.5. Samples: 259762780. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:44:23,240][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:44:23,253][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000015847_259637248.pth... +[2024-06-10 19:44:23,300][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000015207_249151488.pth +[2024-06-10 19:44:24,269][46990] Updated weights for policy 0, policy_version 15850 (0.0039) +[2024-06-10 19:44:27,540][46990] Updated weights for policy 0, policy_version 15860 (0.0041) +[2024-06-10 19:44:28,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 259866624. Throughput: 0: 43874.2. Samples: 260032780. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:44:28,240][46753] Avg episode reward: [(0, '0.218')] +[2024-06-10 19:44:28,271][46970] Signal inference workers to stop experience collection... (3700 times) +[2024-06-10 19:44:28,316][46990] InferenceWorker_p0-w0: stopping experience collection (3700 times) +[2024-06-10 19:44:28,323][46970] Signal inference workers to resume experience collection... (3700 times) +[2024-06-10 19:44:28,332][46990] InferenceWorker_p0-w0: resuming experience collection (3700 times) +[2024-06-10 19:44:31,781][46990] Updated weights for policy 0, policy_version 15870 (0.0030) +[2024-06-10 19:44:33,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 260096000. Throughput: 0: 43849.1. Samples: 260161800. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 19:44:33,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:44:35,233][46990] Updated weights for policy 0, policy_version 15880 (0.0041) +[2024-06-10 19:44:38,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 260308992. Throughput: 0: 43957.4. Samples: 260423200. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 19:44:38,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:44:39,033][46990] Updated weights for policy 0, policy_version 15890 (0.0038) +[2024-06-10 19:44:42,538][46990] Updated weights for policy 0, policy_version 15900 (0.0039) +[2024-06-10 19:44:43,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.6, 300 sec: 43820.9). Total num frames: 260521984. Throughput: 0: 43803.6. Samples: 260682920. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 19:44:43,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:44:47,114][46990] Updated weights for policy 0, policy_version 15910 (0.0024) +[2024-06-10 19:44:48,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.9, 300 sec: 43875.8). Total num frames: 260767744. Throughput: 0: 43743.7. Samples: 260816580. Policy #0 lag: (min: 1.0, avg: 10.5, max: 19.0) +[2024-06-10 19:44:48,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:44:50,017][46990] Updated weights for policy 0, policy_version 15920 (0.0033) +[2024-06-10 19:44:53,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43654.2). Total num frames: 260947968. Throughput: 0: 43815.6. Samples: 261079100. 
Policy #0 lag: (min: 1.0, avg: 10.5, max: 19.0) +[2024-06-10 19:44:53,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:44:54,265][46990] Updated weights for policy 0, policy_version 15930 (0.0029) +[2024-06-10 19:44:57,195][46990] Updated weights for policy 0, policy_version 15940 (0.0039) +[2024-06-10 19:44:58,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 261177344. Throughput: 0: 43760.1. Samples: 261344960. Policy #0 lag: (min: 1.0, avg: 8.7, max: 22.0) +[2024-06-10 19:44:58,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:45:01,787][46990] Updated weights for policy 0, policy_version 15950 (0.0039) +[2024-06-10 19:45:03,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.8, 300 sec: 43820.2). Total num frames: 261406720. Throughput: 0: 43611.0. Samples: 261472560. Policy #0 lag: (min: 1.0, avg: 8.7, max: 22.0) +[2024-06-10 19:45:03,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:45:04,779][46990] Updated weights for policy 0, policy_version 15960 (0.0037) +[2024-06-10 19:45:08,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 261619712. Throughput: 0: 43867.2. Samples: 261736800. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:45:08,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:45:09,580][46990] Updated weights for policy 0, policy_version 15970 (0.0038) +[2024-06-10 19:45:12,127][46990] Updated weights for policy 0, policy_version 15980 (0.0029) +[2024-06-10 19:45:13,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 261849088. Throughput: 0: 43689.2. Samples: 261998800. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:45:13,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:45:17,003][46990] Updated weights for policy 0, policy_version 15990 (0.0044) +[2024-06-10 19:45:18,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 262045696. Throughput: 0: 43747.4. Samples: 262130440. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 19:45:18,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:45:19,799][46990] Updated weights for policy 0, policy_version 16000 (0.0038) +[2024-06-10 19:45:23,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 262275072. Throughput: 0: 43788.9. Samples: 262393700. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 19:45:23,240][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:45:24,156][46990] Updated weights for policy 0, policy_version 16010 (0.0041) +[2024-06-10 19:45:26,978][46990] Updated weights for policy 0, policy_version 16020 (0.0037) +[2024-06-10 19:45:28,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 262488064. Throughput: 0: 44009.4. Samples: 262663340. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 19:45:28,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:45:31,663][46990] Updated weights for policy 0, policy_version 16030 (0.0033) +[2024-06-10 19:45:33,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 262717440. Throughput: 0: 43830.6. Samples: 262788960. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 19:45:33,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:45:34,297][46990] Updated weights for policy 0, policy_version 16040 (0.0033) +[2024-06-10 19:45:38,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 262946816. Throughput: 0: 43850.6. Samples: 263052380. Policy #0 lag: (min: 0.0, avg: 9.4, max: 20.0) +[2024-06-10 19:45:38,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:45:39,280][46990] Updated weights for policy 0, policy_version 16050 (0.0034) +[2024-06-10 19:45:42,044][46990] Updated weights for policy 0, policy_version 16060 (0.0043) +[2024-06-10 19:45:43,244][46753] Fps is (10 sec: 42579.4, 60 sec: 43687.4, 300 sec: 43875.1). Total num frames: 263143424. Throughput: 0: 43737.4. Samples: 263313340. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 19:45:43,245][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:45:46,469][46990] Updated weights for policy 0, policy_version 16070 (0.0035) +[2024-06-10 19:45:48,239][46753] Fps is (10 sec: 40960.9, 60 sec: 43144.6, 300 sec: 43709.2). Total num frames: 263356416. Throughput: 0: 43833.0. Samples: 263445040. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 19:45:48,240][46753] Avg episode reward: [(0, '0.203')] +[2024-06-10 19:45:49,841][46990] Updated weights for policy 0, policy_version 16080 (0.0028) +[2024-06-10 19:45:53,240][46753] Fps is (10 sec: 44256.1, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 263585792. Throughput: 0: 43815.8. Samples: 263708520. Policy #0 lag: (min: 1.0, avg: 11.2, max: 23.0) +[2024-06-10 19:45:53,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:45:53,665][46990] Updated weights for policy 0, policy_version 16090 (0.0023) +[2024-06-10 19:45:55,360][46970] Signal inference workers to stop experience collection... (3750 times) +[2024-06-10 19:45:55,412][46990] InferenceWorker_p0-w0: stopping experience collection (3750 times) +[2024-06-10 19:45:55,474][46970] Signal inference workers to resume experience collection... (3750 times) +[2024-06-10 19:45:55,475][46990] InferenceWorker_p0-w0: resuming experience collection (3750 times) +[2024-06-10 19:45:57,070][46990] Updated weights for policy 0, policy_version 16100 (0.0029) +[2024-06-10 19:45:58,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 263798784. Throughput: 0: 43819.2. Samples: 263970660. Policy #0 lag: (min: 1.0, avg: 11.2, max: 23.0) +[2024-06-10 19:45:58,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:46:01,147][46990] Updated weights for policy 0, policy_version 16110 (0.0039) +[2024-06-10 19:46:03,240][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 264028160. Throughput: 0: 43744.0. Samples: 264098920. Policy #0 lag: (min: 1.0, avg: 11.2, max: 23.0) +[2024-06-10 19:46:03,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:46:04,361][46990] Updated weights for policy 0, policy_version 16120 (0.0027) +[2024-06-10 19:46:08,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 264241152. Throughput: 0: 43732.5. Samples: 264361660. 
Policy #0 lag: (min: 3.0, avg: 12.0, max: 21.0) +[2024-06-10 19:46:08,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:46:08,926][46990] Updated weights for policy 0, policy_version 16130 (0.0042) +[2024-06-10 19:46:12,092][46990] Updated weights for policy 0, policy_version 16140 (0.0032) +[2024-06-10 19:46:13,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 264454144. Throughput: 0: 43598.6. Samples: 264625280. Policy #0 lag: (min: 3.0, avg: 12.0, max: 21.0) +[2024-06-10 19:46:13,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:46:16,179][46990] Updated weights for policy 0, policy_version 16150 (0.0027) +[2024-06-10 19:46:18,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 264683520. Throughput: 0: 43749.8. Samples: 264757700. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 19:46:18,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 19:46:18,240][46970] Saving new best policy, reward=0.237! +[2024-06-10 19:46:19,968][46990] Updated weights for policy 0, policy_version 16160 (0.0023) +[2024-06-10 19:46:23,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 264896512. Throughput: 0: 43631.1. Samples: 265015780. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 19:46:23,241][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:46:23,254][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000016168_264896512.pth... +[2024-06-10 19:46:23,335][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000015527_254394368.pth +[2024-06-10 19:46:23,596][46990] Updated weights for policy 0, policy_version 16170 (0.0034) +[2024-06-10 19:46:27,232][46990] Updated weights for policy 0, policy_version 16180 (0.0037) +[2024-06-10 19:46:28,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 265109504. Throughput: 0: 43623.0. Samples: 265276180. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 19:46:28,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:46:30,998][46990] Updated weights for policy 0, policy_version 16190 (0.0030) +[2024-06-10 19:46:33,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 265322496. Throughput: 0: 43621.3. Samples: 265408000. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 19:46:33,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:46:34,829][46990] Updated weights for policy 0, policy_version 16200 (0.0038) +[2024-06-10 19:46:38,240][46753] Fps is (10 sec: 44236.5, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 265551872. Throughput: 0: 43548.0. Samples: 265668180. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 19:46:38,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:46:38,864][46990] Updated weights for policy 0, policy_version 16210 (0.0032) +[2024-06-10 19:46:42,580][46990] Updated weights for policy 0, policy_version 16220 (0.0033) +[2024-06-10 19:46:43,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43694.0, 300 sec: 43820.3). Total num frames: 265764864. Throughput: 0: 43556.9. Samples: 265930720. 
Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 19:46:43,240][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:46:46,053][46990] Updated weights for policy 0, policy_version 16230 (0.0037) +[2024-06-10 19:46:48,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 265994240. Throughput: 0: 43655.6. Samples: 266063420. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 19:46:48,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:46:49,946][46990] Updated weights for policy 0, policy_version 16240 (0.0037) +[2024-06-10 19:46:53,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 266207232. Throughput: 0: 43683.4. Samples: 266327420. Policy #0 lag: (min: 1.0, avg: 9.8, max: 19.0) +[2024-06-10 19:46:53,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:46:53,670][46990] Updated weights for policy 0, policy_version 16250 (0.0034) +[2024-06-10 19:46:57,102][46990] Updated weights for policy 0, policy_version 16260 (0.0027) +[2024-06-10 19:46:58,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 266420224. Throughput: 0: 43737.3. Samples: 266593460. Policy #0 lag: (min: 1.0, avg: 9.8, max: 19.0) +[2024-06-10 19:46:58,240][46753] Avg episode reward: [(0, '0.218')] +[2024-06-10 19:47:00,856][46990] Updated weights for policy 0, policy_version 16270 (0.0033) +[2024-06-10 19:47:03,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 266649600. Throughput: 0: 43693.0. Samples: 266723880. Policy #0 lag: (min: 1.0, avg: 9.8, max: 19.0) +[2024-06-10 19:47:03,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:47:04,835][46990] Updated weights for policy 0, policy_version 16280 (0.0037) +[2024-06-10 19:47:08,240][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 266862592. Throughput: 0: 43708.4. Samples: 266982660. Policy #0 lag: (min: 0.0, avg: 10.3, max: 23.0) +[2024-06-10 19:47:08,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 19:47:08,784][46990] Updated weights for policy 0, policy_version 16290 (0.0042) +[2024-06-10 19:47:12,391][46990] Updated weights for policy 0, policy_version 16300 (0.0038) +[2024-06-10 19:47:13,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 267091968. Throughput: 0: 43854.3. Samples: 267249620. Policy #0 lag: (min: 0.0, avg: 10.3, max: 23.0) +[2024-06-10 19:47:13,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:47:16,094][46990] Updated weights for policy 0, policy_version 16310 (0.0035) +[2024-06-10 19:47:18,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 267304960. Throughput: 0: 43819.0. Samples: 267379860. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:47:18,240][46753] Avg episode reward: [(0, '0.210')] +[2024-06-10 19:47:19,855][46990] Updated weights for policy 0, policy_version 16320 (0.0033) +[2024-06-10 19:47:23,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 267517952. Throughput: 0: 43814.7. Samples: 267639840. 
Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:47:23,243][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:47:23,859][46990] Updated weights for policy 0, policy_version 16330 (0.0032) +[2024-06-10 19:47:27,173][46990] Updated weights for policy 0, policy_version 16340 (0.0033) +[2024-06-10 19:47:28,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.8, 300 sec: 43654.3). Total num frames: 267730944. Throughput: 0: 43765.8. Samples: 267900180. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:47:28,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:47:28,562][46970] Signal inference workers to stop experience collection... (3800 times) +[2024-06-10 19:47:28,563][46970] Signal inference workers to resume experience collection... (3800 times) +[2024-06-10 19:47:28,610][46990] InferenceWorker_p0-w0: stopping experience collection (3800 times) +[2024-06-10 19:47:28,610][46990] InferenceWorker_p0-w0: resuming experience collection (3800 times) +[2024-06-10 19:47:31,331][46990] Updated weights for policy 0, policy_version 16350 (0.0036) +[2024-06-10 19:47:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 267960320. Throughput: 0: 43838.7. Samples: 268036160. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:47:33,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:47:34,730][46990] Updated weights for policy 0, policy_version 16360 (0.0035) +[2024-06-10 19:47:38,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 268173312. Throughput: 0: 43637.4. Samples: 268291100. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:47:38,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:47:38,555][46990] Updated weights for policy 0, policy_version 16370 (0.0037) +[2024-06-10 19:47:42,406][46990] Updated weights for policy 0, policy_version 16380 (0.0028) +[2024-06-10 19:47:43,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.6, 300 sec: 43654.3). Total num frames: 268386304. Throughput: 0: 43764.5. Samples: 268562860. Policy #0 lag: (min: 1.0, avg: 10.5, max: 22.0) +[2024-06-10 19:47:43,240][46753] Avg episode reward: [(0, '0.218')] +[2024-06-10 19:47:46,002][46990] Updated weights for policy 0, policy_version 16390 (0.0039) +[2024-06-10 19:47:48,244][46753] Fps is (10 sec: 44217.1, 60 sec: 43687.4, 300 sec: 43764.7). Total num frames: 268615680. Throughput: 0: 43697.3. Samples: 268690460. Policy #0 lag: (min: 1.0, avg: 10.5, max: 22.0) +[2024-06-10 19:47:48,245][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:47:49,610][46990] Updated weights for policy 0, policy_version 16400 (0.0040) +[2024-06-10 19:47:53,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 268828672. Throughput: 0: 43855.6. Samples: 268956160. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 19:47:53,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:47:53,548][46990] Updated weights for policy 0, policy_version 16410 (0.0035) +[2024-06-10 19:47:57,354][46990] Updated weights for policy 0, policy_version 16420 (0.0037) +[2024-06-10 19:47:58,239][46753] Fps is (10 sec: 44256.7, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 269058048. Throughput: 0: 43739.1. Samples: 269217880. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 19:47:58,240][46753] Avg episode reward: [(0, '0.218')] +[2024-06-10 19:48:01,230][46990] Updated weights for policy 0, policy_version 16430 (0.0032) +[2024-06-10 19:48:03,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 269287424. Throughput: 0: 43811.2. Samples: 269351360. Policy #0 lag: (min: 0.0, avg: 10.3, max: 23.0) +[2024-06-10 19:48:03,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:48:04,804][46990] Updated weights for policy 0, policy_version 16440 (0.0029) +[2024-06-10 19:48:08,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 269484032. Throughput: 0: 43752.1. Samples: 269608680. Policy #0 lag: (min: 0.0, avg: 10.3, max: 23.0) +[2024-06-10 19:48:08,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:48:08,524][46990] Updated weights for policy 0, policy_version 16450 (0.0032) +[2024-06-10 19:48:12,260][46990] Updated weights for policy 0, policy_version 16460 (0.0044) +[2024-06-10 19:48:13,240][46753] Fps is (10 sec: 44234.6, 60 sec: 43963.4, 300 sec: 43764.7). Total num frames: 269729792. Throughput: 0: 43928.4. Samples: 269876980. Policy #0 lag: (min: 0.0, avg: 10.3, max: 23.0) +[2024-06-10 19:48:13,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:48:15,921][46990] Updated weights for policy 0, policy_version 16470 (0.0043) +[2024-06-10 19:48:18,240][46753] Fps is (10 sec: 45874.7, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 269942784. Throughput: 0: 43729.3. Samples: 270003980. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:48:18,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 19:48:19,605][46990] Updated weights for policy 0, policy_version 16480 (0.0039) +[2024-06-10 19:48:23,239][46753] Fps is (10 sec: 42600.1, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 270155776. Throughput: 0: 44009.3. Samples: 270271520. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 19:48:23,241][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:48:23,254][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000016489_270155776.pth... +[2024-06-10 19:48:23,312][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000015847_259637248.pth +[2024-06-10 19:48:23,484][46990] Updated weights for policy 0, policy_version 16490 (0.0038) +[2024-06-10 19:48:27,075][46990] Updated weights for policy 0, policy_version 16500 (0.0025) +[2024-06-10 19:48:28,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 270368768. Throughput: 0: 43823.6. Samples: 270534920. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:48:28,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:48:30,768][46990] Updated weights for policy 0, policy_version 16510 (0.0031) +[2024-06-10 19:48:33,244][46753] Fps is (10 sec: 44217.2, 60 sec: 43960.4, 300 sec: 43819.6). Total num frames: 270598144. Throughput: 0: 43841.3. Samples: 270663320. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 19:48:33,244][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:48:34,485][46990] Updated weights for policy 0, policy_version 16520 (0.0023) +[2024-06-10 19:48:38,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 270811136. Throughput: 0: 43854.1. Samples: 270929600. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:48:38,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:48:38,365][46990] Updated weights for policy 0, policy_version 16530 (0.0034) +[2024-06-10 19:48:42,311][46990] Updated weights for policy 0, policy_version 16540 (0.0038) +[2024-06-10 19:48:43,239][46753] Fps is (10 sec: 44256.9, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 271040512. Throughput: 0: 43905.4. Samples: 271193620. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:48:43,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:48:45,578][46970] Signal inference workers to stop experience collection... (3850 times) +[2024-06-10 19:48:45,578][46970] Signal inference workers to resume experience collection... (3850 times) +[2024-06-10 19:48:45,588][46990] InferenceWorker_p0-w0: stopping experience collection (3850 times) +[2024-06-10 19:48:45,588][46990] InferenceWorker_p0-w0: resuming experience collection (3850 times) +[2024-06-10 19:48:45,727][46990] Updated weights for policy 0, policy_version 16550 (0.0046) +[2024-06-10 19:48:48,244][46753] Fps is (10 sec: 44217.5, 60 sec: 43963.7, 300 sec: 43819.6). Total num frames: 271253504. Throughput: 0: 43758.7. Samples: 271320700. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 19:48:48,245][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:48:49,573][46990] Updated weights for policy 0, policy_version 16560 (0.0025) +[2024-06-10 19:48:53,240][46753] Fps is (10 sec: 42597.4, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 271466496. Throughput: 0: 44024.7. Samples: 271589800. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 19:48:53,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:48:53,386][46990] Updated weights for policy 0, policy_version 16570 (0.0043) +[2024-06-10 19:48:57,062][46990] Updated weights for policy 0, policy_version 16580 (0.0032) +[2024-06-10 19:48:58,239][46753] Fps is (10 sec: 40978.6, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 271663104. Throughput: 0: 43820.5. Samples: 271848880. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 19:48:58,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:49:00,716][46990] Updated weights for policy 0, policy_version 16590 (0.0035) +[2024-06-10 19:49:03,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.4, 300 sec: 43820.2). Total num frames: 271908864. Throughput: 0: 43831.4. Samples: 271976400. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:49:03,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:49:04,421][46990] Updated weights for policy 0, policy_version 16600 (0.0035) +[2024-06-10 19:49:08,074][46990] Updated weights for policy 0, policy_version 16610 (0.0029) +[2024-06-10 19:49:08,239][46753] Fps is (10 sec: 47513.8, 60 sec: 44236.9, 300 sec: 43764.8). Total num frames: 272138240. Throughput: 0: 43791.3. Samples: 272242120. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:49:08,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:49:12,099][46990] Updated weights for policy 0, policy_version 16620 (0.0047) +[2024-06-10 19:49:13,239][46753] Fps is (10 sec: 44238.1, 60 sec: 43691.0, 300 sec: 43764.7). Total num frames: 272351232. Throughput: 0: 43744.9. Samples: 272503440. 
Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:49:13,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:49:15,709][46990] Updated weights for policy 0, policy_version 16630 (0.0036) +[2024-06-10 19:49:18,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 272564224. Throughput: 0: 43735.1. Samples: 272631200. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:49:18,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:49:19,568][46990] Updated weights for policy 0, policy_version 16640 (0.0032) +[2024-06-10 19:49:23,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 272777216. Throughput: 0: 43795.7. Samples: 272900400. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:49:23,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:49:23,324][46990] Updated weights for policy 0, policy_version 16650 (0.0041) +[2024-06-10 19:49:26,810][46990] Updated weights for policy 0, policy_version 16660 (0.0031) +[2024-06-10 19:49:28,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 272990208. Throughput: 0: 43804.3. Samples: 273164820. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:49:28,241][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 19:49:30,594][46990] Updated weights for policy 0, policy_version 16670 (0.0039) +[2024-06-10 19:49:33,240][46753] Fps is (10 sec: 44234.4, 60 sec: 43693.6, 300 sec: 43764.6). Total num frames: 273219584. Throughput: 0: 43703.5. Samples: 273287180. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:49:33,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:49:34,253][46990] Updated weights for policy 0, policy_version 16680 (0.0034) +[2024-06-10 19:49:37,857][46990] Updated weights for policy 0, policy_version 16690 (0.0028) +[2024-06-10 19:49:38,239][46753] Fps is (10 sec: 47513.7, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 273465344. Throughput: 0: 43849.0. Samples: 273563000. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:49:38,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:49:41,693][46990] Updated weights for policy 0, policy_version 16700 (0.0040) +[2024-06-10 19:49:43,240][46753] Fps is (10 sec: 44238.2, 60 sec: 43690.5, 300 sec: 43709.2). Total num frames: 273661952. Throughput: 0: 43901.2. Samples: 273824440. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:49:43,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:49:45,551][46990] Updated weights for policy 0, policy_version 16710 (0.0043) +[2024-06-10 19:49:48,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43694.0, 300 sec: 43820.3). Total num frames: 273874944. Throughput: 0: 43929.2. Samples: 273953200. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 19:49:48,240][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:49:49,380][46990] Updated weights for policy 0, policy_version 16720 (0.0045) +[2024-06-10 19:49:52,881][46990] Updated weights for policy 0, policy_version 16730 (0.0037) +[2024-06-10 19:49:53,239][46753] Fps is (10 sec: 44237.7, 60 sec: 43963.9, 300 sec: 43820.3). Total num frames: 274104320. Throughput: 0: 43905.8. Samples: 274217880. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:49:53,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:49:56,592][46990] Updated weights for policy 0, policy_version 16740 (0.0032) +[2024-06-10 19:49:58,240][46753] Fps is (10 sec: 42596.2, 60 sec: 43963.4, 300 sec: 43709.1). Total num frames: 274300928. Throughput: 0: 44006.2. Samples: 274483740. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:49:58,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:50:00,475][46990] Updated weights for policy 0, policy_version 16750 (0.0030) +[2024-06-10 19:50:03,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.9, 300 sec: 43764.7). Total num frames: 274530304. Throughput: 0: 44008.4. Samples: 274611580. Policy #0 lag: (min: 0.0, avg: 10.9, max: 24.0) +[2024-06-10 19:50:03,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:50:03,910][46990] Updated weights for policy 0, policy_version 16760 (0.0036) +[2024-06-10 19:50:07,956][46990] Updated weights for policy 0, policy_version 16770 (0.0031) +[2024-06-10 19:50:08,239][46753] Fps is (10 sec: 45877.6, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 274759680. Throughput: 0: 43861.3. Samples: 274874160. Policy #0 lag: (min: 0.0, avg: 10.9, max: 24.0) +[2024-06-10 19:50:08,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:50:11,757][46990] Updated weights for policy 0, policy_version 16780 (0.0040) +[2024-06-10 19:50:13,241][46753] Fps is (10 sec: 44228.7, 60 sec: 43689.3, 300 sec: 43820.0). Total num frames: 274972672. Throughput: 0: 43850.3. Samples: 275138160. Policy #0 lag: (min: 0.0, avg: 10.9, max: 24.0) +[2024-06-10 19:50:13,242][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:50:15,642][46990] Updated weights for policy 0, policy_version 16790 (0.0045) +[2024-06-10 19:50:16,248][46970] Signal inference workers to stop experience collection... (3900 times) +[2024-06-10 19:50:16,248][46970] Signal inference workers to resume experience collection... (3900 times) +[2024-06-10 19:50:16,273][46990] InferenceWorker_p0-w0: stopping experience collection (3900 times) +[2024-06-10 19:50:16,273][46990] InferenceWorker_p0-w0: resuming experience collection (3900 times) +[2024-06-10 19:50:18,240][46753] Fps is (10 sec: 45874.6, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 275218432. Throughput: 0: 44023.5. Samples: 275268220. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:50:18,249][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:50:19,419][46990] Updated weights for policy 0, policy_version 16800 (0.0039) +[2024-06-10 19:50:23,168][46990] Updated weights for policy 0, policy_version 16810 (0.0032) +[2024-06-10 19:50:23,240][46753] Fps is (10 sec: 44244.5, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 275415040. Throughput: 0: 43913.3. Samples: 275539100. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:50:23,251][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:50:23,318][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000016811_275431424.pth... +[2024-06-10 19:50:23,375][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000016168_264896512.pth +[2024-06-10 19:50:26,666][46990] Updated weights for policy 0, policy_version 16820 (0.0033) +[2024-06-10 19:50:28,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 275628032. Throughput: 0: 43961.0. Samples: 275802680. 
Policy #0 lag: (min: 0.0, avg: 9.6, max: 18.0) +[2024-06-10 19:50:28,241][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:50:30,383][46990] Updated weights for policy 0, policy_version 16830 (0.0037) +[2024-06-10 19:50:33,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43964.1, 300 sec: 43764.7). Total num frames: 275857408. Throughput: 0: 43955.5. Samples: 275931200. Policy #0 lag: (min: 0.0, avg: 9.6, max: 18.0) +[2024-06-10 19:50:33,248][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 19:50:34,535][46990] Updated weights for policy 0, policy_version 16840 (0.0031) +[2024-06-10 19:50:37,879][46990] Updated weights for policy 0, policy_version 16850 (0.0027) +[2024-06-10 19:50:38,240][46753] Fps is (10 sec: 44236.2, 60 sec: 43417.5, 300 sec: 43820.9). Total num frames: 276070400. Throughput: 0: 43843.8. Samples: 276190860. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:50:38,240][46753] Avg episode reward: [(0, '0.214')] +[2024-06-10 19:50:42,090][46990] Updated weights for policy 0, policy_version 16860 (0.0041) +[2024-06-10 19:50:43,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.8, 300 sec: 43820.2). Total num frames: 276283392. Throughput: 0: 43867.6. Samples: 276457760. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:50:43,240][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:50:45,494][46990] Updated weights for policy 0, policy_version 16870 (0.0037) +[2024-06-10 19:50:48,240][46753] Fps is (10 sec: 45875.4, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 276529152. Throughput: 0: 43830.1. Samples: 276583940. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:50:48,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:50:49,215][46990] Updated weights for policy 0, policy_version 16880 (0.0031) +[2024-06-10 19:50:53,002][46990] Updated weights for policy 0, policy_version 16890 (0.0029) +[2024-06-10 19:50:53,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 276742144. Throughput: 0: 43903.4. Samples: 276849820. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:50:53,242][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 19:50:56,651][46990] Updated weights for policy 0, policy_version 16900 (0.0022) +[2024-06-10 19:50:58,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43964.1, 300 sec: 43764.7). Total num frames: 276938752. Throughput: 0: 43975.1. Samples: 277116960. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:50:58,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:51:00,567][46990] Updated weights for policy 0, policy_version 16910 (0.0034) +[2024-06-10 19:51:03,244][46753] Fps is (10 sec: 42579.8, 60 sec: 43960.5, 300 sec: 43819.6). Total num frames: 277168128. Throughput: 0: 43863.7. Samples: 277242280. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 19:51:03,244][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:51:04,708][46990] Updated weights for policy 0, policy_version 16920 (0.0042) +[2024-06-10 19:51:07,876][46990] Updated weights for policy 0, policy_version 16930 (0.0041) +[2024-06-10 19:51:08,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 277397504. Throughput: 0: 43668.1. Samples: 277504160. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 19:51:08,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:51:11,864][46990] Updated weights for policy 0, policy_version 16940 (0.0039) +[2024-06-10 19:51:13,240][46753] Fps is (10 sec: 42616.6, 60 sec: 43691.9, 300 sec: 43764.7). Total num frames: 277594112. Throughput: 0: 43692.3. Samples: 277768840. Policy #0 lag: (min: 0.0, avg: 9.7, max: 22.0) +[2024-06-10 19:51:13,252][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:51:15,374][46990] Updated weights for policy 0, policy_version 16950 (0.0024) +[2024-06-10 19:51:18,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 277839872. Throughput: 0: 43653.8. Samples: 277895620. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 19:51:18,252][46753] Avg episode reward: [(0, '0.194')] +[2024-06-10 19:51:19,156][46990] Updated weights for policy 0, policy_version 16960 (0.0049) +[2024-06-10 19:51:23,088][46990] Updated weights for policy 0, policy_version 16970 (0.0041) +[2024-06-10 19:51:23,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 278052864. Throughput: 0: 43756.1. Samples: 278159880. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 19:51:23,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:51:26,764][46990] Updated weights for policy 0, policy_version 16980 (0.0028) +[2024-06-10 19:51:28,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 278249472. Throughput: 0: 43810.2. Samples: 278429220. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:51:28,242][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:51:30,206][46990] Updated weights for policy 0, policy_version 16990 (0.0040) +[2024-06-10 19:51:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 278495232. Throughput: 0: 43867.2. Samples: 278557960. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 19:51:33,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:51:34,274][46990] Updated weights for policy 0, policy_version 17000 (0.0039) +[2024-06-10 19:51:37,634][46990] Updated weights for policy 0, policy_version 17010 (0.0031) +[2024-06-10 19:51:38,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.9, 300 sec: 43875.8). Total num frames: 278708224. Throughput: 0: 43884.5. Samples: 278824620. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 19:51:38,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:51:41,643][46990] Updated weights for policy 0, policy_version 17020 (0.0024) +[2024-06-10 19:51:43,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 278904832. Throughput: 0: 43722.1. Samples: 279084460. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 19:51:43,240][46753] Avg episode reward: [(0, '0.205')] +[2024-06-10 19:51:45,093][46990] Updated weights for policy 0, policy_version 17030 (0.0030) +[2024-06-10 19:51:47,204][46970] Signal inference workers to stop experience collection... (3950 times) +[2024-06-10 19:51:47,229][46990] InferenceWorker_p0-w0: stopping experience collection (3950 times) +[2024-06-10 19:51:47,264][46970] Signal inference workers to resume experience collection... 
(3950 times) +[2024-06-10 19:51:47,265][46990] InferenceWorker_p0-w0: resuming experience collection (3950 times) +[2024-06-10 19:51:48,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 279150592. Throughput: 0: 43877.7. Samples: 279216580. Policy #0 lag: (min: 0.0, avg: 10.2, max: 20.0) +[2024-06-10 19:51:48,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:51:48,888][46990] Updated weights for policy 0, policy_version 17040 (0.0038) +[2024-06-10 19:51:52,655][46990] Updated weights for policy 0, policy_version 17050 (0.0033) +[2024-06-10 19:51:53,239][46753] Fps is (10 sec: 44237.8, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 279347200. Throughput: 0: 43849.8. Samples: 279477400. Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 19:51:53,240][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:51:56,680][46990] Updated weights for policy 0, policy_version 17060 (0.0042) +[2024-06-10 19:51:58,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 279560192. Throughput: 0: 43958.0. Samples: 279746940. Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 19:51:58,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:52:00,270][46990] Updated weights for policy 0, policy_version 17070 (0.0030) +[2024-06-10 19:52:03,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43966.9, 300 sec: 43875.8). Total num frames: 279805952. Throughput: 0: 43988.8. Samples: 279875120. Policy #0 lag: (min: 0.0, avg: 12.3, max: 22.0) +[2024-06-10 19:52:03,243][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 19:52:04,361][46990] Updated weights for policy 0, policy_version 17080 (0.0046) +[2024-06-10 19:52:07,934][46990] Updated weights for policy 0, policy_version 17090 (0.0038) +[2024-06-10 19:52:08,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 280002560. Throughput: 0: 43724.5. Samples: 280127480. Policy #0 lag: (min: 0.0, avg: 12.3, max: 22.0) +[2024-06-10 19:52:08,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:52:11,670][46990] Updated weights for policy 0, policy_version 17100 (0.0031) +[2024-06-10 19:52:13,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 280215552. Throughput: 0: 43630.3. Samples: 280392580. Policy #0 lag: (min: 0.0, avg: 12.3, max: 22.0) +[2024-06-10 19:52:13,244][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:52:15,422][46990] Updated weights for policy 0, policy_version 17110 (0.0038) +[2024-06-10 19:52:18,243][46753] Fps is (10 sec: 45860.7, 60 sec: 43688.4, 300 sec: 43875.3). Total num frames: 280461312. Throughput: 0: 43717.4. Samples: 280525380. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 19:52:18,243][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:52:18,815][46990] Updated weights for policy 0, policy_version 17120 (0.0036) +[2024-06-10 19:52:22,787][46990] Updated weights for policy 0, policy_version 17130 (0.0029) +[2024-06-10 19:52:23,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 280674304. Throughput: 0: 43643.1. Samples: 280788560. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 19:52:23,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 19:52:23,247][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000017131_280674304.pth... 
+[2024-06-10 19:52:23,301][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000016489_270155776.pth +[2024-06-10 19:52:26,587][46990] Updated weights for policy 0, policy_version 17140 (0.0030) +[2024-06-10 19:52:28,239][46753] Fps is (10 sec: 42611.5, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 280887296. Throughput: 0: 43782.8. Samples: 281054680. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 19:52:28,240][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:52:30,235][46990] Updated weights for policy 0, policy_version 17150 (0.0040) +[2024-06-10 19:52:33,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 281100288. Throughput: 0: 43737.4. Samples: 281184760. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 19:52:33,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:52:34,140][46990] Updated weights for policy 0, policy_version 17160 (0.0037) +[2024-06-10 19:52:37,791][46990] Updated weights for policy 0, policy_version 17170 (0.0038) +[2024-06-10 19:52:38,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 281346048. Throughput: 0: 43831.1. Samples: 281449800. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 19:52:38,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:52:41,346][46990] Updated weights for policy 0, policy_version 17180 (0.0030) +[2024-06-10 19:52:43,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43690.7, 300 sec: 43765.4). Total num frames: 281526272. Throughput: 0: 43604.3. Samples: 281709140. Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 19:52:43,240][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 19:52:45,142][46990] Updated weights for policy 0, policy_version 17190 (0.0030) +[2024-06-10 19:52:48,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43690.5, 300 sec: 43875.8). Total num frames: 281772032. Throughput: 0: 43743.5. Samples: 281843580. Policy #0 lag: (min: 0.0, avg: 11.6, max: 23.0) +[2024-06-10 19:52:48,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:52:48,830][46990] Updated weights for policy 0, policy_version 17200 (0.0042) +[2024-06-10 19:52:52,798][46990] Updated weights for policy 0, policy_version 17210 (0.0034) +[2024-06-10 19:52:53,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 281968640. Throughput: 0: 43868.9. Samples: 282101580. Policy #0 lag: (min: 0.0, avg: 11.3, max: 24.0) +[2024-06-10 19:52:53,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:52:56,142][46990] Updated weights for policy 0, policy_version 17220 (0.0040) +[2024-06-10 19:52:58,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 282198016. Throughput: 0: 43903.6. Samples: 282368240. Policy #0 lag: (min: 0.0, avg: 11.3, max: 24.0) +[2024-06-10 19:52:58,240][46753] Avg episode reward: [(0, '0.215')] +[2024-06-10 19:53:00,473][46990] Updated weights for policy 0, policy_version 17230 (0.0026) +[2024-06-10 19:53:03,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 282427392. Throughput: 0: 43819.5. Samples: 282497120. 
Policy #0 lag: (min: 0.0, avg: 11.3, max: 24.0) +[2024-06-10 19:53:03,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:53:03,943][46990] Updated weights for policy 0, policy_version 17240 (0.0038) +[2024-06-10 19:53:07,777][46990] Updated weights for policy 0, policy_version 17250 (0.0047) +[2024-06-10 19:53:08,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43963.8, 300 sec: 43764.8). Total num frames: 282640384. Throughput: 0: 43758.3. Samples: 282757680. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:53:08,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:53:11,162][46990] Updated weights for policy 0, policy_version 17260 (0.0040) +[2024-06-10 19:53:13,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 282836992. Throughput: 0: 43679.3. Samples: 283020240. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 19:53:13,240][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:53:15,290][46990] Updated weights for policy 0, policy_version 17270 (0.0030) +[2024-06-10 19:53:17,705][46970] Signal inference workers to stop experience collection... (4000 times) +[2024-06-10 19:53:17,762][46990] InferenceWorker_p0-w0: stopping experience collection (4000 times) +[2024-06-10 19:53:17,768][46970] Signal inference workers to resume experience collection... (4000 times) +[2024-06-10 19:53:17,777][46990] InferenceWorker_p0-w0: resuming experience collection (4000 times) +[2024-06-10 19:53:18,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43693.0, 300 sec: 43820.3). Total num frames: 283082752. Throughput: 0: 43702.2. Samples: 283151360. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:53:18,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 19:53:18,606][46990] Updated weights for policy 0, policy_version 17280 (0.0029) +[2024-06-10 19:53:22,678][46990] Updated weights for policy 0, policy_version 17290 (0.0032) +[2024-06-10 19:53:23,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 283295744. Throughput: 0: 43690.7. Samples: 283415880. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:53:23,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:53:26,146][46990] Updated weights for policy 0, policy_version 17300 (0.0026) +[2024-06-10 19:53:28,240][46753] Fps is (10 sec: 42597.5, 60 sec: 43690.6, 300 sec: 43765.4). Total num frames: 283508736. Throughput: 0: 43844.8. Samples: 283682160. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 19:53:28,240][46753] Avg episode reward: [(0, '0.218')] +[2024-06-10 19:53:30,105][46990] Updated weights for policy 0, policy_version 17310 (0.0032) +[2024-06-10 19:53:33,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 283738112. Throughput: 0: 43762.9. Samples: 283812900. Policy #0 lag: (min: 0.0, avg: 9.9, max: 24.0) +[2024-06-10 19:53:33,240][46753] Avg episode reward: [(0, '0.218')] +[2024-06-10 19:53:33,888][46990] Updated weights for policy 0, policy_version 17320 (0.0052) +[2024-06-10 19:53:37,524][46990] Updated weights for policy 0, policy_version 17330 (0.0023) +[2024-06-10 19:53:38,239][46753] Fps is (10 sec: 44237.8, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 283951104. Throughput: 0: 43886.3. Samples: 284076460. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 24.0) +[2024-06-10 19:53:38,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:53:41,136][46990] Updated weights for policy 0, policy_version 17340 (0.0033) +[2024-06-10 19:53:43,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43709.8). Total num frames: 284147712. Throughput: 0: 43800.0. Samples: 284339240. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:53:43,240][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:53:44,919][46990] Updated weights for policy 0, policy_version 17350 (0.0026) +[2024-06-10 19:53:48,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43963.9, 300 sec: 43875.8). Total num frames: 284409856. Throughput: 0: 43938.7. Samples: 284474360. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:53:48,240][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:53:48,447][46990] Updated weights for policy 0, policy_version 17360 (0.0030) +[2024-06-10 19:53:52,563][46990] Updated weights for policy 0, policy_version 17370 (0.0037) +[2024-06-10 19:53:53,239][46753] Fps is (10 sec: 47513.9, 60 sec: 44236.9, 300 sec: 43931.3). Total num frames: 284622848. Throughput: 0: 43958.2. Samples: 284735800. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 19:53:53,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:53:56,037][46990] Updated weights for policy 0, policy_version 17380 (0.0037) +[2024-06-10 19:53:58,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.6, 300 sec: 43764.8). Total num frames: 284819456. Throughput: 0: 44066.1. Samples: 285003220. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 19:53:58,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:53:59,727][46990] Updated weights for policy 0, policy_version 17390 (0.0030) +[2024-06-10 19:54:03,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 285065216. Throughput: 0: 43944.8. Samples: 285128880. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 19:54:03,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:54:03,294][46990] Updated weights for policy 0, policy_version 17400 (0.0033) +[2024-06-10 19:54:07,440][46990] Updated weights for policy 0, policy_version 17410 (0.0031) +[2024-06-10 19:54:08,239][46753] Fps is (10 sec: 47513.8, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 285294592. Throughput: 0: 44018.6. Samples: 285396720. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:54:08,240][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 19:54:10,802][46990] Updated weights for policy 0, policy_version 17420 (0.0037) +[2024-06-10 19:54:13,239][46753] Fps is (10 sec: 42598.4, 60 sec: 44236.7, 300 sec: 43820.2). Total num frames: 285491200. Throughput: 0: 44019.2. Samples: 285663020. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:54:13,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:54:14,468][46990] Updated weights for policy 0, policy_version 17430 (0.0039) +[2024-06-10 19:54:18,138][46990] Updated weights for policy 0, policy_version 17440 (0.0030) +[2024-06-10 19:54:18,239][46753] Fps is (10 sec: 44236.8, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 285736960. Throughput: 0: 44000.4. Samples: 285792920. 
Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:54:18,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:54:22,084][46990] Updated weights for policy 0, policy_version 17450 (0.0030) +[2024-06-10 19:54:23,239][46753] Fps is (10 sec: 45875.4, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 285949952. Throughput: 0: 44081.3. Samples: 286060120. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 19:54:23,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:54:23,260][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000017453_285949952.pth... +[2024-06-10 19:54:23,320][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000016811_275431424.pth +[2024-06-10 19:54:25,703][46990] Updated weights for policy 0, policy_version 17460 (0.0035) +[2024-06-10 19:54:28,244][46753] Fps is (10 sec: 40941.6, 60 sec: 43960.6, 300 sec: 43819.7). Total num frames: 286146560. Throughput: 0: 44011.2. Samples: 286319940. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 19:54:28,244][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:54:29,670][46990] Updated weights for policy 0, policy_version 17470 (0.0028) +[2024-06-10 19:54:33,050][46990] Updated weights for policy 0, policy_version 17480 (0.0034) +[2024-06-10 19:54:33,239][46753] Fps is (10 sec: 44236.9, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 286392320. Throughput: 0: 43943.5. Samples: 286451820. Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 19:54:33,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:54:36,869][46990] Updated weights for policy 0, policy_version 17490 (0.0041) +[2024-06-10 19:54:38,239][46753] Fps is (10 sec: 45895.6, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 286605312. Throughput: 0: 43740.8. Samples: 286704140. Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 19:54:38,242][46753] Avg episode reward: [(0, '0.222')] +[2024-06-10 19:54:40,686][46990] Updated weights for policy 0, policy_version 17500 (0.0039) +[2024-06-10 19:54:42,497][46970] Signal inference workers to stop experience collection... (4050 times) +[2024-06-10 19:54:42,497][46970] Signal inference workers to resume experience collection... (4050 times) +[2024-06-10 19:54:42,553][46990] InferenceWorker_p0-w0: stopping experience collection (4050 times) +[2024-06-10 19:54:42,553][46990] InferenceWorker_p0-w0: resuming experience collection (4050 times) +[2024-06-10 19:54:43,239][46753] Fps is (10 sec: 40959.7, 60 sec: 44236.8, 300 sec: 43820.2). Total num frames: 286801920. Throughput: 0: 43757.7. Samples: 286972320. Policy #0 lag: (min: 0.0, avg: 9.6, max: 20.0) +[2024-06-10 19:54:43,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:54:44,315][46990] Updated weights for policy 0, policy_version 17510 (0.0024) +[2024-06-10 19:54:47,944][46990] Updated weights for policy 0, policy_version 17520 (0.0042) +[2024-06-10 19:54:48,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 287047680. Throughput: 0: 43807.2. Samples: 287100200. Policy #0 lag: (min: 0.0, avg: 9.2, max: 19.0) +[2024-06-10 19:54:48,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 19:54:51,981][46990] Updated weights for policy 0, policy_version 17530 (0.0028) +[2024-06-10 19:54:53,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.5, 300 sec: 43875.8). Total num frames: 287244288. Throughput: 0: 43737.2. Samples: 287364900. 
Policy #0 lag: (min: 0.0, avg: 9.2, max: 19.0) +[2024-06-10 19:54:53,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:54:55,542][46990] Updated weights for policy 0, policy_version 17540 (0.0033) +[2024-06-10 19:54:58,244][46753] Fps is (10 sec: 40941.4, 60 sec: 43960.5, 300 sec: 43819.6). Total num frames: 287457280. Throughput: 0: 43696.6. Samples: 287629560. Policy #0 lag: (min: 2.0, avg: 12.4, max: 24.0) +[2024-06-10 19:54:58,245][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:54:59,301][46990] Updated weights for policy 0, policy_version 17550 (0.0032) +[2024-06-10 19:55:03,119][46990] Updated weights for policy 0, policy_version 17560 (0.0053) +[2024-06-10 19:55:03,239][46753] Fps is (10 sec: 45876.1, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 287703040. Throughput: 0: 43800.9. Samples: 287763960. Policy #0 lag: (min: 2.0, avg: 12.4, max: 24.0) +[2024-06-10 19:55:03,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 19:55:06,948][46990] Updated weights for policy 0, policy_version 17570 (0.0025) +[2024-06-10 19:55:08,239][46753] Fps is (10 sec: 44256.2, 60 sec: 43417.5, 300 sec: 43820.5). Total num frames: 287899648. Throughput: 0: 43631.0. Samples: 288023520. Policy #0 lag: (min: 2.0, avg: 12.4, max: 24.0) +[2024-06-10 19:55:08,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:55:10,637][46990] Updated weights for policy 0, policy_version 17580 (0.0043) +[2024-06-10 19:55:13,240][46753] Fps is (10 sec: 40959.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 288112640. Throughput: 0: 43748.2. Samples: 288288420. Policy #0 lag: (min: 1.0, avg: 11.2, max: 21.0) +[2024-06-10 19:55:13,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 19:55:14,151][46990] Updated weights for policy 0, policy_version 17590 (0.0029) +[2024-06-10 19:55:17,826][46990] Updated weights for policy 0, policy_version 17600 (0.0038) +[2024-06-10 19:55:18,240][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 288358400. Throughput: 0: 43645.7. Samples: 288415880. Policy #0 lag: (min: 1.0, avg: 11.2, max: 21.0) +[2024-06-10 19:55:18,240][46753] Avg episode reward: [(0, '0.212')] +[2024-06-10 19:55:21,900][46990] Updated weights for policy 0, policy_version 17610 (0.0039) +[2024-06-10 19:55:23,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 288555008. Throughput: 0: 44002.3. Samples: 288684240. Policy #0 lag: (min: 0.0, avg: 10.0, max: 24.0) +[2024-06-10 19:55:23,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 19:55:25,319][46990] Updated weights for policy 0, policy_version 17620 (0.0044) +[2024-06-10 19:55:28,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43693.9, 300 sec: 43764.7). Total num frames: 288768000. Throughput: 0: 43820.0. Samples: 288944220. Policy #0 lag: (min: 0.0, avg: 10.0, max: 24.0) +[2024-06-10 19:55:28,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:55:29,346][46990] Updated weights for policy 0, policy_version 17630 (0.0042) +[2024-06-10 19:55:32,802][46990] Updated weights for policy 0, policy_version 17640 (0.0033) +[2024-06-10 19:55:33,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 289013760. Throughput: 0: 43946.7. Samples: 289077800. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 24.0) +[2024-06-10 19:55:33,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:55:36,919][46990] Updated weights for policy 0, policy_version 17650 (0.0026) +[2024-06-10 19:55:38,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 289210368. Throughput: 0: 43794.9. Samples: 289335660. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:55:38,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:55:40,630][46990] Updated weights for policy 0, policy_version 17660 (0.0033) +[2024-06-10 19:55:43,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 289439744. Throughput: 0: 43878.1. Samples: 289603880. Policy #0 lag: (min: 0.0, avg: 9.3, max: 20.0) +[2024-06-10 19:55:43,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 19:55:44,759][46990] Updated weights for policy 0, policy_version 17670 (0.0033) +[2024-06-10 19:55:47,869][46990] Updated weights for policy 0, policy_version 17680 (0.0029) +[2024-06-10 19:55:48,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 289669120. Throughput: 0: 43684.1. Samples: 289729740. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 19:55:48,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 19:55:51,930][46990] Updated weights for policy 0, policy_version 17690 (0.0028) +[2024-06-10 19:55:53,240][46753] Fps is (10 sec: 45873.3, 60 sec: 44236.6, 300 sec: 43931.3). Total num frames: 289898496. Throughput: 0: 43981.9. Samples: 290002720. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 19:55:53,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 19:55:53,253][46970] Saving new best policy, reward=0.240! +[2024-06-10 19:55:55,538][46990] Updated weights for policy 0, policy_version 17700 (0.0027) +[2024-06-10 19:55:58,239][46753] Fps is (10 sec: 42597.7, 60 sec: 43967.0, 300 sec: 43820.9). Total num frames: 290095104. Throughput: 0: 43997.4. Samples: 290268300. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 19:55:58,240][46753] Avg episode reward: [(0, '0.221')] +[2024-06-10 19:55:59,491][46990] Updated weights for policy 0, policy_version 17710 (0.0041) +[2024-06-10 19:56:00,529][46970] Signal inference workers to stop experience collection... (4100 times) +[2024-06-10 19:56:00,529][46970] Signal inference workers to resume experience collection... (4100 times) +[2024-06-10 19:56:00,546][46990] InferenceWorker_p0-w0: stopping experience collection (4100 times) +[2024-06-10 19:56:00,578][46990] InferenceWorker_p0-w0: resuming experience collection (4100 times) +[2024-06-10 19:56:03,068][46990] Updated weights for policy 0, policy_version 17720 (0.0038) +[2024-06-10 19:56:03,240][46753] Fps is (10 sec: 42600.0, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 290324480. Throughput: 0: 44046.2. Samples: 290397960. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:56:03,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:56:06,987][46990] Updated weights for policy 0, policy_version 17730 (0.0034) +[2024-06-10 19:56:08,239][46753] Fps is (10 sec: 45875.3, 60 sec: 44236.8, 300 sec: 43931.4). Total num frames: 290553856. Throughput: 0: 43965.7. Samples: 290662700. 
Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 19:56:08,242][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:56:10,427][46990] Updated weights for policy 0, policy_version 17740 (0.0038) +[2024-06-10 19:56:13,240][46753] Fps is (10 sec: 44236.8, 60 sec: 44236.8, 300 sec: 43820.2). Total num frames: 290766848. Throughput: 0: 44161.7. Samples: 290931500. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:56:13,240][46753] Avg episode reward: [(0, '0.216')] +[2024-06-10 19:56:14,650][46990] Updated weights for policy 0, policy_version 17750 (0.0043) +[2024-06-10 19:56:17,643][46990] Updated weights for policy 0, policy_version 17760 (0.0041) +[2024-06-10 19:56:18,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 290996224. Throughput: 0: 43982.1. Samples: 291057000. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:56:18,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 19:56:21,924][46990] Updated weights for policy 0, policy_version 17770 (0.0039) +[2024-06-10 19:56:23,242][46753] Fps is (10 sec: 44226.9, 60 sec: 44235.0, 300 sec: 43931.0). Total num frames: 291209216. Throughput: 0: 44359.4. Samples: 291331940. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 19:56:23,242][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 19:56:23,356][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000017775_291225600.pth... +[2024-06-10 19:56:23,411][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000017131_280674304.pth +[2024-06-10 19:56:25,177][46990] Updated weights for policy 0, policy_version 17780 (0.0042) +[2024-06-10 19:56:28,239][46753] Fps is (10 sec: 42598.6, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 291422208. Throughput: 0: 44164.9. Samples: 291591300. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:56:28,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 19:56:29,467][46990] Updated weights for policy 0, policy_version 17790 (0.0034) +[2024-06-10 19:56:32,575][46990] Updated weights for policy 0, policy_version 17800 (0.0031) +[2024-06-10 19:56:33,239][46753] Fps is (10 sec: 42608.7, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 291635200. Throughput: 0: 44083.5. Samples: 291713500. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:56:33,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:56:36,855][46990] Updated weights for policy 0, policy_version 17810 (0.0031) +[2024-06-10 19:56:38,239][46753] Fps is (10 sec: 45875.5, 60 sec: 44509.9, 300 sec: 43986.9). Total num frames: 291880960. Throughput: 0: 43891.6. Samples: 291977820. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 19:56:38,240][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:56:40,242][46990] Updated weights for policy 0, policy_version 17820 (0.0036) +[2024-06-10 19:56:43,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 292077568. Throughput: 0: 43941.0. Samples: 292245640. Policy #0 lag: (min: 0.0, avg: 12.5, max: 24.0) +[2024-06-10 19:56:43,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:56:44,595][46990] Updated weights for policy 0, policy_version 17830 (0.0035) +[2024-06-10 19:56:47,651][46990] Updated weights for policy 0, policy_version 17840 (0.0033) +[2024-06-10 19:56:48,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 292306944. 
Throughput: 0: 43821.9. Samples: 292369940. Policy #0 lag: (min: 0.0, avg: 12.5, max: 24.0) +[2024-06-10 19:56:48,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:56:51,887][46990] Updated weights for policy 0, policy_version 17850 (0.0038) +[2024-06-10 19:56:53,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43964.0, 300 sec: 43986.9). Total num frames: 292536320. Throughput: 0: 43952.9. Samples: 292640580. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:56:53,240][46753] Avg episode reward: [(0, '0.211')] +[2024-06-10 19:56:55,144][46990] Updated weights for policy 0, policy_version 17860 (0.0040) +[2024-06-10 19:56:58,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 292732928. Throughput: 0: 43795.6. Samples: 292902300. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:56:58,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 19:56:59,234][46990] Updated weights for policy 0, policy_version 17870 (0.0039) +[2024-06-10 19:57:02,554][46990] Updated weights for policy 0, policy_version 17880 (0.0032) +[2024-06-10 19:57:03,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 292962304. Throughput: 0: 43853.9. Samples: 293030420. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 19:57:03,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 19:57:06,903][46990] Updated weights for policy 0, policy_version 17890 (0.0029) +[2024-06-10 19:57:08,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43963.8, 300 sec: 43986.9). Total num frames: 293191680. Throughput: 0: 43626.3. Samples: 293295020. Policy #0 lag: (min: 1.0, avg: 10.8, max: 21.0) +[2024-06-10 19:57:08,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:57:10,295][46990] Updated weights for policy 0, policy_version 17900 (0.0047) +[2024-06-10 19:57:13,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43820.7). Total num frames: 293388288. Throughput: 0: 43722.3. Samples: 293558800. Policy #0 lag: (min: 1.0, avg: 10.8, max: 21.0) +[2024-06-10 19:57:13,248][46753] Avg episode reward: [(0, '0.213')] +[2024-06-10 19:57:14,702][46990] Updated weights for policy 0, policy_version 17910 (0.0033) +[2024-06-10 19:57:17,684][46990] Updated weights for policy 0, policy_version 17920 (0.0045) +[2024-06-10 19:57:18,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 293634048. Throughput: 0: 43866.0. Samples: 293687480. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:57:18,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:57:21,940][46990] Updated weights for policy 0, policy_version 17930 (0.0040) +[2024-06-10 19:57:23,244][46753] Fps is (10 sec: 45854.4, 60 sec: 43962.1, 300 sec: 43930.7). Total num frames: 293847040. Throughput: 0: 43996.0. Samples: 293957840. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:57:23,244][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:57:25,226][46990] Updated weights for policy 0, policy_version 17940 (0.0042) +[2024-06-10 19:57:28,241][46753] Fps is (10 sec: 40952.7, 60 sec: 43689.3, 300 sec: 43875.5). Total num frames: 294043648. Throughput: 0: 43734.6. Samples: 294213780. 
Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:57:28,242][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:57:29,541][46990] Updated weights for policy 0, policy_version 17950 (0.0046) +[2024-06-10 19:57:32,717][46990] Updated weights for policy 0, policy_version 17960 (0.0040) +[2024-06-10 19:57:33,240][46753] Fps is (10 sec: 42616.9, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 294273024. Throughput: 0: 43816.7. Samples: 294341700. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:57:33,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 19:57:36,833][46970] Signal inference workers to stop experience collection... (4150 times) +[2024-06-10 19:57:36,834][46970] Signal inference workers to resume experience collection... (4150 times) +[2024-06-10 19:57:36,844][46990] InferenceWorker_p0-w0: stopping experience collection (4150 times) +[2024-06-10 19:57:36,844][46990] InferenceWorker_p0-w0: resuming experience collection (4150 times) +[2024-06-10 19:57:36,986][46990] Updated weights for policy 0, policy_version 17970 (0.0043) +[2024-06-10 19:57:38,239][46753] Fps is (10 sec: 45884.1, 60 sec: 43690.7, 300 sec: 43986.9). Total num frames: 294502400. Throughput: 0: 43665.5. Samples: 294605520. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 19:57:38,240][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:57:40,327][46990] Updated weights for policy 0, policy_version 17980 (0.0046) +[2024-06-10 19:57:43,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 294699008. Throughput: 0: 43731.6. Samples: 294870220. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 19:57:43,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:57:44,582][46990] Updated weights for policy 0, policy_version 17990 (0.0043) +[2024-06-10 19:57:47,430][46990] Updated weights for policy 0, policy_version 18000 (0.0034) +[2024-06-10 19:57:48,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 294928384. Throughput: 0: 43716.4. Samples: 294997660. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 19:57:48,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:57:51,835][46990] Updated weights for policy 0, policy_version 18010 (0.0032) +[2024-06-10 19:57:53,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.8, 300 sec: 43931.3). Total num frames: 295157760. Throughput: 0: 43858.6. Samples: 295268660. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 19:57:53,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:57:55,147][46990] Updated weights for policy 0, policy_version 18020 (0.0034) +[2024-06-10 19:57:58,240][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 295354368. Throughput: 0: 43772.7. Samples: 295528580. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 19:57:58,240][46753] Avg episode reward: [(0, '0.220')] +[2024-06-10 19:57:59,293][46990] Updated weights for policy 0, policy_version 18030 (0.0031) +[2024-06-10 19:58:02,945][46990] Updated weights for policy 0, policy_version 18040 (0.0032) +[2024-06-10 19:58:03,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 295583744. Throughput: 0: 43847.2. Samples: 295660600. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 19:58:03,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:58:06,815][46990] Updated weights for policy 0, policy_version 18050 (0.0024) +[2024-06-10 19:58:08,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43690.6, 300 sec: 43986.9). Total num frames: 295813120. Throughput: 0: 43712.0. Samples: 295924680. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:58:08,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:58:09,999][46990] Updated weights for policy 0, policy_version 18060 (0.0030) +[2024-06-10 19:58:13,240][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 296009728. Throughput: 0: 43796.4. Samples: 296184540. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:58:13,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:58:14,522][46990] Updated weights for policy 0, policy_version 18070 (0.0030) +[2024-06-10 19:58:17,274][46990] Updated weights for policy 0, policy_version 18080 (0.0035) +[2024-06-10 19:58:18,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.7, 300 sec: 43875.8). Total num frames: 296239104. Throughput: 0: 43823.7. Samples: 296313760. Policy #0 lag: (min: 0.0, avg: 11.3, max: 23.0) +[2024-06-10 19:58:18,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:58:21,695][46990] Updated weights for policy 0, policy_version 18090 (0.0028) +[2024-06-10 19:58:23,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43420.7, 300 sec: 43875.8). Total num frames: 296452096. Throughput: 0: 43894.0. Samples: 296580760. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:58:23,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 19:58:23,328][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000018095_296468480.pth... +[2024-06-10 19:58:23,377][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000017453_285949952.pth +[2024-06-10 19:58:24,914][46990] Updated weights for policy 0, policy_version 18100 (0.0033) +[2024-06-10 19:58:28,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43691.9, 300 sec: 43820.2). Total num frames: 296665088. Throughput: 0: 43763.0. Samples: 296839560. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:58:28,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 19:58:28,244][46970] Saving new best policy, reward=0.243! +[2024-06-10 19:58:28,983][46990] Updated weights for policy 0, policy_version 18110 (0.0030) +[2024-06-10 19:58:32,472][46990] Updated weights for policy 0, policy_version 18120 (0.0021) +[2024-06-10 19:58:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 296894464. Throughput: 0: 43882.2. Samples: 296972360. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:58:33,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 19:58:36,714][46990] Updated weights for policy 0, policy_version 18130 (0.0035) +[2024-06-10 19:58:38,239][46753] Fps is (10 sec: 44237.9, 60 sec: 43417.6, 300 sec: 43931.3). Total num frames: 297107456. Throughput: 0: 43661.4. Samples: 297233420. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:58:38,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 19:58:39,857][46990] Updated weights for policy 0, policy_version 18140 (0.0032) +[2024-06-10 19:58:43,243][46753] Fps is (10 sec: 44222.3, 60 sec: 43961.3, 300 sec: 43819.7). Total num frames: 297336832. Throughput: 0: 43700.0. 
Samples: 297495220. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 19:58:43,243][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 19:58:44,433][46990] Updated weights for policy 0, policy_version 18150 (0.0033) +[2024-06-10 19:58:47,306][46990] Updated weights for policy 0, policy_version 18160 (0.0048) +[2024-06-10 19:58:48,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 297549824. Throughput: 0: 43648.5. Samples: 297624780. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 19:58:48,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:58:48,972][46970] Signal inference workers to stop experience collection... (4200 times) +[2024-06-10 19:58:48,973][46970] Signal inference workers to resume experience collection... (4200 times) +[2024-06-10 19:58:49,005][46990] InferenceWorker_p0-w0: stopping experience collection (4200 times) +[2024-06-10 19:58:49,005][46990] InferenceWorker_p0-w0: resuming experience collection (4200 times) +[2024-06-10 19:58:51,589][46990] Updated weights for policy 0, policy_version 18170 (0.0037) +[2024-06-10 19:58:53,239][46753] Fps is (10 sec: 44251.5, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 297779200. Throughput: 0: 43764.8. Samples: 297894100. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 19:58:53,241][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 19:58:54,669][46990] Updated weights for policy 0, policy_version 18180 (0.0048) +[2024-06-10 19:58:58,239][46753] Fps is (10 sec: 44236.1, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 297992192. Throughput: 0: 43764.9. Samples: 298153960. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 19:58:58,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 19:58:58,822][46990] Updated weights for policy 0, policy_version 18190 (0.0049) +[2024-06-10 19:59:02,155][46990] Updated weights for policy 0, policy_version 18200 (0.0023) +[2024-06-10 19:59:03,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 298205184. Throughput: 0: 43799.9. Samples: 298284760. Policy #0 lag: (min: 1.0, avg: 9.7, max: 21.0) +[2024-06-10 19:59:03,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 19:59:06,386][46990] Updated weights for policy 0, policy_version 18210 (0.0031) +[2024-06-10 19:59:08,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 298418176. Throughput: 0: 43675.4. Samples: 298546140. Policy #0 lag: (min: 1.0, avg: 9.7, max: 21.0) +[2024-06-10 19:59:08,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 19:59:09,741][46990] Updated weights for policy 0, policy_version 18220 (0.0036) +[2024-06-10 19:59:13,240][46753] Fps is (10 sec: 44237.0, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 298647552. Throughput: 0: 43680.1. Samples: 298805160. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:59:13,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 19:59:14,195][46990] Updated weights for policy 0, policy_version 18230 (0.0031) +[2024-06-10 19:59:17,171][46990] Updated weights for policy 0, policy_version 18240 (0.0030) +[2024-06-10 19:59:18,239][46753] Fps is (10 sec: 45874.4, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 298876928. Throughput: 0: 43769.8. Samples: 298942000. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:59:18,240][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:59:21,362][46990] Updated weights for policy 0, policy_version 18250 (0.0031) +[2024-06-10 19:59:23,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.8, 300 sec: 43820.9). Total num frames: 299073536. Throughput: 0: 43734.6. Samples: 299201480. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 19:59:23,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 19:59:24,823][46990] Updated weights for policy 0, policy_version 18260 (0.0034) +[2024-06-10 19:59:28,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 299302912. Throughput: 0: 43738.3. Samples: 299463300. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 19:59:28,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 19:59:28,921][46990] Updated weights for policy 0, policy_version 18270 (0.0031) +[2024-06-10 19:59:32,424][46990] Updated weights for policy 0, policy_version 18280 (0.0036) +[2024-06-10 19:59:33,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 299532288. Throughput: 0: 43793.2. Samples: 299595480. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 19:59:33,240][46753] Avg episode reward: [(0, '0.229')] +[2024-06-10 19:59:36,484][46990] Updated weights for policy 0, policy_version 18290 (0.0034) +[2024-06-10 19:59:38,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 299728896. Throughput: 0: 43517.4. Samples: 299852380. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:59:38,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 19:59:39,831][46990] Updated weights for policy 0, policy_version 18300 (0.0031) +[2024-06-10 19:59:43,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43693.1, 300 sec: 43764.7). Total num frames: 299958272. Throughput: 0: 43566.8. Samples: 300114460. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:59:43,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 19:59:44,302][46990] Updated weights for policy 0, policy_version 18310 (0.0031) +[2024-06-10 19:59:47,327][46990] Updated weights for policy 0, policy_version 18320 (0.0038) +[2024-06-10 19:59:48,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 300187648. Throughput: 0: 43688.1. Samples: 300250720. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 19:59:48,244][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 19:59:51,477][46990] Updated weights for policy 0, policy_version 18330 (0.0042) +[2024-06-10 19:59:53,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43876.5). Total num frames: 300400640. Throughput: 0: 43762.0. Samples: 300515440. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 19:59:53,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 19:59:54,592][46990] Updated weights for policy 0, policy_version 18340 (0.0038) +[2024-06-10 19:59:58,240][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 300613632. Throughput: 0: 43851.1. Samples: 300778460. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 19:59:58,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 19:59:58,515][46990] Updated weights for policy 0, policy_version 18350 (0.0032) +[2024-06-10 20:00:01,903][46990] Updated weights for policy 0, policy_version 18360 (0.0031) +[2024-06-10 20:00:03,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 300843008. Throughput: 0: 43738.3. Samples: 300910220. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 20:00:03,240][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 20:00:06,154][46990] Updated weights for policy 0, policy_version 18370 (0.0032) +[2024-06-10 20:00:08,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 301056000. Throughput: 0: 43971.0. Samples: 301180180. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:00:08,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 20:00:09,384][46990] Updated weights for policy 0, policy_version 18380 (0.0028) +[2024-06-10 20:00:13,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 301268992. Throughput: 0: 43937.0. Samples: 301440460. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:00:13,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 20:00:13,950][46990] Updated weights for policy 0, policy_version 18390 (0.0036) +[2024-06-10 20:00:16,560][46970] Signal inference workers to stop experience collection... (4250 times) +[2024-06-10 20:00:16,561][46970] Signal inference workers to resume experience collection... (4250 times) +[2024-06-10 20:00:16,575][46990] InferenceWorker_p0-w0: stopping experience collection (4250 times) +[2024-06-10 20:00:16,575][46990] InferenceWorker_p0-w0: resuming experience collection (4250 times) +[2024-06-10 20:00:16,722][46990] Updated weights for policy 0, policy_version 18400 (0.0048) +[2024-06-10 20:00:18,240][46753] Fps is (10 sec: 45874.8, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 301514752. Throughput: 0: 44062.5. Samples: 301578300. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 20:00:18,240][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 20:00:21,065][46990] Updated weights for policy 0, policy_version 18410 (0.0038) +[2024-06-10 20:00:23,240][46753] Fps is (10 sec: 45872.7, 60 sec: 44236.4, 300 sec: 43931.3). Total num frames: 301727744. Throughput: 0: 44222.6. Samples: 301842420. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 20:00:23,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 20:00:23,250][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000018416_301727744.pth... +[2024-06-10 20:00:23,314][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000017775_291225600.pth +[2024-06-10 20:00:24,390][46990] Updated weights for policy 0, policy_version 18420 (0.0028) +[2024-06-10 20:00:28,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 301940736. Throughput: 0: 44204.4. Samples: 302103660. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 20:00:28,243][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 20:00:28,337][46990] Updated weights for policy 0, policy_version 18430 (0.0045) +[2024-06-10 20:00:31,581][46990] Updated weights for policy 0, policy_version 18440 (0.0036) +[2024-06-10 20:00:33,239][46753] Fps is (10 sec: 44238.8, 60 sec: 43963.7, 300 sec: 43931.3). 
Total num frames: 302170112. Throughput: 0: 44096.5. Samples: 302235060. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:00:33,241][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 20:00:35,804][46990] Updated weights for policy 0, policy_version 18450 (0.0040) +[2024-06-10 20:00:38,240][46753] Fps is (10 sec: 44233.1, 60 sec: 44236.1, 300 sec: 43875.7). Total num frames: 302383104. Throughput: 0: 44188.1. Samples: 302503940. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:00:38,241][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 20:00:39,070][46990] Updated weights for policy 0, policy_version 18460 (0.0041) +[2024-06-10 20:00:43,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 302596096. Throughput: 0: 44132.0. Samples: 302764400. Policy #0 lag: (min: 0.0, avg: 8.6, max: 20.0) +[2024-06-10 20:00:43,243][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 20:00:43,378][46990] Updated weights for policy 0, policy_version 18470 (0.0039) +[2024-06-10 20:00:46,662][46990] Updated weights for policy 0, policy_version 18480 (0.0036) +[2024-06-10 20:00:48,239][46753] Fps is (10 sec: 44240.3, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 302825472. Throughput: 0: 44095.5. Samples: 302894520. Policy #0 lag: (min: 0.0, avg: 8.6, max: 20.0) +[2024-06-10 20:00:48,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:00:48,241][46970] Saving new best policy, reward=0.246! +[2024-06-10 20:00:51,086][46990] Updated weights for policy 0, policy_version 18490 (0.0036) +[2024-06-10 20:00:53,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 303038464. Throughput: 0: 43963.2. Samples: 303158520. Policy #0 lag: (min: 0.0, avg: 8.6, max: 20.0) +[2024-06-10 20:00:53,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 20:00:54,223][46990] Updated weights for policy 0, policy_version 18500 (0.0032) +[2024-06-10 20:00:58,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 303251456. Throughput: 0: 43905.2. Samples: 303416200. Policy #0 lag: (min: 0.0, avg: 11.0, max: 23.0) +[2024-06-10 20:00:58,240][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 20:00:58,401][46990] Updated weights for policy 0, policy_version 18510 (0.0033) +[2024-06-10 20:01:02,011][46990] Updated weights for policy 0, policy_version 18520 (0.0030) +[2024-06-10 20:01:03,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 303464448. Throughput: 0: 43680.1. Samples: 303543900. Policy #0 lag: (min: 0.0, avg: 11.0, max: 23.0) +[2024-06-10 20:01:03,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:01:05,834][46990] Updated weights for policy 0, policy_version 18530 (0.0041) +[2024-06-10 20:01:08,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 303693824. Throughput: 0: 43829.8. Samples: 303814740. Policy #0 lag: (min: 0.0, avg: 11.0, max: 23.0) +[2024-06-10 20:01:08,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:01:09,159][46990] Updated weights for policy 0, policy_version 18540 (0.0043) +[2024-06-10 20:01:13,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 303906816. Throughput: 0: 43896.1. Samples: 304078980. 
Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:01:13,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:01:13,335][46990] Updated weights for policy 0, policy_version 18550 (0.0042) +[2024-06-10 20:01:16,794][46990] Updated weights for policy 0, policy_version 18560 (0.0035) +[2024-06-10 20:01:18,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.8, 300 sec: 43820.6). Total num frames: 304136192. Throughput: 0: 43860.5. Samples: 304208780. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:01:18,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:01:20,826][46990] Updated weights for policy 0, policy_version 18570 (0.0033) +[2024-06-10 20:01:23,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43691.1, 300 sec: 43820.3). Total num frames: 304349184. Throughput: 0: 43729.3. Samples: 304471720. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:01:23,240][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 20:01:24,298][46990] Updated weights for policy 0, policy_version 18580 (0.0043) +[2024-06-10 20:01:28,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 304545792. Throughput: 0: 43896.6. Samples: 304739740. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 20:01:28,240][46753] Avg episode reward: [(0, '0.223')] +[2024-06-10 20:01:28,426][46990] Updated weights for policy 0, policy_version 18590 (0.0036) +[2024-06-10 20:01:31,727][46990] Updated weights for policy 0, policy_version 18600 (0.0024) +[2024-06-10 20:01:33,239][46753] Fps is (10 sec: 44236.1, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 304791552. Throughput: 0: 43648.9. Samples: 304858720. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 20:01:33,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 20:01:36,013][46990] Updated weights for policy 0, policy_version 18610 (0.0041) +[2024-06-10 20:01:36,994][46970] Signal inference workers to stop experience collection... (4300 times) +[2024-06-10 20:01:37,047][46970] Signal inference workers to resume experience collection... (4300 times) +[2024-06-10 20:01:37,052][46990] InferenceWorker_p0-w0: stopping experience collection (4300 times) +[2024-06-10 20:01:37,073][46990] InferenceWorker_p0-w0: resuming experience collection (4300 times) +[2024-06-10 20:01:38,239][46753] Fps is (10 sec: 47513.1, 60 sec: 43964.3, 300 sec: 43875.8). Total num frames: 305020928. Throughput: 0: 43808.8. Samples: 305129920. Policy #0 lag: (min: 0.0, avg: 9.8, max: 24.0) +[2024-06-10 20:01:38,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 20:01:39,229][46990] Updated weights for policy 0, policy_version 18620 (0.0039) +[2024-06-10 20:01:43,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 305217536. Throughput: 0: 43973.0. Samples: 305394980. Policy #0 lag: (min: 0.0, avg: 9.8, max: 24.0) +[2024-06-10 20:01:43,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:01:43,336][46990] Updated weights for policy 0, policy_version 18630 (0.0032) +[2024-06-10 20:01:46,490][46990] Updated weights for policy 0, policy_version 18640 (0.0036) +[2024-06-10 20:01:48,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 305430528. Throughput: 0: 43993.8. Samples: 305523620. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 24.0) +[2024-06-10 20:01:48,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:01:50,871][46990] Updated weights for policy 0, policy_version 18650 (0.0025) +[2024-06-10 20:01:53,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 305676288. Throughput: 0: 43807.9. Samples: 305786100. Policy #0 lag: (min: 0.0, avg: 7.3, max: 19.0) +[2024-06-10 20:01:53,241][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:01:54,082][46990] Updated weights for policy 0, policy_version 18660 (0.0027) +[2024-06-10 20:01:58,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 305856512. Throughput: 0: 43780.7. Samples: 306049120. Policy #0 lag: (min: 0.0, avg: 7.3, max: 19.0) +[2024-06-10 20:01:58,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:01:58,446][46990] Updated weights for policy 0, policy_version 18670 (0.0041) +[2024-06-10 20:02:01,667][46990] Updated weights for policy 0, policy_version 18680 (0.0036) +[2024-06-10 20:02:03,240][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 306102272. Throughput: 0: 43645.7. Samples: 306172840. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 20:02:03,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 20:02:05,891][46990] Updated weights for policy 0, policy_version 18690 (0.0037) +[2024-06-10 20:02:08,239][46753] Fps is (10 sec: 49152.8, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 306348032. Throughput: 0: 43813.3. Samples: 306443320. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 20:02:08,248][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:02:08,975][46990] Updated weights for policy 0, policy_version 18700 (0.0026) +[2024-06-10 20:02:13,240][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 306528256. Throughput: 0: 43898.1. Samples: 306715160. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 20:02:13,243][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 20:02:13,318][46990] Updated weights for policy 0, policy_version 18710 (0.0026) +[2024-06-10 20:02:16,478][46990] Updated weights for policy 0, policy_version 18720 (0.0032) +[2024-06-10 20:02:18,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43690.6, 300 sec: 43765.4). Total num frames: 306757632. Throughput: 0: 43970.6. Samples: 306837400. Policy #0 lag: (min: 0.0, avg: 12.0, max: 21.0) +[2024-06-10 20:02:18,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:02:21,021][46990] Updated weights for policy 0, policy_version 18730 (0.0040) +[2024-06-10 20:02:23,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43963.5, 300 sec: 43876.0). Total num frames: 306987008. Throughput: 0: 43717.6. Samples: 307097220. Policy #0 lag: (min: 0.0, avg: 12.0, max: 21.0) +[2024-06-10 20:02:23,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 20:02:23,303][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000018738_307003392.pth... +[2024-06-10 20:02:23,358][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000018095_296468480.pth +[2024-06-10 20:02:23,705][46990] Updated weights for policy 0, policy_version 18740 (0.0034) +[2024-06-10 20:02:28,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43963.8, 300 sec: 43764.8). Total num frames: 307183616. Throughput: 0: 43828.9. Samples: 307367280. 
Policy #0 lag: (min: 0.0, avg: 12.0, max: 21.0) +[2024-06-10 20:02:28,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 20:02:28,306][46990] Updated weights for policy 0, policy_version 18750 (0.0039) +[2024-06-10 20:02:31,524][46990] Updated weights for policy 0, policy_version 18760 (0.0032) +[2024-06-10 20:02:33,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43963.8, 300 sec: 43820.2). Total num frames: 307429376. Throughput: 0: 43779.9. Samples: 307493720. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 20:02:33,249][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:02:35,549][46990] Updated weights for policy 0, policy_version 18770 (0.0026) +[2024-06-10 20:02:38,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 307642368. Throughput: 0: 43942.0. Samples: 307763480. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 20:02:38,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:02:38,779][46990] Updated weights for policy 0, policy_version 18780 (0.0036) +[2024-06-10 20:02:43,166][46990] Updated weights for policy 0, policy_version 18790 (0.0040) +[2024-06-10 20:02:43,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 307855360. Throughput: 0: 44089.4. Samples: 308033140. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:02:43,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 20:02:46,185][46990] Updated weights for policy 0, policy_version 18800 (0.0034) +[2024-06-10 20:02:48,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 308068352. Throughput: 0: 44125.9. Samples: 308158500. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:02:48,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 20:02:48,435][46970] Signal inference workers to stop experience collection... (4350 times) +[2024-06-10 20:02:48,436][46970] Signal inference workers to resume experience collection... (4350 times) +[2024-06-10 20:02:48,488][46990] InferenceWorker_p0-w0: stopping experience collection (4350 times) +[2024-06-10 20:02:48,488][46990] InferenceWorker_p0-w0: resuming experience collection (4350 times) +[2024-06-10 20:02:50,650][46990] Updated weights for policy 0, policy_version 18810 (0.0037) +[2024-06-10 20:02:53,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43963.8, 300 sec: 43931.4). Total num frames: 308314112. Throughput: 0: 43824.0. Samples: 308415400. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:02:53,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 20:02:53,526][46990] Updated weights for policy 0, policy_version 18820 (0.0026) +[2024-06-10 20:02:58,244][46753] Fps is (10 sec: 42579.7, 60 sec: 43960.7, 300 sec: 43764.1). Total num frames: 308494336. Throughput: 0: 43678.6. Samples: 308680880. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:02:58,244][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 20:02:58,424][46990] Updated weights for policy 0, policy_version 18830 (0.0045) +[2024-06-10 20:03:01,280][46990] Updated weights for policy 0, policy_version 18840 (0.0034) +[2024-06-10 20:03:03,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 308740096. Throughput: 0: 43817.3. Samples: 308809180. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:03:03,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:03:05,572][46990] Updated weights for policy 0, policy_version 18850 (0.0038) +[2024-06-10 20:03:08,239][46753] Fps is (10 sec: 47533.8, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 308969472. Throughput: 0: 43945.5. Samples: 309074760. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:03:08,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:03:08,655][46990] Updated weights for policy 0, policy_version 18860 (0.0028) +[2024-06-10 20:03:13,195][46990] Updated weights for policy 0, policy_version 18870 (0.0029) +[2024-06-10 20:03:13,239][46753] Fps is (10 sec: 42599.3, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 309166080. Throughput: 0: 44013.3. Samples: 309347880. Policy #0 lag: (min: 0.0, avg: 12.3, max: 21.0) +[2024-06-10 20:03:13,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:03:16,210][46990] Updated weights for policy 0, policy_version 18880 (0.0036) +[2024-06-10 20:03:18,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 309395456. Throughput: 0: 43938.2. Samples: 309470940. Policy #0 lag: (min: 0.0, avg: 12.3, max: 21.0) +[2024-06-10 20:03:18,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:03:20,479][46990] Updated weights for policy 0, policy_version 18890 (0.0034) +[2024-06-10 20:03:23,239][46753] Fps is (10 sec: 47513.9, 60 sec: 44237.1, 300 sec: 43986.9). Total num frames: 309641216. Throughput: 0: 43821.3. Samples: 309735440. Policy #0 lag: (min: 1.0, avg: 10.0, max: 23.0) +[2024-06-10 20:03:23,240][46753] Avg episode reward: [(0, '0.230')] +[2024-06-10 20:03:23,327][46990] Updated weights for policy 0, policy_version 18900 (0.0029) +[2024-06-10 20:03:28,009][46990] Updated weights for policy 0, policy_version 18910 (0.0041) +[2024-06-10 20:03:28,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 309821440. Throughput: 0: 43656.6. Samples: 309997680. Policy #0 lag: (min: 1.0, avg: 10.0, max: 23.0) +[2024-06-10 20:03:28,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 20:03:31,096][46990] Updated weights for policy 0, policy_version 18920 (0.0045) +[2024-06-10 20:03:33,239][46753] Fps is (10 sec: 42597.6, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 310067200. Throughput: 0: 43834.6. Samples: 310131060. Policy #0 lag: (min: 1.0, avg: 10.0, max: 23.0) +[2024-06-10 20:03:33,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 20:03:35,319][46990] Updated weights for policy 0, policy_version 18930 (0.0025) +[2024-06-10 20:03:38,239][46753] Fps is (10 sec: 47513.3, 60 sec: 44236.7, 300 sec: 43931.8). Total num frames: 310296576. Throughput: 0: 43990.2. Samples: 310394960. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:03:38,240][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:03:38,311][46990] Updated weights for policy 0, policy_version 18940 (0.0033) +[2024-06-10 20:03:42,597][46990] Updated weights for policy 0, policy_version 18950 (0.0032) +[2024-06-10 20:03:43,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 310493184. Throughput: 0: 44118.0. Samples: 310666000. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:03:43,240][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:03:45,814][46990] Updated weights for policy 0, policy_version 18960 (0.0041) +[2024-06-10 20:03:48,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 310706176. Throughput: 0: 44042.6. Samples: 310791100. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 20:03:48,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:03:50,272][46990] Updated weights for policy 0, policy_version 18970 (0.0035) +[2024-06-10 20:03:53,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.8, 300 sec: 43931.4). Total num frames: 310951936. Throughput: 0: 44048.6. Samples: 311056940. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 20:03:53,240][46753] Avg episode reward: [(0, '0.245')] +[2024-06-10 20:03:53,305][46990] Updated weights for policy 0, policy_version 18980 (0.0036) +[2024-06-10 20:03:57,773][46990] Updated weights for policy 0, policy_version 18990 (0.0037) +[2024-06-10 20:03:58,239][46753] Fps is (10 sec: 44237.3, 60 sec: 44240.0, 300 sec: 43875.8). Total num frames: 311148544. Throughput: 0: 43621.2. Samples: 311310840. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 20:03:58,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 20:04:01,012][46990] Updated weights for policy 0, policy_version 19000 (0.0029) +[2024-06-10 20:04:03,241][46753] Fps is (10 sec: 42590.9, 60 sec: 43962.6, 300 sec: 43931.1). Total num frames: 311377920. Throughput: 0: 43873.5. Samples: 311445320. Policy #0 lag: (min: 1.0, avg: 11.1, max: 21.0) +[2024-06-10 20:04:03,242][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:04:05,015][46990] Updated weights for policy 0, policy_version 19010 (0.0033) +[2024-06-10 20:04:08,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 311607296. Throughput: 0: 43985.6. Samples: 311714800. Policy #0 lag: (min: 1.0, avg: 11.1, max: 21.0) +[2024-06-10 20:04:08,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:04:08,382][46990] Updated weights for policy 0, policy_version 19020 (0.0031) +[2024-06-10 20:04:12,533][46990] Updated weights for policy 0, policy_version 19030 (0.0031) +[2024-06-10 20:04:13,240][46753] Fps is (10 sec: 42605.4, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 311803904. Throughput: 0: 44094.5. Samples: 311981940. Policy #0 lag: (min: 1.0, avg: 11.1, max: 21.0) +[2024-06-10 20:04:13,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:04:15,755][46990] Updated weights for policy 0, policy_version 19040 (0.0037) +[2024-06-10 20:04:18,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 312033280. Throughput: 0: 43880.5. Samples: 312105680. Policy #0 lag: (min: 1.0, avg: 10.3, max: 22.0) +[2024-06-10 20:04:18,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 20:04:19,867][46990] Updated weights for policy 0, policy_version 19050 (0.0028) +[2024-06-10 20:04:23,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.5, 300 sec: 43931.3). Total num frames: 312262656. Throughput: 0: 44064.9. Samples: 312377880. Policy #0 lag: (min: 1.0, avg: 10.3, max: 22.0) +[2024-06-10 20:04:23,240][46753] Avg episode reward: [(0, '0.227')] +[2024-06-10 20:04:23,247][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000019060_312279040.pth... 
+[2024-06-10 20:04:23,250][46990] Updated weights for policy 0, policy_version 19060 (0.0028) +[2024-06-10 20:04:23,302][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000018416_301727744.pth +[2024-06-10 20:04:27,285][46990] Updated weights for policy 0, policy_version 19070 (0.0040) +[2024-06-10 20:04:28,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 312459264. Throughput: 0: 43677.8. Samples: 312631500. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 20:04:28,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:04:28,241][46970] Saving new best policy, reward=0.248! +[2024-06-10 20:04:30,859][46970] Signal inference workers to stop experience collection... (4400 times) +[2024-06-10 20:04:30,859][46970] Signal inference workers to resume experience collection... (4400 times) +[2024-06-10 20:04:30,904][46990] InferenceWorker_p0-w0: stopping experience collection (4400 times) +[2024-06-10 20:04:30,904][46990] InferenceWorker_p0-w0: resuming experience collection (4400 times) +[2024-06-10 20:04:30,993][46990] Updated weights for policy 0, policy_version 19080 (0.0034) +[2024-06-10 20:04:33,240][46753] Fps is (10 sec: 42597.4, 60 sec: 43690.5, 300 sec: 43931.3). Total num frames: 312688640. Throughput: 0: 43963.4. Samples: 312769460. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 20:04:33,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:04:34,800][46990] Updated weights for policy 0, policy_version 19090 (0.0029) +[2024-06-10 20:04:38,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 312918016. Throughput: 0: 43912.8. Samples: 313033020. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 20:04:38,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:04:38,465][46990] Updated weights for policy 0, policy_version 19100 (0.0033) +[2024-06-10 20:04:42,363][46990] Updated weights for policy 0, policy_version 19110 (0.0035) +[2024-06-10 20:04:43,239][46753] Fps is (10 sec: 45876.1, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 313147392. Throughput: 0: 44141.8. Samples: 313297220. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 20:04:43,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 20:04:45,706][46990] Updated weights for policy 0, policy_version 19120 (0.0041) +[2024-06-10 20:04:48,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.9, 300 sec: 43875.8). Total num frames: 313344000. Throughput: 0: 43924.8. Samples: 313421860. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 20:04:48,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:04:49,621][46990] Updated weights for policy 0, policy_version 19130 (0.0044) +[2024-06-10 20:04:53,240][46753] Fps is (10 sec: 42596.4, 60 sec: 43690.3, 300 sec: 43931.3). Total num frames: 313573376. Throughput: 0: 43920.9. Samples: 313691260. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 20:04:53,241][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 20:04:53,458][46990] Updated weights for policy 0, policy_version 19140 (0.0032) +[2024-06-10 20:04:56,956][46990] Updated weights for policy 0, policy_version 19150 (0.0045) +[2024-06-10 20:04:58,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 313769984. Throughput: 0: 43808.1. Samples: 313953300. 
Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:04:58,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 20:05:00,869][46990] Updated weights for policy 0, policy_version 19160 (0.0038) +[2024-06-10 20:05:03,239][46753] Fps is (10 sec: 44238.9, 60 sec: 43965.0, 300 sec: 43931.3). Total num frames: 314015744. Throughput: 0: 43927.1. Samples: 314082400. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:05:03,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:05:04,304][46990] Updated weights for policy 0, policy_version 19170 (0.0040) +[2024-06-10 20:05:08,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 314228736. Throughput: 0: 43884.9. Samples: 314352700. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:05:08,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:05:08,287][46970] Saving new best policy, reward=0.250! +[2024-06-10 20:05:08,290][46990] Updated weights for policy 0, policy_version 19180 (0.0036) +[2024-06-10 20:05:11,834][46990] Updated weights for policy 0, policy_version 19190 (0.0033) +[2024-06-10 20:05:13,240][46753] Fps is (10 sec: 44236.4, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 314458112. Throughput: 0: 44121.7. Samples: 314616980. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:05:13,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:05:15,408][46990] Updated weights for policy 0, policy_version 19200 (0.0038) +[2024-06-10 20:05:18,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.7, 300 sec: 43875.9). Total num frames: 314671104. Throughput: 0: 44068.2. Samples: 314752520. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:05:18,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 20:05:19,066][46990] Updated weights for policy 0, policy_version 19210 (0.0038) +[2024-06-10 20:05:23,046][46990] Updated weights for policy 0, policy_version 19220 (0.0037) +[2024-06-10 20:05:23,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 314900480. Throughput: 0: 44040.4. Samples: 315014840. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:05:23,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 20:05:26,420][46990] Updated weights for policy 0, policy_version 19230 (0.0036) +[2024-06-10 20:05:28,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 315097088. Throughput: 0: 44227.2. Samples: 315287440. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:05:28,240][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:05:30,765][46990] Updated weights for policy 0, policy_version 19240 (0.0038) +[2024-06-10 20:05:33,239][46753] Fps is (10 sec: 44237.3, 60 sec: 44237.0, 300 sec: 43931.5). Total num frames: 315342848. Throughput: 0: 44254.3. Samples: 315413300. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:05:33,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 20:05:33,820][46990] Updated weights for policy 0, policy_version 19250 (0.0043) +[2024-06-10 20:05:38,042][46990] Updated weights for policy 0, policy_version 19260 (0.0032) +[2024-06-10 20:05:38,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 315555840. Throughput: 0: 44190.7. Samples: 315679820. 
Policy #0 lag: (min: 0.0, avg: 11.9, max: 21.0) +[2024-06-10 20:05:38,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:05:41,634][46990] Updated weights for policy 0, policy_version 19270 (0.0037) +[2024-06-10 20:05:43,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 315752448. Throughput: 0: 44140.1. Samples: 315939600. Policy #0 lag: (min: 0.0, avg: 11.9, max: 21.0) +[2024-06-10 20:05:43,240][46753] Avg episode reward: [(0, '0.232')] +[2024-06-10 20:05:45,682][46990] Updated weights for policy 0, policy_version 19280 (0.0038) +[2024-06-10 20:05:48,199][46970] Signal inference workers to stop experience collection... (4450 times) +[2024-06-10 20:05:48,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 315981824. Throughput: 0: 43999.2. Samples: 316062360. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 20:05:48,240][46753] Avg episode reward: [(0, '0.241')] +[2024-06-10 20:05:48,244][46990] InferenceWorker_p0-w0: stopping experience collection (4450 times) +[2024-06-10 20:05:48,316][46970] Signal inference workers to resume experience collection... (4450 times) +[2024-06-10 20:05:48,316][46990] InferenceWorker_p0-w0: resuming experience collection (4450 times) +[2024-06-10 20:05:48,962][46990] Updated weights for policy 0, policy_version 19290 (0.0039) +[2024-06-10 20:05:53,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43691.1, 300 sec: 43875.8). Total num frames: 316194816. Throughput: 0: 43940.9. Samples: 316330040. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 20:05:53,240][46753] Avg episode reward: [(0, '0.249')] +[2024-06-10 20:05:53,298][46990] Updated weights for policy 0, policy_version 19300 (0.0041) +[2024-06-10 20:05:56,231][46990] Updated weights for policy 0, policy_version 19310 (0.0033) +[2024-06-10 20:05:58,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 316407808. Throughput: 0: 43909.9. Samples: 316592920. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 20:05:58,240][46753] Avg episode reward: [(0, '0.219')] +[2024-06-10 20:06:00,914][46990] Updated weights for policy 0, policy_version 19320 (0.0035) +[2024-06-10 20:06:03,239][46753] Fps is (10 sec: 47513.9, 60 sec: 44236.9, 300 sec: 43986.9). Total num frames: 316669952. Throughput: 0: 43873.9. Samples: 316726840. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 20:06:03,240][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:06:03,594][46990] Updated weights for policy 0, policy_version 19330 (0.0028) +[2024-06-10 20:06:08,009][46990] Updated weights for policy 0, policy_version 19340 (0.0026) +[2024-06-10 20:06:08,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 316866560. Throughput: 0: 43980.6. Samples: 316993960. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 20:06:08,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:06:08,288][46970] Saving new best policy, reward=0.253! +[2024-06-10 20:06:11,326][46990] Updated weights for policy 0, policy_version 19350 (0.0027) +[2024-06-10 20:06:13,239][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.8, 300 sec: 43875.8). Total num frames: 317079552. Throughput: 0: 43836.5. Samples: 317260080. 
Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 20:06:13,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:06:15,455][46990] Updated weights for policy 0, policy_version 19360 (0.0022) +[2024-06-10 20:06:18,239][46753] Fps is (10 sec: 45874.7, 60 sec: 44236.8, 300 sec: 43986.9). Total num frames: 317325312. Throughput: 0: 43748.8. Samples: 317382000. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 20:06:18,240][46753] Avg episode reward: [(0, '0.217')] +[2024-06-10 20:06:19,123][46990] Updated weights for policy 0, policy_version 19370 (0.0026) +[2024-06-10 20:06:23,117][46990] Updated weights for policy 0, policy_version 19380 (0.0033) +[2024-06-10 20:06:23,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43986.9). Total num frames: 317521920. Throughput: 0: 43798.3. Samples: 317650740. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 20:06:23,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:06:23,258][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000019380_317521920.pth... +[2024-06-10 20:06:23,320][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000018738_307003392.pth +[2024-06-10 20:06:26,178][46990] Updated weights for policy 0, policy_version 19390 (0.0038) +[2024-06-10 20:06:28,239][46753] Fps is (10 sec: 42598.7, 60 sec: 44236.8, 300 sec: 43931.4). Total num frames: 317751296. Throughput: 0: 43869.3. Samples: 317913720. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 20:06:28,240][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 20:06:30,589][46990] Updated weights for policy 0, policy_version 19400 (0.0033) +[2024-06-10 20:06:33,239][46753] Fps is (10 sec: 47513.2, 60 sec: 44236.7, 300 sec: 43986.9). Total num frames: 317997056. Throughput: 0: 44119.4. Samples: 318047740. Policy #0 lag: (min: 1.0, avg: 10.2, max: 21.0) +[2024-06-10 20:06:33,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 20:06:33,517][46990] Updated weights for policy 0, policy_version 19410 (0.0030) +[2024-06-10 20:06:37,840][46990] Updated weights for policy 0, policy_version 19420 (0.0031) +[2024-06-10 20:06:38,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 318193664. Throughput: 0: 44203.4. Samples: 318319200. Policy #0 lag: (min: 1.0, avg: 10.2, max: 21.0) +[2024-06-10 20:06:38,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 20:06:41,334][46990] Updated weights for policy 0, policy_version 19430 (0.0048) +[2024-06-10 20:06:43,239][46753] Fps is (10 sec: 40960.3, 60 sec: 44236.8, 300 sec: 43986.9). Total num frames: 318406656. Throughput: 0: 44065.3. Samples: 318575860. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:06:43,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:06:45,185][46990] Updated weights for policy 0, policy_version 19440 (0.0041) +[2024-06-10 20:06:48,239][46753] Fps is (10 sec: 44237.4, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 318636032. Throughput: 0: 43956.4. Samples: 318704880. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:06:48,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:06:48,935][46990] Updated weights for policy 0, policy_version 19450 (0.0035) +[2024-06-10 20:06:52,873][46990] Updated weights for policy 0, policy_version 19460 (0.0053) +[2024-06-10 20:06:53,240][46753] Fps is (10 sec: 44236.1, 60 sec: 44236.7, 300 sec: 44042.4). Total num frames: 318849024. 
Throughput: 0: 43855.3. Samples: 318967460. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:06:53,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:06:56,295][46990] Updated weights for policy 0, policy_version 19470 (0.0027) +[2024-06-10 20:06:58,239][46753] Fps is (10 sec: 42598.2, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 319062016. Throughput: 0: 43823.6. Samples: 319232140. Policy #0 lag: (min: 0.0, avg: 12.3, max: 23.0) +[2024-06-10 20:06:58,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:07:00,335][46990] Updated weights for policy 0, policy_version 19480 (0.0037) +[2024-06-10 20:07:03,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.6, 300 sec: 43931.3). Total num frames: 319307776. Throughput: 0: 44053.3. Samples: 319364400. Policy #0 lag: (min: 0.0, avg: 12.3, max: 23.0) +[2024-06-10 20:07:03,240][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:07:03,643][46990] Updated weights for policy 0, policy_version 19490 (0.0030) +[2024-06-10 20:07:04,543][46970] Signal inference workers to stop experience collection... (4500 times) +[2024-06-10 20:07:04,543][46970] Signal inference workers to resume experience collection... (4500 times) +[2024-06-10 20:07:04,558][46990] InferenceWorker_p0-w0: stopping experience collection (4500 times) +[2024-06-10 20:07:04,558][46990] InferenceWorker_p0-w0: resuming experience collection (4500 times) +[2024-06-10 20:07:07,595][46990] Updated weights for policy 0, policy_version 19500 (0.0032) +[2024-06-10 20:07:08,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 319504384. Throughput: 0: 43985.4. Samples: 319630080. Policy #0 lag: (min: 0.0, avg: 12.3, max: 23.0) +[2024-06-10 20:07:08,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 20:07:11,240][46990] Updated weights for policy 0, policy_version 19510 (0.0038) +[2024-06-10 20:07:13,239][46753] Fps is (10 sec: 40960.7, 60 sec: 43963.8, 300 sec: 43931.4). Total num frames: 319717376. Throughput: 0: 43933.8. Samples: 319890740. Policy #0 lag: (min: 1.0, avg: 9.7, max: 21.0) +[2024-06-10 20:07:13,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:07:15,042][46990] Updated weights for policy 0, policy_version 19520 (0.0036) +[2024-06-10 20:07:18,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43931.4). Total num frames: 319946752. Throughput: 0: 43829.0. Samples: 320020040. Policy #0 lag: (min: 1.0, avg: 9.7, max: 21.0) +[2024-06-10 20:07:18,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:07:18,973][46990] Updated weights for policy 0, policy_version 19530 (0.0038) +[2024-06-10 20:07:22,823][46990] Updated weights for policy 0, policy_version 19540 (0.0032) +[2024-06-10 20:07:23,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 320159744. Throughput: 0: 43608.0. Samples: 320281560. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 20:07:23,240][46753] Avg episode reward: [(0, '0.228')] +[2024-06-10 20:07:26,202][46990] Updated weights for policy 0, policy_version 19550 (0.0036) +[2024-06-10 20:07:28,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 320372736. Throughput: 0: 43741.7. Samples: 320544240. 
Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 20:07:28,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 20:07:30,064][46990] Updated weights for policy 0, policy_version 19560 (0.0033) +[2024-06-10 20:07:33,240][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43986.8). Total num frames: 320618496. Throughput: 0: 43823.4. Samples: 320676940. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 20:07:33,240][46753] Avg episode reward: [(0, '0.241')] +[2024-06-10 20:07:33,974][46990] Updated weights for policy 0, policy_version 19570 (0.0044) +[2024-06-10 20:07:37,520][46990] Updated weights for policy 0, policy_version 19580 (0.0033) +[2024-06-10 20:07:38,243][46753] Fps is (10 sec: 44219.6, 60 sec: 43687.9, 300 sec: 43930.8). Total num frames: 320815104. Throughput: 0: 43831.4. Samples: 320940040. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 20:07:38,244][46753] Avg episode reward: [(0, '0.235')] +[2024-06-10 20:07:41,577][46990] Updated weights for policy 0, policy_version 19590 (0.0038) +[2024-06-10 20:07:43,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 321028096. Throughput: 0: 43727.1. Samples: 321199860. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 20:07:43,240][46753] Avg episode reward: [(0, '0.231')] +[2024-06-10 20:07:44,961][46990] Updated weights for policy 0, policy_version 19600 (0.0038) +[2024-06-10 20:07:48,239][46753] Fps is (10 sec: 44254.0, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 321257472. Throughput: 0: 43658.7. Samples: 321329040. Policy #0 lag: (min: 0.0, avg: 9.8, max: 22.0) +[2024-06-10 20:07:48,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:07:48,767][46990] Updated weights for policy 0, policy_version 19610 (0.0036) +[2024-06-10 20:07:52,662][46990] Updated weights for policy 0, policy_version 19620 (0.0028) +[2024-06-10 20:07:53,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.7, 300 sec: 43987.5). Total num frames: 321470464. Throughput: 0: 43567.4. Samples: 321590620. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:07:53,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 20:07:55,893][46990] Updated weights for policy 0, policy_version 19630 (0.0029) +[2024-06-10 20:07:58,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 321683456. Throughput: 0: 43755.0. Samples: 321859720. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:07:58,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:07:59,965][46990] Updated weights for policy 0, policy_version 19640 (0.0030) +[2024-06-10 20:08:03,240][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.6, 300 sec: 43931.3). Total num frames: 321929216. Throughput: 0: 43766.5. Samples: 321989540. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:08:03,240][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 20:08:03,469][46990] Updated weights for policy 0, policy_version 19650 (0.0034) +[2024-06-10 20:08:07,396][46990] Updated weights for policy 0, policy_version 19660 (0.0037) +[2024-06-10 20:08:08,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 322142208. Throughput: 0: 43858.3. Samples: 322255180. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:08:08,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:08:11,241][46990] Updated weights for policy 0, policy_version 19670 (0.0036) +[2024-06-10 20:08:13,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 322338816. Throughput: 0: 43848.0. Samples: 322517400. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:08:13,241][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:08:15,050][46990] Updated weights for policy 0, policy_version 19680 (0.0038) +[2024-06-10 20:08:18,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 322568192. Throughput: 0: 43720.1. Samples: 322644340. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:08:18,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:08:18,543][46990] Updated weights for policy 0, policy_version 19690 (0.0039) +[2024-06-10 20:08:22,588][46990] Updated weights for policy 0, policy_version 19700 (0.0035) +[2024-06-10 20:08:23,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43963.9, 300 sec: 43986.9). Total num frames: 322797568. Throughput: 0: 43717.7. Samples: 322907160. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 20:08:23,240][46753] Avg episode reward: [(0, '0.249')] +[2024-06-10 20:08:23,367][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000019703_322813952.pth... +[2024-06-10 20:08:23,414][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000019060_312279040.pth +[2024-06-10 20:08:25,801][46990] Updated weights for policy 0, policy_version 19710 (0.0034) +[2024-06-10 20:08:28,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 322994176. Throughput: 0: 43904.1. Samples: 323175540. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 20:08:28,240][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 20:08:29,882][46990] Updated weights for policy 0, policy_version 19720 (0.0036) +[2024-06-10 20:08:33,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 323239936. Throughput: 0: 43932.0. Samples: 323305980. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 20:08:33,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:08:33,433][46990] Updated weights for policy 0, policy_version 19730 (0.0036) +[2024-06-10 20:08:37,323][46990] Updated weights for policy 0, policy_version 19740 (0.0048) +[2024-06-10 20:08:38,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43966.7, 300 sec: 43931.3). Total num frames: 323452928. Throughput: 0: 44100.7. Samples: 323575140. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 20:08:38,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:08:41,036][46990] Updated weights for policy 0, policy_version 19750 (0.0030) +[2024-06-10 20:08:43,244][46753] Fps is (10 sec: 40941.6, 60 sec: 43687.4, 300 sec: 43875.1). Total num frames: 323649536. Throughput: 0: 43910.8. Samples: 323835900. Policy #0 lag: (min: 0.0, avg: 11.1, max: 22.0) +[2024-06-10 20:08:43,245][46753] Avg episode reward: [(0, '0.236')] +[2024-06-10 20:08:44,451][46970] Signal inference workers to stop experience collection... (4550 times) +[2024-06-10 20:08:44,484][46990] InferenceWorker_p0-w0: stopping experience collection (4550 times) +[2024-06-10 20:08:44,494][46970] Signal inference workers to resume experience collection... 
(4550 times) +[2024-06-10 20:08:44,509][46990] InferenceWorker_p0-w0: resuming experience collection (4550 times) +[2024-06-10 20:08:44,630][46990] Updated weights for policy 0, policy_version 19760 (0.0032) +[2024-06-10 20:08:48,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 323895296. Throughput: 0: 43871.2. Samples: 323963740. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:08:48,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:08:48,270][46990] Updated weights for policy 0, policy_version 19770 (0.0034) +[2024-06-10 20:08:52,135][46990] Updated weights for policy 0, policy_version 19780 (0.0026) +[2024-06-10 20:08:53,239][46753] Fps is (10 sec: 45896.1, 60 sec: 43963.9, 300 sec: 43931.3). Total num frames: 324108288. Throughput: 0: 43785.4. Samples: 324225520. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:08:53,240][46753] Avg episode reward: [(0, '0.224')] +[2024-06-10 20:08:55,696][46990] Updated weights for policy 0, policy_version 19790 (0.0033) +[2024-06-10 20:08:58,244][46753] Fps is (10 sec: 42579.7, 60 sec: 43960.5, 300 sec: 43875.4). Total num frames: 324321280. Throughput: 0: 44061.0. Samples: 324500340. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:08:58,244][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:08:59,510][46990] Updated weights for policy 0, policy_version 19800 (0.0029) +[2024-06-10 20:09:03,238][46990] Updated weights for policy 0, policy_version 19810 (0.0038) +[2024-06-10 20:09:03,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43963.9, 300 sec: 43931.4). Total num frames: 324567040. Throughput: 0: 43997.8. Samples: 324624240. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:09:03,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:09:06,993][46990] Updated weights for policy 0, policy_version 19820 (0.0028) +[2024-06-10 20:09:08,239][46753] Fps is (10 sec: 45896.2, 60 sec: 43963.8, 300 sec: 43986.9). Total num frames: 324780032. Throughput: 0: 44084.4. Samples: 324890960. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:09:08,240][46753] Avg episode reward: [(0, '0.233')] +[2024-06-10 20:09:10,825][46990] Updated weights for policy 0, policy_version 19830 (0.0041) +[2024-06-10 20:09:13,240][46753] Fps is (10 sec: 40959.0, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 324976640. Throughput: 0: 43938.4. Samples: 325152780. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:09:13,240][46753] Avg episode reward: [(0, '0.245')] +[2024-06-10 20:09:14,711][46990] Updated weights for policy 0, policy_version 19840 (0.0039) +[2024-06-10 20:09:18,227][46990] Updated weights for policy 0, policy_version 19850 (0.0037) +[2024-06-10 20:09:18,240][46753] Fps is (10 sec: 44235.7, 60 sec: 44236.7, 300 sec: 43931.3). Total num frames: 325222400. Throughput: 0: 43876.3. Samples: 325280420. Policy #0 lag: (min: 0.0, avg: 10.4, max: 23.0) +[2024-06-10 20:09:18,240][46753] Avg episode reward: [(0, '0.249')] +[2024-06-10 20:09:21,865][46990] Updated weights for policy 0, policy_version 19860 (0.0037) +[2024-06-10 20:09:23,240][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.6, 300 sec: 43986.9). Total num frames: 325435392. Throughput: 0: 43590.5. Samples: 325536720. 
Policy #0 lag: (min: 0.0, avg: 10.4, max: 23.0) +[2024-06-10 20:09:23,240][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:09:25,501][46990] Updated weights for policy 0, policy_version 19870 (0.0032) +[2024-06-10 20:09:28,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 325632000. Throughput: 0: 43908.4. Samples: 325811580. Policy #0 lag: (min: 0.0, avg: 10.4, max: 23.0) +[2024-06-10 20:09:28,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:09:29,287][46990] Updated weights for policy 0, policy_version 19880 (0.0033) +[2024-06-10 20:09:33,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 325861376. Throughput: 0: 43919.6. Samples: 325940120. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 20:09:33,240][46753] Avg episode reward: [(0, '0.234')] +[2024-06-10 20:09:33,271][46990] Updated weights for policy 0, policy_version 19890 (0.0027) +[2024-06-10 20:09:36,839][46990] Updated weights for policy 0, policy_version 19900 (0.0039) +[2024-06-10 20:09:38,244][46753] Fps is (10 sec: 45854.8, 60 sec: 43960.4, 300 sec: 43875.1). Total num frames: 326090752. Throughput: 0: 43871.1. Samples: 326199920. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 20:09:38,245][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:09:40,745][46990] Updated weights for policy 0, policy_version 19910 (0.0029) +[2024-06-10 20:09:43,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43967.1, 300 sec: 43875.8). Total num frames: 326287360. Throughput: 0: 43690.6. Samples: 326466220. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:09:43,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:09:44,493][46990] Updated weights for policy 0, policy_version 19920 (0.0037) +[2024-06-10 20:09:48,185][46990] Updated weights for policy 0, policy_version 19930 (0.0025) +[2024-06-10 20:09:48,240][46753] Fps is (10 sec: 44256.1, 60 sec: 43963.7, 300 sec: 43931.4). Total num frames: 326533120. Throughput: 0: 43734.9. Samples: 326592320. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:09:48,240][46753] Avg episode reward: [(0, '0.225')] +[2024-06-10 20:09:51,700][46990] Updated weights for policy 0, policy_version 19940 (0.0034) +[2024-06-10 20:09:53,239][46753] Fps is (10 sec: 47513.8, 60 sec: 44236.8, 300 sec: 44042.4). Total num frames: 326762496. Throughput: 0: 43572.0. Samples: 326851700. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:09:53,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:09:55,714][46990] Updated weights for policy 0, policy_version 19950 (0.0028) +[2024-06-10 20:09:58,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43967.0, 300 sec: 43875.8). Total num frames: 326959104. Throughput: 0: 43782.8. Samples: 327123000. Policy #0 lag: (min: 1.0, avg: 9.6, max: 21.0) +[2024-06-10 20:09:58,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:09:58,241][46970] Saving new best policy, reward=0.257! +[2024-06-10 20:09:59,183][46990] Updated weights for policy 0, policy_version 19960 (0.0046) +[2024-06-10 20:10:02,992][46990] Updated weights for policy 0, policy_version 19970 (0.0034) +[2024-06-10 20:10:03,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 327188480. Throughput: 0: 43946.0. Samples: 327257980. 
Policy #0 lag: (min: 1.0, avg: 9.6, max: 21.0) +[2024-06-10 20:10:03,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:10:06,695][46990] Updated weights for policy 0, policy_version 19980 (0.0038) +[2024-06-10 20:10:08,244][46753] Fps is (10 sec: 45854.6, 60 sec: 43960.4, 300 sec: 43930.7). Total num frames: 327417856. Throughput: 0: 43981.0. Samples: 327516060. Policy #0 lag: (min: 1.0, avg: 9.6, max: 21.0) +[2024-06-10 20:10:08,244][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:10:09,513][46970] Signal inference workers to stop experience collection... (4600 times) +[2024-06-10 20:10:09,562][46990] InferenceWorker_p0-w0: stopping experience collection (4600 times) +[2024-06-10 20:10:09,566][46970] Signal inference workers to resume experience collection... (4600 times) +[2024-06-10 20:10:09,579][46990] InferenceWorker_p0-w0: resuming experience collection (4600 times) +[2024-06-10 20:10:10,668][46990] Updated weights for policy 0, policy_version 19990 (0.0034) +[2024-06-10 20:10:13,241][46753] Fps is (10 sec: 42589.9, 60 sec: 43962.4, 300 sec: 43875.5). Total num frames: 327614464. Throughput: 0: 43772.4. Samples: 327781420. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:10:13,242][46753] Avg episode reward: [(0, '0.226')] +[2024-06-10 20:10:13,901][46990] Updated weights for policy 0, policy_version 20000 (0.0029) +[2024-06-10 20:10:18,122][46990] Updated weights for policy 0, policy_version 20010 (0.0032) +[2024-06-10 20:10:18,240][46753] Fps is (10 sec: 42615.6, 60 sec: 43690.4, 300 sec: 43875.7). Total num frames: 327843840. Throughput: 0: 43779.1. Samples: 327910200. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:10:18,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:10:21,909][46990] Updated weights for policy 0, policy_version 20020 (0.0026) +[2024-06-10 20:10:23,239][46753] Fps is (10 sec: 44245.3, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 328056832. Throughput: 0: 43873.2. Samples: 328174020. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:10:23,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:10:23,289][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020024_328073216.pth... +[2024-06-10 20:10:23,335][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000019380_317521920.pth +[2024-06-10 20:10:25,327][46990] Updated weights for policy 0, policy_version 20030 (0.0032) +[2024-06-10 20:10:28,239][46753] Fps is (10 sec: 44238.6, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 328286208. Throughput: 0: 43664.8. Samples: 328431140. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 20:10:28,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:10:29,389][46990] Updated weights for policy 0, policy_version 20040 (0.0042) +[2024-06-10 20:10:33,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 328482816. Throughput: 0: 43883.8. Samples: 328567080. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 20:10:33,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:10:33,246][46990] Updated weights for policy 0, policy_version 20050 (0.0039) +[2024-06-10 20:10:36,537][46990] Updated weights for policy 0, policy_version 20060 (0.0022) +[2024-06-10 20:10:38,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43967.0, 300 sec: 43986.9). Total num frames: 328728576. Throughput: 0: 44049.6. Samples: 328833940. 
Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 20:10:38,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:10:40,427][46990] Updated weights for policy 0, policy_version 20070 (0.0030) +[2024-06-10 20:10:43,244][46753] Fps is (10 sec: 44216.3, 60 sec: 43960.4, 300 sec: 43875.1). Total num frames: 328925184. Throughput: 0: 43719.6. Samples: 329090580. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 20:10:43,245][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:10:44,228][46990] Updated weights for policy 0, policy_version 20080 (0.0027) +[2024-06-10 20:10:48,118][46990] Updated weights for policy 0, policy_version 20090 (0.0032) +[2024-06-10 20:10:48,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 329154560. Throughput: 0: 43715.9. Samples: 329225200. Policy #0 lag: (min: 0.0, avg: 10.8, max: 22.0) +[2024-06-10 20:10:48,240][46753] Avg episode reward: [(0, '0.245')] +[2024-06-10 20:10:51,493][46990] Updated weights for policy 0, policy_version 20100 (0.0041) +[2024-06-10 20:10:53,240][46753] Fps is (10 sec: 44256.3, 60 sec: 43417.4, 300 sec: 43931.3). Total num frames: 329367552. Throughput: 0: 43854.9. Samples: 329489340. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 20:10:53,241][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:10:56,134][46990] Updated weights for policy 0, policy_version 20110 (0.0048) +[2024-06-10 20:10:58,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 329580544. Throughput: 0: 43786.3. Samples: 329751720. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 20:10:58,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:10:59,012][46990] Updated weights for policy 0, policy_version 20120 (0.0034) +[2024-06-10 20:11:03,240][46753] Fps is (10 sec: 42598.5, 60 sec: 43417.5, 300 sec: 43820.2). Total num frames: 329793536. Throughput: 0: 43838.1. Samples: 329882900. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 20:11:03,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:11:03,634][46990] Updated weights for policy 0, policy_version 20130 (0.0036) +[2024-06-10 20:11:06,504][46990] Updated weights for policy 0, policy_version 20140 (0.0045) +[2024-06-10 20:11:08,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43420.9, 300 sec: 43875.8). Total num frames: 330022912. Throughput: 0: 43819.6. Samples: 330145900. Policy #0 lag: (min: 1.0, avg: 10.1, max: 21.0) +[2024-06-10 20:11:08,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:11:11,090][46990] Updated weights for policy 0, policy_version 20150 (0.0038) +[2024-06-10 20:11:13,239][46753] Fps is (10 sec: 47514.1, 60 sec: 44238.2, 300 sec: 43875.8). Total num frames: 330268672. Throughput: 0: 43859.2. Samples: 330404800. Policy #0 lag: (min: 1.0, avg: 10.1, max: 21.0) +[2024-06-10 20:11:13,240][46753] Avg episode reward: [(0, '0.241')] +[2024-06-10 20:11:14,063][46990] Updated weights for policy 0, policy_version 20160 (0.0032) +[2024-06-10 20:11:18,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43144.9, 300 sec: 43764.7). Total num frames: 330432512. Throughput: 0: 43806.7. Samples: 330538380. 
Policy #0 lag: (min: 1.0, avg: 10.1, max: 21.0) +[2024-06-10 20:11:18,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:11:18,401][46990] Updated weights for policy 0, policy_version 20170 (0.0044) +[2024-06-10 20:11:21,634][46990] Updated weights for policy 0, policy_version 20180 (0.0027) +[2024-06-10 20:11:23,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 330678272. Throughput: 0: 43797.8. Samples: 330804840. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 20:11:23,240][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:11:25,803][46990] Updated weights for policy 0, policy_version 20190 (0.0035) +[2024-06-10 20:11:28,239][46753] Fps is (10 sec: 47513.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 330907648. Throughput: 0: 43831.1. Samples: 331062780. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 20:11:28,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:11:29,216][46990] Updated weights for policy 0, policy_version 20200 (0.0027) +[2024-06-10 20:11:30,250][46970] Signal inference workers to stop experience collection... (4650 times) +[2024-06-10 20:11:30,251][46970] Signal inference workers to resume experience collection... (4650 times) +[2024-06-10 20:11:30,271][46990] InferenceWorker_p0-w0: stopping experience collection (4650 times) +[2024-06-10 20:11:30,271][46990] InferenceWorker_p0-w0: resuming experience collection (4650 times) +[2024-06-10 20:11:33,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 331104256. Throughput: 0: 43848.1. Samples: 331198360. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 20:11:33,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:11:33,290][46990] Updated weights for policy 0, policy_version 20210 (0.0034) +[2024-06-10 20:11:36,403][46990] Updated weights for policy 0, policy_version 20220 (0.0033) +[2024-06-10 20:11:38,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43144.6, 300 sec: 43764.7). Total num frames: 331317248. Throughput: 0: 43873.5. Samples: 331463640. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 20:11:38,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:11:41,025][46990] Updated weights for policy 0, policy_version 20230 (0.0042) +[2024-06-10 20:11:43,240][46753] Fps is (10 sec: 47513.1, 60 sec: 44240.1, 300 sec: 43875.8). Total num frames: 331579392. Throughput: 0: 43762.2. Samples: 331721020. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 20:11:43,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:11:44,335][46990] Updated weights for policy 0, policy_version 20240 (0.0044) +[2024-06-10 20:11:48,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 331759616. Throughput: 0: 43796.1. Samples: 331853720. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 20:11:48,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:11:48,257][46990] Updated weights for policy 0, policy_version 20250 (0.0038) +[2024-06-10 20:11:51,806][46990] Updated weights for policy 0, policy_version 20260 (0.0026) +[2024-06-10 20:11:53,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 332005376. Throughput: 0: 43918.6. Samples: 332122240. 
Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 20:11:53,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:11:55,339][46990] Updated weights for policy 0, policy_version 20270 (0.0031) +[2024-06-10 20:11:58,239][46753] Fps is (10 sec: 47513.9, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 332234752. Throughput: 0: 43891.6. Samples: 332379920. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 20:11:58,240][46753] Avg episode reward: [(0, '0.249')] +[2024-06-10 20:11:59,134][46990] Updated weights for policy 0, policy_version 20280 (0.0034) +[2024-06-10 20:12:03,171][46990] Updated weights for policy 0, policy_version 20290 (0.0038) +[2024-06-10 20:12:03,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 332431360. Throughput: 0: 43875.3. Samples: 332512780. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 20:12:03,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 20:12:06,605][46990] Updated weights for policy 0, policy_version 20300 (0.0036) +[2024-06-10 20:12:08,242][46753] Fps is (10 sec: 40951.2, 60 sec: 43689.1, 300 sec: 43819.9). Total num frames: 332644352. Throughput: 0: 43698.4. Samples: 332771360. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 20:12:08,242][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:12:10,937][46990] Updated weights for policy 0, policy_version 20310 (0.0039) +[2024-06-10 20:12:13,240][46753] Fps is (10 sec: 44237.0, 60 sec: 43417.5, 300 sec: 43820.2). Total num frames: 332873728. Throughput: 0: 43718.6. Samples: 333030120. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 20:12:13,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:12:14,450][46990] Updated weights for policy 0, policy_version 20320 (0.0044) +[2024-06-10 20:12:18,143][46990] Updated weights for policy 0, policy_version 20330 (0.0037) +[2024-06-10 20:12:18,239][46753] Fps is (10 sec: 44246.3, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 333086720. Throughput: 0: 43608.0. Samples: 333160720. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 20:12:18,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:12:22,015][46990] Updated weights for policy 0, policy_version 20340 (0.0026) +[2024-06-10 20:12:23,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 333316096. Throughput: 0: 43728.9. Samples: 333431440. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 20:12:23,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:12:23,248][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020344_333316096.pth... +[2024-06-10 20:12:23,301][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000019703_322813952.pth +[2024-06-10 20:12:23,309][46970] Saving new best policy, reward=0.262! +[2024-06-10 20:12:25,510][46990] Updated weights for policy 0, policy_version 20350 (0.0035) +[2024-06-10 20:12:28,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 333545472. Throughput: 0: 43833.5. Samples: 333693520. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 20:12:28,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:12:29,490][46990] Updated weights for policy 0, policy_version 20360 (0.0036) +[2024-06-10 20:12:33,044][46970] Signal inference workers to stop experience collection... 
(4700 times) +[2024-06-10 20:12:33,044][46970] Signal inference workers to resume experience collection... (4700 times) +[2024-06-10 20:12:33,089][46990] InferenceWorker_p0-w0: stopping experience collection (4700 times) +[2024-06-10 20:12:33,089][46990] InferenceWorker_p0-w0: resuming experience collection (4700 times) +[2024-06-10 20:12:33,179][46990] Updated weights for policy 0, policy_version 20370 (0.0024) +[2024-06-10 20:12:33,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43963.7, 300 sec: 43820.8). Total num frames: 333742080. Throughput: 0: 43776.9. Samples: 333823680. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 20:12:33,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:12:37,134][46990] Updated weights for policy 0, policy_version 20380 (0.0048) +[2024-06-10 20:12:38,239][46753] Fps is (10 sec: 42597.8, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 333971456. Throughput: 0: 43741.3. Samples: 334090600. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 20:12:38,240][46753] Avg episode reward: [(0, '0.252')] +[2024-06-10 20:12:40,770][46990] Updated weights for policy 0, policy_version 20390 (0.0042) +[2024-06-10 20:12:43,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 334200832. Throughput: 0: 43698.2. Samples: 334346340. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 20:12:43,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:12:44,580][46990] Updated weights for policy 0, policy_version 20400 (0.0048) +[2024-06-10 20:12:48,181][46990] Updated weights for policy 0, policy_version 20410 (0.0026) +[2024-06-10 20:12:48,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 334397440. Throughput: 0: 43588.7. Samples: 334474260. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 20:12:48,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:12:52,097][46990] Updated weights for policy 0, policy_version 20420 (0.0037) +[2024-06-10 20:12:53,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 334626816. Throughput: 0: 43827.5. Samples: 334743500. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 20:12:53,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:12:55,402][46990] Updated weights for policy 0, policy_version 20430 (0.0023) +[2024-06-10 20:12:58,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 334839808. Throughput: 0: 43881.0. Samples: 335004760. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 20:12:58,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:12:59,398][46990] Updated weights for policy 0, policy_version 20440 (0.0031) +[2024-06-10 20:13:03,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43417.8, 300 sec: 43709.2). Total num frames: 335036416. Throughput: 0: 43844.9. Samples: 335133740. Policy #0 lag: (min: 1.0, avg: 9.2, max: 22.0) +[2024-06-10 20:13:03,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:13:03,349][46990] Updated weights for policy 0, policy_version 20450 (0.0043) +[2024-06-10 20:13:06,934][46990] Updated weights for policy 0, policy_version 20460 (0.0042) +[2024-06-10 20:13:08,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43965.3, 300 sec: 43875.8). Total num frames: 335282176. Throughput: 0: 43640.4. Samples: 335395260. 
Policy #0 lag: (min: 1.0, avg: 9.2, max: 22.0) +[2024-06-10 20:13:08,240][46753] Avg episode reward: [(0, '0.238')] +[2024-06-10 20:13:10,926][46990] Updated weights for policy 0, policy_version 20470 (0.0035) +[2024-06-10 20:13:13,240][46753] Fps is (10 sec: 45874.5, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 335495168. Throughput: 0: 43502.1. Samples: 335651120. Policy #0 lag: (min: 1.0, avg: 9.2, max: 22.0) +[2024-06-10 20:13:13,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:13:14,497][46990] Updated weights for policy 0, policy_version 20480 (0.0052) +[2024-06-10 20:13:18,175][46990] Updated weights for policy 0, policy_version 20490 (0.0031) +[2024-06-10 20:13:18,244][46753] Fps is (10 sec: 42579.3, 60 sec: 43687.4, 300 sec: 43764.0). Total num frames: 335708160. Throughput: 0: 43567.3. Samples: 335784400. Policy #0 lag: (min: 1.0, avg: 10.3, max: 21.0) +[2024-06-10 20:13:18,244][46753] Avg episode reward: [(0, '0.237')] +[2024-06-10 20:13:21,944][46990] Updated weights for policy 0, policy_version 20500 (0.0034) +[2024-06-10 20:13:23,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 335921152. Throughput: 0: 43663.7. Samples: 336055460. Policy #0 lag: (min: 1.0, avg: 10.3, max: 21.0) +[2024-06-10 20:13:23,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:13:25,568][46990] Updated weights for policy 0, policy_version 20510 (0.0034) +[2024-06-10 20:13:28,239][46753] Fps is (10 sec: 44256.2, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 336150528. Throughput: 0: 43944.4. Samples: 336323840. Policy #0 lag: (min: 1.0, avg: 10.3, max: 21.0) +[2024-06-10 20:13:28,241][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:13:29,224][46990] Updated weights for policy 0, policy_version 20520 (0.0026) +[2024-06-10 20:13:33,056][46990] Updated weights for policy 0, policy_version 20530 (0.0034) +[2024-06-10 20:13:33,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 336379904. Throughput: 0: 43939.4. Samples: 336451540. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:13:33,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:13:36,493][46990] Updated weights for policy 0, policy_version 20540 (0.0039) +[2024-06-10 20:13:38,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.7, 300 sec: 43820.9). Total num frames: 336576512. Throughput: 0: 43792.0. Samples: 336714140. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:13:38,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:13:40,506][46990] Updated weights for policy 0, policy_version 20550 (0.0031) +[2024-06-10 20:13:43,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 336805888. Throughput: 0: 43753.3. Samples: 336973660. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:13:43,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:13:44,239][46990] Updated weights for policy 0, policy_version 20560 (0.0038) +[2024-06-10 20:13:47,784][46990] Updated weights for policy 0, policy_version 20570 (0.0042) +[2024-06-10 20:13:48,239][46753] Fps is (10 sec: 47513.1, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 337051648. Throughput: 0: 43895.5. Samples: 337109040. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:13:48,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:13:52,070][46990] Updated weights for policy 0, policy_version 20580 (0.0023) +[2024-06-10 20:13:53,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.6, 300 sec: 43765.4). Total num frames: 337231872. Throughput: 0: 44034.2. Samples: 337376800. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:13:53,240][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:13:55,239][46990] Updated weights for policy 0, policy_version 20590 (0.0028) +[2024-06-10 20:13:56,279][46970] Signal inference workers to stop experience collection... (4750 times) +[2024-06-10 20:13:56,280][46970] Signal inference workers to resume experience collection... (4750 times) +[2024-06-10 20:13:56,311][46990] InferenceWorker_p0-w0: stopping experience collection (4750 times) +[2024-06-10 20:13:56,311][46990] InferenceWorker_p0-w0: resuming experience collection (4750 times) +[2024-06-10 20:13:58,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 337477632. Throughput: 0: 44080.6. Samples: 337634740. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 20:13:58,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:13:59,249][46990] Updated weights for policy 0, policy_version 20600 (0.0036) +[2024-06-10 20:14:02,693][46990] Updated weights for policy 0, policy_version 20610 (0.0025) +[2024-06-10 20:14:03,239][46753] Fps is (10 sec: 45875.5, 60 sec: 44236.8, 300 sec: 43764.7). Total num frames: 337690624. Throughput: 0: 43892.0. Samples: 337759340. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 20:14:03,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:14:06,382][46990] Updated weights for policy 0, policy_version 20620 (0.0031) +[2024-06-10 20:14:08,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 337903616. Throughput: 0: 43868.8. Samples: 338029560. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 20:14:08,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:14:09,944][46990] Updated weights for policy 0, policy_version 20630 (0.0027) +[2024-06-10 20:14:13,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 338132992. Throughput: 0: 43746.7. Samples: 338292440. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 20:14:13,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:14:14,168][46990] Updated weights for policy 0, policy_version 20640 (0.0055) +[2024-06-10 20:14:17,298][46990] Updated weights for policy 0, policy_version 20650 (0.0039) +[2024-06-10 20:14:18,239][46753] Fps is (10 sec: 47513.5, 60 sec: 44513.1, 300 sec: 43875.8). Total num frames: 338378752. Throughput: 0: 43858.7. Samples: 338425180. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 20:14:18,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:14:21,787][46990] Updated weights for policy 0, policy_version 20660 (0.0034) +[2024-06-10 20:14:23,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 338558976. Throughput: 0: 44059.0. Samples: 338696800. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 20:14:23,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:14:23,322][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020665_338575360.pth... 
+[2024-06-10 20:14:23,384][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020024_328073216.pth +[2024-06-10 20:14:24,716][46990] Updated weights for policy 0, policy_version 20670 (0.0033) +[2024-06-10 20:14:28,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 338788352. Throughput: 0: 44114.1. Samples: 338958800. Policy #0 lag: (min: 0.0, avg: 12.2, max: 22.0) +[2024-06-10 20:14:28,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:14:29,156][46990] Updated weights for policy 0, policy_version 20680 (0.0037) +[2024-06-10 20:14:32,118][46990] Updated weights for policy 0, policy_version 20690 (0.0038) +[2024-06-10 20:14:33,240][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.7, 300 sec: 43820.9). Total num frames: 339017728. Throughput: 0: 43938.2. Samples: 339086260. Policy #0 lag: (min: 0.0, avg: 12.2, max: 22.0) +[2024-06-10 20:14:33,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:14:36,388][46990] Updated weights for policy 0, policy_version 20700 (0.0044) +[2024-06-10 20:14:38,240][46753] Fps is (10 sec: 44237.0, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 339230720. Throughput: 0: 43907.0. Samples: 339352620. Policy #0 lag: (min: 0.0, avg: 12.2, max: 22.0) +[2024-06-10 20:14:38,240][46753] Avg episode reward: [(0, '0.252')] +[2024-06-10 20:14:39,554][46990] Updated weights for policy 0, policy_version 20710 (0.0034) +[2024-06-10 20:14:43,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 339443712. Throughput: 0: 43882.2. Samples: 339609440. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:14:43,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:14:43,877][46990] Updated weights for policy 0, policy_version 20720 (0.0041) +[2024-06-10 20:14:47,240][46990] Updated weights for policy 0, policy_version 20730 (0.0035) +[2024-06-10 20:14:48,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43963.8, 300 sec: 43820.2). Total num frames: 339689472. Throughput: 0: 44152.4. Samples: 339746200. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:14:48,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:14:51,587][46990] Updated weights for policy 0, policy_version 20740 (0.0027) +[2024-06-10 20:14:53,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 339869696. Throughput: 0: 44101.4. Samples: 340014120. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:14:53,248][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:14:54,439][46990] Updated weights for policy 0, policy_version 20750 (0.0024) +[2024-06-10 20:14:58,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 340099072. Throughput: 0: 44082.2. Samples: 340276140. Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0) +[2024-06-10 20:14:58,240][46753] Avg episode reward: [(0, '0.239')] +[2024-06-10 20:14:59,001][46990] Updated weights for policy 0, policy_version 20760 (0.0035) +[2024-06-10 20:15:01,931][46990] Updated weights for policy 0, policy_version 20770 (0.0036) +[2024-06-10 20:15:03,239][46753] Fps is (10 sec: 47513.3, 60 sec: 44236.7, 300 sec: 43820.9). Total num frames: 340344832. Throughput: 0: 43996.9. Samples: 340405040. 
Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0) +[2024-06-10 20:15:03,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:15:06,168][46990] Updated weights for policy 0, policy_version 20780 (0.0036) +[2024-06-10 20:15:08,239][46753] Fps is (10 sec: 45875.1, 60 sec: 44236.8, 300 sec: 43876.1). Total num frames: 340557824. Throughput: 0: 43873.4. Samples: 340671100. Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0) +[2024-06-10 20:15:08,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:15:09,159][46990] Updated weights for policy 0, policy_version 20790 (0.0037) +[2024-06-10 20:15:13,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.6, 300 sec: 43764.8). Total num frames: 340754432. Throughput: 0: 43898.7. Samples: 340934240. Policy #0 lag: (min: 0.0, avg: 10.9, max: 24.0) +[2024-06-10 20:15:13,240][46753] Avg episode reward: [(0, '0.249')] +[2024-06-10 20:15:13,497][46990] Updated weights for policy 0, policy_version 20800 (0.0045) +[2024-06-10 20:15:16,802][46990] Updated weights for policy 0, policy_version 20810 (0.0035) +[2024-06-10 20:15:18,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.8, 300 sec: 43931.3). Total num frames: 341016576. Throughput: 0: 43958.3. Samples: 341064380. Policy #0 lag: (min: 0.0, avg: 10.9, max: 24.0) +[2024-06-10 20:15:18,240][46753] Avg episode reward: [(0, '0.241')] +[2024-06-10 20:15:21,150][46990] Updated weights for policy 0, policy_version 20820 (0.0025) +[2024-06-10 20:15:23,239][46753] Fps is (10 sec: 45875.8, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 341213184. Throughput: 0: 44073.4. Samples: 341335920. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:15:23,240][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:15:23,624][46970] Signal inference workers to stop experience collection... (4800 times) +[2024-06-10 20:15:23,625][46970] Signal inference workers to resume experience collection... (4800 times) +[2024-06-10 20:15:23,647][46990] InferenceWorker_p0-w0: stopping experience collection (4800 times) +[2024-06-10 20:15:23,647][46990] InferenceWorker_p0-w0: resuming experience collection (4800 times) +[2024-06-10 20:15:24,056][46990] Updated weights for policy 0, policy_version 20830 (0.0034) +[2024-06-10 20:15:28,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43690.8, 300 sec: 43820.2). Total num frames: 341409792. Throughput: 0: 44009.8. Samples: 341589880. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:15:28,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:15:28,791][46990] Updated weights for policy 0, policy_version 20840 (0.0034) +[2024-06-10 20:15:31,724][46990] Updated weights for policy 0, policy_version 20850 (0.0034) +[2024-06-10 20:15:33,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 341655552. Throughput: 0: 43831.0. Samples: 341718600. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:15:33,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:15:36,121][46990] Updated weights for policy 0, policy_version 20860 (0.0037) +[2024-06-10 20:15:38,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.9, 300 sec: 43876.5). Total num frames: 341868544. Throughput: 0: 43823.2. Samples: 341986160. 
Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 20:15:38,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:15:39,105][46990] Updated weights for policy 0, policy_version 20870 (0.0045) +[2024-06-10 20:15:43,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 342081536. Throughput: 0: 43739.1. Samples: 342244400. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 20:15:43,240][46753] Avg episode reward: [(0, '0.252')] +[2024-06-10 20:15:43,334][46990] Updated weights for policy 0, policy_version 20880 (0.0032) +[2024-06-10 20:15:46,839][46990] Updated weights for policy 0, policy_version 20890 (0.0047) +[2024-06-10 20:15:48,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 342310912. Throughput: 0: 43813.9. Samples: 342376660. Policy #0 lag: (min: 0.0, avg: 8.8, max: 22.0) +[2024-06-10 20:15:48,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:15:51,058][46990] Updated weights for policy 0, policy_version 20900 (0.0030) +[2024-06-10 20:15:53,239][46753] Fps is (10 sec: 44236.9, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 342523904. Throughput: 0: 43935.6. Samples: 342648200. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 20:15:53,240][46753] Avg episode reward: [(0, '0.245')] +[2024-06-10 20:15:54,133][46990] Updated weights for policy 0, policy_version 20910 (0.0028) +[2024-06-10 20:15:58,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 342720512. Throughput: 0: 43739.7. Samples: 342902520. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 20:15:58,244][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:15:58,938][46990] Updated weights for policy 0, policy_version 20920 (0.0035) +[2024-06-10 20:16:01,536][46990] Updated weights for policy 0, policy_version 20930 (0.0035) +[2024-06-10 20:16:03,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 342966272. Throughput: 0: 43652.9. Samples: 343028760. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 20:16:03,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:16:06,085][46990] Updated weights for policy 0, policy_version 20940 (0.0050) +[2024-06-10 20:16:08,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 343179264. Throughput: 0: 43721.8. Samples: 343303400. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 20:16:08,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:16:08,902][46990] Updated weights for policy 0, policy_version 20950 (0.0032) +[2024-06-10 20:16:13,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43963.9, 300 sec: 43931.3). Total num frames: 343392256. Throughput: 0: 43939.6. Samples: 343567160. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 20:16:13,248][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:16:13,333][46990] Updated weights for policy 0, policy_version 20960 (0.0042) +[2024-06-10 20:16:16,522][46990] Updated weights for policy 0, policy_version 20970 (0.0030) +[2024-06-10 20:16:18,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.7, 300 sec: 43875.8). Total num frames: 343621632. Throughput: 0: 43880.6. Samples: 343693220. 
Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 20:16:18,248][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:16:21,144][46990] Updated weights for policy 0, policy_version 20980 (0.0042) +[2024-06-10 20:16:23,239][46753] Fps is (10 sec: 47513.2, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 343867392. Throughput: 0: 44051.4. Samples: 343968480. Policy #0 lag: (min: 0.0, avg: 8.3, max: 22.0) +[2024-06-10 20:16:23,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:16:23,263][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020988_343867392.pth... +[2024-06-10 20:16:23,328][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020344_333316096.pth +[2024-06-10 20:16:23,883][46990] Updated weights for policy 0, policy_version 20990 (0.0022) +[2024-06-10 20:16:28,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 344047616. Throughput: 0: 43985.8. Samples: 344223760. Policy #0 lag: (min: 0.0, avg: 8.3, max: 22.0) +[2024-06-10 20:16:28,240][46753] Avg episode reward: [(0, '0.252')] +[2024-06-10 20:16:28,958][46990] Updated weights for policy 0, policy_version 21000 (0.0033) +[2024-06-10 20:16:31,335][46990] Updated weights for policy 0, policy_version 21010 (0.0034) +[2024-06-10 20:16:33,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43963.7, 300 sec: 43986.9). Total num frames: 344293376. Throughput: 0: 43924.7. Samples: 344353280. Policy #0 lag: (min: 0.0, avg: 8.3, max: 22.0) +[2024-06-10 20:16:33,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:16:36,131][46990] Updated weights for policy 0, policy_version 21020 (0.0037) +[2024-06-10 20:16:38,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 344489984. Throughput: 0: 43659.4. Samples: 344612880. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:16:38,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:16:38,823][46990] Updated weights for policy 0, policy_version 21030 (0.0033) +[2024-06-10 20:16:43,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 344686592. Throughput: 0: 44084.8. Samples: 344886340. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:16:43,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:16:43,707][46990] Updated weights for policy 0, policy_version 21040 (0.0038) +[2024-06-10 20:16:45,903][46970] Signal inference workers to stop experience collection... (4850 times) +[2024-06-10 20:16:45,903][46970] Signal inference workers to resume experience collection... (4850 times) +[2024-06-10 20:16:45,943][46990] InferenceWorker_p0-w0: stopping experience collection (4850 times) +[2024-06-10 20:16:45,944][46990] InferenceWorker_p0-w0: resuming experience collection (4850 times) +[2024-06-10 20:16:46,268][46990] Updated weights for policy 0, policy_version 21050 (0.0034) +[2024-06-10 20:16:48,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 344948736. Throughput: 0: 44039.6. Samples: 345010540. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:16:48,240][46753] Avg episode reward: [(0, '0.240')] +[2024-06-10 20:16:50,975][46990] Updated weights for policy 0, policy_version 21060 (0.0048) +[2024-06-10 20:16:53,239][46753] Fps is (10 sec: 49151.9, 60 sec: 44236.7, 300 sec: 43875.8). Total num frames: 345178112. Throughput: 0: 44007.9. Samples: 345283760. 
Policy #0 lag: (min: 0.0, avg: 7.6, max: 22.0) +[2024-06-10 20:16:53,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:16:53,838][46990] Updated weights for policy 0, policy_version 21070 (0.0037) +[2024-06-10 20:16:58,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 345358336. Throughput: 0: 43876.9. Samples: 345541620. Policy #0 lag: (min: 0.0, avg: 7.6, max: 22.0) +[2024-06-10 20:16:58,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:16:58,852][46990] Updated weights for policy 0, policy_version 21080 (0.0025) +[2024-06-10 20:17:01,330][46990] Updated weights for policy 0, policy_version 21090 (0.0034) +[2024-06-10 20:17:03,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.6, 300 sec: 43876.1). Total num frames: 345587712. Throughput: 0: 43863.8. Samples: 345667100. Policy #0 lag: (min: 0.0, avg: 7.6, max: 22.0) +[2024-06-10 20:17:03,240][46753] Avg episode reward: [(0, '0.241')] +[2024-06-10 20:17:06,020][46990] Updated weights for policy 0, policy_version 21100 (0.0037) +[2024-06-10 20:17:08,240][46753] Fps is (10 sec: 45872.9, 60 sec: 43963.4, 300 sec: 43875.7). Total num frames: 345817088. Throughput: 0: 43603.6. Samples: 345930660. Policy #0 lag: (min: 0.0, avg: 7.8, max: 21.0) +[2024-06-10 20:17:08,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:17:08,688][46990] Updated weights for policy 0, policy_version 21110 (0.0037) +[2024-06-10 20:17:13,239][46753] Fps is (10 sec: 40960.9, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 345997312. Throughput: 0: 44000.9. Samples: 346203800. Policy #0 lag: (min: 0.0, avg: 7.8, max: 21.0) +[2024-06-10 20:17:13,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:17:13,558][46990] Updated weights for policy 0, policy_version 21120 (0.0047) +[2024-06-10 20:17:16,130][46990] Updated weights for policy 0, policy_version 21130 (0.0040) +[2024-06-10 20:17:18,239][46753] Fps is (10 sec: 45877.8, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 346275840. Throughput: 0: 43773.6. Samples: 346323080. Policy #0 lag: (min: 0.0, avg: 7.8, max: 21.0) +[2024-06-10 20:17:18,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:17:20,923][46990] Updated weights for policy 0, policy_version 21140 (0.0024) +[2024-06-10 20:17:23,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 346472448. Throughput: 0: 44048.2. Samples: 346595040. Policy #0 lag: (min: 0.0, avg: 7.6, max: 23.0) +[2024-06-10 20:17:23,240][46753] Avg episode reward: [(0, '0.241')] +[2024-06-10 20:17:23,676][46990] Updated weights for policy 0, policy_version 21150 (0.0033) +[2024-06-10 20:17:28,239][46753] Fps is (10 sec: 37682.8, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 346652672. Throughput: 0: 43753.8. Samples: 346855260. Policy #0 lag: (min: 0.0, avg: 7.6, max: 23.0) +[2024-06-10 20:17:28,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:17:28,478][46990] Updated weights for policy 0, policy_version 21160 (0.0033) +[2024-06-10 20:17:31,272][46990] Updated weights for policy 0, policy_version 21170 (0.0034) +[2024-06-10 20:17:33,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 346898432. Throughput: 0: 43850.1. Samples: 346983800. Policy #0 lag: (min: 0.0, avg: 7.6, max: 23.0) +[2024-06-10 20:17:33,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:17:33,247][46970] Saving new best policy, reward=0.263! 
+[2024-06-10 20:17:35,799][46990] Updated weights for policy 0, policy_version 21180 (0.0025) +[2024-06-10 20:17:38,240][46753] Fps is (10 sec: 49149.6, 60 sec: 44236.6, 300 sec: 43875.7). Total num frames: 347144192. Throughput: 0: 43539.6. Samples: 347243060. Policy #0 lag: (min: 0.0, avg: 8.4, max: 22.0) +[2024-06-10 20:17:38,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:17:38,710][46990] Updated weights for policy 0, policy_version 21190 (0.0042) +[2024-06-10 20:17:43,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 347324416. Throughput: 0: 43682.3. Samples: 347507320. Policy #0 lag: (min: 0.0, avg: 8.4, max: 22.0) +[2024-06-10 20:17:43,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:17:43,257][46990] Updated weights for policy 0, policy_version 21200 (0.0048) +[2024-06-10 20:17:46,353][46990] Updated weights for policy 0, policy_version 21210 (0.0027) +[2024-06-10 20:17:48,239][46753] Fps is (10 sec: 40962.2, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 347553792. Throughput: 0: 43594.4. Samples: 347628840. Policy #0 lag: (min: 0.0, avg: 8.4, max: 22.0) +[2024-06-10 20:17:48,240][46753] Avg episode reward: [(0, '0.249')] +[2024-06-10 20:17:50,943][46990] Updated weights for policy 0, policy_version 21220 (0.0030) +[2024-06-10 20:17:52,347][46970] Signal inference workers to stop experience collection... (4900 times) +[2024-06-10 20:17:52,348][46970] Signal inference workers to resume experience collection... (4900 times) +[2024-06-10 20:17:52,387][46990] InferenceWorker_p0-w0: stopping experience collection (4900 times) +[2024-06-10 20:17:52,387][46990] InferenceWorker_p0-w0: resuming experience collection (4900 times) +[2024-06-10 20:17:53,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43417.7, 300 sec: 43875.8). Total num frames: 347783168. Throughput: 0: 43801.4. Samples: 347901700. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 20:17:53,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:17:53,734][46990] Updated weights for policy 0, policy_version 21230 (0.0047) +[2024-06-10 20:17:58,223][46990] Updated weights for policy 0, policy_version 21240 (0.0027) +[2024-06-10 20:17:58,240][46753] Fps is (10 sec: 44235.5, 60 sec: 43963.5, 300 sec: 43931.3). Total num frames: 347996160. Throughput: 0: 43547.7. Samples: 348163460. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 20:17:58,244][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:18:01,193][46990] Updated weights for policy 0, policy_version 21250 (0.0039) +[2024-06-10 20:18:03,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 348209152. Throughput: 0: 43724.3. Samples: 348290680. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 20:18:03,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:18:05,611][46990] Updated weights for policy 0, policy_version 21260 (0.0043) +[2024-06-10 20:18:08,239][46753] Fps is (10 sec: 45876.3, 60 sec: 43964.1, 300 sec: 43931.4). Total num frames: 348454912. Throughput: 0: 43568.8. Samples: 348555640. Policy #0 lag: (min: 1.0, avg: 11.2, max: 22.0) +[2024-06-10 20:18:08,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:18:08,945][46990] Updated weights for policy 0, policy_version 21270 (0.0036) +[2024-06-10 20:18:12,967][46990] Updated weights for policy 0, policy_version 21280 (0.0033) +[2024-06-10 20:18:13,239][46753] Fps is (10 sec: 44237.2, 60 sec: 44236.8, 300 sec: 43876.5). 
Total num frames: 348651520. Throughput: 0: 43564.4. Samples: 348815660. Policy #0 lag: (min: 1.0, avg: 11.2, max: 22.0) +[2024-06-10 20:18:13,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:18:16,296][46990] Updated weights for policy 0, policy_version 21290 (0.0040) +[2024-06-10 20:18:18,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43144.5, 300 sec: 43875.8). Total num frames: 348864512. Throughput: 0: 43541.0. Samples: 348943140. Policy #0 lag: (min: 1.0, avg: 11.2, max: 22.0) +[2024-06-10 20:18:18,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:18:20,504][46990] Updated weights for policy 0, policy_version 21300 (0.0030) +[2024-06-10 20:18:23,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 349093888. Throughput: 0: 43800.9. Samples: 349214080. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 20:18:23,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:18:23,292][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000021308_349110272.pth... +[2024-06-10 20:18:23,367][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020665_338575360.pth +[2024-06-10 20:18:23,702][46990] Updated weights for policy 0, policy_version 21310 (0.0032) +[2024-06-10 20:18:28,090][46990] Updated weights for policy 0, policy_version 21320 (0.0028) +[2024-06-10 20:18:28,239][46753] Fps is (10 sec: 44236.6, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 349306880. Throughput: 0: 43771.1. Samples: 349477020. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 20:18:28,240][46753] Avg episode reward: [(0, '0.243')] +[2024-06-10 20:18:31,284][46990] Updated weights for policy 0, policy_version 21330 (0.0039) +[2024-06-10 20:18:33,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 349519872. Throughput: 0: 43907.9. Samples: 349604700. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 20:18:33,249][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:18:33,267][46970] Saving new best policy, reward=0.267! +[2024-06-10 20:18:35,356][46990] Updated weights for policy 0, policy_version 21340 (0.0034) +[2024-06-10 20:18:38,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43417.9, 300 sec: 43875.8). Total num frames: 349749248. Throughput: 0: 43628.3. Samples: 349864980. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:18:38,240][46753] Avg episode reward: [(0, '0.245')] +[2024-06-10 20:18:38,750][46990] Updated weights for policy 0, policy_version 21350 (0.0040) +[2024-06-10 20:18:42,674][46990] Updated weights for policy 0, policy_version 21360 (0.0034) +[2024-06-10 20:18:43,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 349962240. Throughput: 0: 43555.4. Samples: 350123440. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:18:43,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:18:46,290][46990] Updated weights for policy 0, policy_version 21370 (0.0043) +[2024-06-10 20:18:48,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 350175232. Throughput: 0: 43698.2. Samples: 350257100. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:18:48,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:18:50,536][46990] Updated weights for policy 0, policy_version 21380 (0.0038) +[2024-06-10 20:18:53,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43690.5, 300 sec: 43820.2). Total num frames: 350404608. Throughput: 0: 43742.1. Samples: 350524040. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 20:18:53,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:18:53,924][46990] Updated weights for policy 0, policy_version 21390 (0.0025) +[2024-06-10 20:18:58,006][46990] Updated weights for policy 0, policy_version 21400 (0.0033) +[2024-06-10 20:18:58,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.8, 300 sec: 43820.2). Total num frames: 350617600. Throughput: 0: 43823.0. Samples: 350787700. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 20:18:58,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:19:01,252][46990] Updated weights for policy 0, policy_version 21410 (0.0043) +[2024-06-10 20:19:03,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 350830592. Throughput: 0: 43858.9. Samples: 350916800. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 20:19:03,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:19:04,570][46970] Signal inference workers to stop experience collection... (4950 times) +[2024-06-10 20:19:04,572][46970] Signal inference workers to resume experience collection... (4950 times) +[2024-06-10 20:19:04,618][46990] InferenceWorker_p0-w0: stopping experience collection (4950 times) +[2024-06-10 20:19:04,619][46990] InferenceWorker_p0-w0: resuming experience collection (4950 times) +[2024-06-10 20:19:05,613][46990] Updated weights for policy 0, policy_version 21420 (0.0031) +[2024-06-10 20:19:08,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 351059968. Throughput: 0: 43642.8. Samples: 351178000. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:19:08,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:19:08,883][46990] Updated weights for policy 0, policy_version 21430 (0.0028) +[2024-06-10 20:19:12,838][46990] Updated weights for policy 0, policy_version 21440 (0.0041) +[2024-06-10 20:19:13,240][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 351272960. Throughput: 0: 43551.9. Samples: 351436860. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:19:13,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:19:16,256][46990] Updated weights for policy 0, policy_version 21450 (0.0034) +[2024-06-10 20:19:18,240][46753] Fps is (10 sec: 42597.5, 60 sec: 43690.5, 300 sec: 43820.2). Total num frames: 351485952. Throughput: 0: 43660.4. Samples: 351569420. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:19:18,240][46753] Avg episode reward: [(0, '0.246')] +[2024-06-10 20:19:20,667][46990] Updated weights for policy 0, policy_version 21460 (0.0032) +[2024-06-10 20:19:23,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 351715328. Throughput: 0: 43797.7. Samples: 351835880. 
Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 20:19:23,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:19:23,788][46990] Updated weights for policy 0, policy_version 21470 (0.0053) +[2024-06-10 20:19:28,220][46990] Updated weights for policy 0, policy_version 21480 (0.0040) +[2024-06-10 20:19:28,240][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 351928320. Throughput: 0: 43896.3. Samples: 352098780. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 20:19:28,240][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:19:31,329][46990] Updated weights for policy 0, policy_version 21490 (0.0048) +[2024-06-10 20:19:33,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 352141312. Throughput: 0: 43634.3. Samples: 352220640. Policy #0 lag: (min: 0.0, avg: 9.1, max: 21.0) +[2024-06-10 20:19:33,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:19:35,599][46990] Updated weights for policy 0, policy_version 21500 (0.0046) +[2024-06-10 20:19:38,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 352370688. Throughput: 0: 43568.6. Samples: 352484620. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 20:19:38,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:19:38,783][46990] Updated weights for policy 0, policy_version 21510 (0.0029) +[2024-06-10 20:19:43,221][46990] Updated weights for policy 0, policy_version 21520 (0.0035) +[2024-06-10 20:19:43,244][46753] Fps is (10 sec: 44217.3, 60 sec: 43687.4, 300 sec: 43708.5). Total num frames: 352583680. Throughput: 0: 43552.7. Samples: 352747760. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 20:19:43,244][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:19:46,429][46990] Updated weights for policy 0, policy_version 21530 (0.0033) +[2024-06-10 20:19:48,244][46753] Fps is (10 sec: 42579.2, 60 sec: 43687.5, 300 sec: 43819.6). Total num frames: 352796672. Throughput: 0: 43590.5. Samples: 352878560. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 20:19:48,244][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:19:50,574][46990] Updated weights for policy 0, policy_version 21540 (0.0045) +[2024-06-10 20:19:53,239][46753] Fps is (10 sec: 45895.5, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 353042432. Throughput: 0: 43670.6. Samples: 353143180. Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 20:19:53,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:19:53,724][46990] Updated weights for policy 0, policy_version 21550 (0.0040) +[2024-06-10 20:19:57,879][46990] Updated weights for policy 0, policy_version 21560 (0.0037) +[2024-06-10 20:19:58,239][46753] Fps is (10 sec: 44256.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 353239040. Throughput: 0: 43702.7. Samples: 353403480. Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 20:19:58,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:19:58,240][46970] Saving new best policy, reward=0.269! +[2024-06-10 20:20:01,316][46990] Updated weights for policy 0, policy_version 21570 (0.0031) +[2024-06-10 20:20:03,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 353452032. Throughput: 0: 43648.6. Samples: 353533600. 
Policy #0 lag: (min: 0.0, avg: 9.0, max: 21.0) +[2024-06-10 20:20:03,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:20:05,594][46990] Updated weights for policy 0, policy_version 21580 (0.0034) +[2024-06-10 20:20:08,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 353681408. Throughput: 0: 43599.3. Samples: 353797840. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 20:20:08,240][46753] Avg episode reward: [(0, '0.244')] +[2024-06-10 20:20:08,758][46990] Updated weights for policy 0, policy_version 21590 (0.0027) +[2024-06-10 20:20:12,922][46990] Updated weights for policy 0, policy_version 21600 (0.0024) +[2024-06-10 20:20:13,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 353894400. Throughput: 0: 43545.4. Samples: 354058320. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 20:20:13,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:20:16,488][46990] Updated weights for policy 0, policy_version 21610 (0.0036) +[2024-06-10 20:20:18,239][46753] Fps is (10 sec: 44235.9, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 354123776. Throughput: 0: 43861.3. Samples: 354194400. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 20:20:18,240][46753] Avg episode reward: [(0, '0.249')] +[2024-06-10 20:20:20,626][46990] Updated weights for policy 0, policy_version 21620 (0.0040) +[2024-06-10 20:20:23,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 354336768. Throughput: 0: 43708.9. Samples: 354451520. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:20:23,240][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:20:23,347][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000021628_354353152.pth... +[2024-06-10 20:20:23,400][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000020988_343867392.pth +[2024-06-10 20:20:23,705][46990] Updated weights for policy 0, policy_version 21630 (0.0036) +[2024-06-10 20:20:26,609][46970] Signal inference workers to stop experience collection... (5000 times) +[2024-06-10 20:20:26,612][46970] Signal inference workers to resume experience collection... (5000 times) +[2024-06-10 20:20:26,650][46990] InferenceWorker_p0-w0: stopping experience collection (5000 times) +[2024-06-10 20:20:26,650][46990] InferenceWorker_p0-w0: resuming experience collection (5000 times) +[2024-06-10 20:20:28,100][46990] Updated weights for policy 0, policy_version 21640 (0.0043) +[2024-06-10 20:20:28,240][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 354549760. Throughput: 0: 43663.3. Samples: 354712420. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:20:28,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:20:31,647][46990] Updated weights for policy 0, policy_version 21650 (0.0043) +[2024-06-10 20:20:33,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 354762752. Throughput: 0: 43566.6. Samples: 354838860. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:20:33,240][46753] Avg episode reward: [(0, '0.252')] +[2024-06-10 20:20:35,322][46990] Updated weights for policy 0, policy_version 21660 (0.0045) +[2024-06-10 20:20:38,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 354975744. Throughput: 0: 43562.7. Samples: 355103500. 
Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 20:20:38,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:20:39,163][46990] Updated weights for policy 0, policy_version 21670 (0.0028) +[2024-06-10 20:20:42,994][46990] Updated weights for policy 0, policy_version 21680 (0.0031) +[2024-06-10 20:20:43,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43693.9, 300 sec: 43709.2). Total num frames: 355205120. Throughput: 0: 43634.7. Samples: 355367040. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 20:20:43,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:20:46,598][46990] Updated weights for policy 0, policy_version 21690 (0.0037) +[2024-06-10 20:20:48,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43693.9, 300 sec: 43709.2). Total num frames: 355418112. Throughput: 0: 43802.2. Samples: 355504700. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 20:20:48,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:20:50,711][46990] Updated weights for policy 0, policy_version 21700 (0.0022) +[2024-06-10 20:20:53,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43144.5, 300 sec: 43764.7). Total num frames: 355631104. Throughput: 0: 43520.7. Samples: 355756280. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:20:53,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:20:53,937][46990] Updated weights for policy 0, policy_version 21710 (0.0038) +[2024-06-10 20:20:57,968][46990] Updated weights for policy 0, policy_version 21720 (0.0029) +[2024-06-10 20:20:58,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 355860480. Throughput: 0: 43590.6. Samples: 356019900. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:20:58,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:21:01,547][46990] Updated weights for policy 0, policy_version 21730 (0.0036) +[2024-06-10 20:21:03,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 356073472. Throughput: 0: 43506.7. Samples: 356152200. Policy #0 lag: (min: 0.0, avg: 9.6, max: 21.0) +[2024-06-10 20:21:03,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:21:05,231][46990] Updated weights for policy 0, policy_version 21740 (0.0039) +[2024-06-10 20:21:08,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.4, 300 sec: 43709.2). Total num frames: 356286464. Throughput: 0: 43575.8. Samples: 356412440. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:21:08,240][46753] Avg episode reward: [(0, '0.245')] +[2024-06-10 20:21:09,223][46990] Updated weights for policy 0, policy_version 21750 (0.0031) +[2024-06-10 20:21:12,999][46990] Updated weights for policy 0, policy_version 21760 (0.0031) +[2024-06-10 20:21:13,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 356515840. Throughput: 0: 43560.5. Samples: 356672640. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:21:13,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:21:16,894][46990] Updated weights for policy 0, policy_version 21770 (0.0035) +[2024-06-10 20:21:18,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 356728832. Throughput: 0: 43838.6. Samples: 356811600. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:21:18,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:21:20,664][46990] Updated weights for policy 0, policy_version 21780 (0.0028) +[2024-06-10 20:21:23,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 356941824. Throughput: 0: 43656.0. Samples: 357068020. Policy #0 lag: (min: 1.0, avg: 10.0, max: 20.0) +[2024-06-10 20:21:23,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:21:24,114][46990] Updated weights for policy 0, policy_version 21790 (0.0035) +[2024-06-10 20:21:27,941][46990] Updated weights for policy 0, policy_version 21800 (0.0029) +[2024-06-10 20:21:28,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 357171200. Throughput: 0: 43580.0. Samples: 357328140. Policy #0 lag: (min: 1.0, avg: 10.0, max: 20.0) +[2024-06-10 20:21:28,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:21:31,896][46990] Updated weights for policy 0, policy_version 21810 (0.0048) +[2024-06-10 20:21:33,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 357384192. Throughput: 0: 43586.3. Samples: 357466080. Policy #0 lag: (min: 1.0, avg: 10.0, max: 20.0) +[2024-06-10 20:21:33,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:21:33,339][46970] Saving new best policy, reward=0.271! +[2024-06-10 20:21:35,219][46990] Updated weights for policy 0, policy_version 21820 (0.0038) +[2024-06-10 20:21:38,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 357597184. Throughput: 0: 43746.1. Samples: 357724860. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:21:38,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:21:39,447][46990] Updated weights for policy 0, policy_version 21830 (0.0027) +[2024-06-10 20:21:42,790][46990] Updated weights for policy 0, policy_version 21840 (0.0036) +[2024-06-10 20:21:43,240][46753] Fps is (10 sec: 45874.3, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 357842944. Throughput: 0: 43644.8. Samples: 357983920. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:21:43,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:21:46,954][46990] Updated weights for policy 0, policy_version 21850 (0.0041) +[2024-06-10 20:21:48,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 358039552. Throughput: 0: 43793.3. Samples: 358122900. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:21:48,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:21:50,388][46990] Updated weights for policy 0, policy_version 21860 (0.0044) +[2024-06-10 20:21:53,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 358268928. Throughput: 0: 43763.7. Samples: 358381800. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 20:21:53,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:21:54,166][46990] Updated weights for policy 0, policy_version 21870 (0.0029) +[2024-06-10 20:21:57,873][46990] Updated weights for policy 0, policy_version 21880 (0.0035) +[2024-06-10 20:21:58,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43963.8, 300 sec: 43764.8). Total num frames: 358498304. Throughput: 0: 43688.6. Samples: 358638620. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 20:21:58,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:22:01,894][46990] Updated weights for policy 0, policy_version 21890 (0.0025) +[2024-06-10 20:22:03,240][46753] Fps is (10 sec: 42597.2, 60 sec: 43690.5, 300 sec: 43653.7). Total num frames: 358694912. Throughput: 0: 43751.8. Samples: 358780440. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 20:22:03,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:22:05,133][46990] Updated weights for policy 0, policy_version 21900 (0.0036) +[2024-06-10 20:22:08,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 358907904. Throughput: 0: 43795.9. Samples: 359038840. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 20:22:08,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:22:09,863][46990] Updated weights for policy 0, policy_version 21910 (0.0032) +[2024-06-10 20:22:12,758][46990] Updated weights for policy 0, policy_version 21920 (0.0036) +[2024-06-10 20:22:13,239][46753] Fps is (10 sec: 45876.5, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 359153664. Throughput: 0: 43689.4. Samples: 359294160. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 20:22:13,240][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:22:17,311][46990] Updated weights for policy 0, policy_version 21930 (0.0033) +[2024-06-10 20:22:18,243][46753] Fps is (10 sec: 45859.4, 60 sec: 43961.1, 300 sec: 43708.6). Total num frames: 359366656. Throughput: 0: 43733.9. Samples: 359434260. Policy #0 lag: (min: 0.0, avg: 9.6, max: 22.0) +[2024-06-10 20:22:18,243][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:22:20,542][46990] Updated weights for policy 0, policy_version 21940 (0.0044) +[2024-06-10 20:22:23,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 359563264. Throughput: 0: 43790.0. Samples: 359695400. Policy #0 lag: (min: 0.0, avg: 10.1, max: 19.0) +[2024-06-10 20:22:23,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:22:23,356][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000021947_359579648.pth... +[2024-06-10 20:22:23,404][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000021308_349110272.pth +[2024-06-10 20:22:24,819][46990] Updated weights for policy 0, policy_version 21950 (0.0050) +[2024-06-10 20:22:27,736][46970] Signal inference workers to stop experience collection... (5050 times) +[2024-06-10 20:22:27,760][46990] InferenceWorker_p0-w0: stopping experience collection (5050 times) +[2024-06-10 20:22:27,797][46970] Signal inference workers to resume experience collection... (5050 times) +[2024-06-10 20:22:27,797][46990] InferenceWorker_p0-w0: resuming experience collection (5050 times) +[2024-06-10 20:22:27,939][46990] Updated weights for policy 0, policy_version 21960 (0.0025) +[2024-06-10 20:22:28,239][46753] Fps is (10 sec: 45891.4, 60 sec: 44236.8, 300 sec: 43820.3). Total num frames: 359825408. Throughput: 0: 43663.2. Samples: 359948760. Policy #0 lag: (min: 0.0, avg: 10.1, max: 19.0) +[2024-06-10 20:22:28,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:22:32,235][46990] Updated weights for policy 0, policy_version 21970 (0.0038) +[2024-06-10 20:22:33,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.6, 300 sec: 43598.2). Total num frames: 360005632. Throughput: 0: 43603.6. Samples: 360085060. 
Policy #0 lag: (min: 0.0, avg: 10.1, max: 19.0) +[2024-06-10 20:22:33,241][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:22:35,162][46990] Updated weights for policy 0, policy_version 21980 (0.0021) +[2024-06-10 20:22:38,244][46753] Fps is (10 sec: 39304.2, 60 sec: 43687.5, 300 sec: 43708.5). Total num frames: 360218624. Throughput: 0: 43669.4. Samples: 360347120. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 20:22:38,244][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:22:39,787][46990] Updated weights for policy 0, policy_version 21990 (0.0028) +[2024-06-10 20:22:42,665][46990] Updated weights for policy 0, policy_version 22000 (0.0038) +[2024-06-10 20:22:43,240][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 360464384. Throughput: 0: 43620.7. Samples: 360601560. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 20:22:43,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:22:47,307][46990] Updated weights for policy 0, policy_version 22010 (0.0040) +[2024-06-10 20:22:48,240][46753] Fps is (10 sec: 44256.0, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 360660992. Throughput: 0: 43648.1. Samples: 360744600. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 20:22:48,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:22:50,302][46990] Updated weights for policy 0, policy_version 22020 (0.0031) +[2024-06-10 20:22:53,239][46753] Fps is (10 sec: 39322.5, 60 sec: 43144.6, 300 sec: 43598.2). Total num frames: 360857600. Throughput: 0: 43588.6. Samples: 361000320. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 20:22:53,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:22:54,655][46990] Updated weights for policy 0, policy_version 22030 (0.0033) +[2024-06-10 20:22:57,841][46990] Updated weights for policy 0, policy_version 22040 (0.0031) +[2024-06-10 20:22:58,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 361119744. Throughput: 0: 43516.8. Samples: 361252420. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 20:22:58,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:23:02,404][46990] Updated weights for policy 0, policy_version 22050 (0.0035) +[2024-06-10 20:23:03,240][46753] Fps is (10 sec: 45873.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 361316352. Throughput: 0: 43528.1. Samples: 361392880. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 20:23:03,240][46753] Avg episode reward: [(0, '0.248')] +[2024-06-10 20:23:05,116][46990] Updated weights for policy 0, policy_version 22060 (0.0047) +[2024-06-10 20:23:08,240][46753] Fps is (10 sec: 39321.3, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 361512960. Throughput: 0: 43458.5. Samples: 361651040. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 20:23:08,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:23:09,966][46990] Updated weights for policy 0, policy_version 22070 (0.0035) +[2024-06-10 20:23:12,736][46990] Updated weights for policy 0, policy_version 22080 (0.0024) +[2024-06-10 20:23:13,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 361775104. Throughput: 0: 43426.7. Samples: 361902960. 
Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 20:23:13,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:23:17,449][46990] Updated weights for policy 0, policy_version 22090 (0.0036) +[2024-06-10 20:23:18,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43147.1, 300 sec: 43598.1). Total num frames: 361955328. Throughput: 0: 43532.5. Samples: 362044020. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 20:23:18,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:23:20,322][46990] Updated weights for policy 0, policy_version 22100 (0.0027) +[2024-06-10 20:23:23,239][46753] Fps is (10 sec: 37683.0, 60 sec: 43144.5, 300 sec: 43542.6). Total num frames: 362151936. Throughput: 0: 43434.9. Samples: 362301500. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 20:23:23,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:23:24,917][46990] Updated weights for policy 0, policy_version 22110 (0.0046) +[2024-06-10 20:23:27,757][46990] Updated weights for policy 0, policy_version 22120 (0.0036) +[2024-06-10 20:23:28,239][46753] Fps is (10 sec: 47514.1, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 362430464. Throughput: 0: 43395.3. Samples: 362554340. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 20:23:28,240][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:23:32,613][46990] Updated weights for policy 0, policy_version 22130 (0.0035) +[2024-06-10 20:23:33,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 362610688. Throughput: 0: 43528.6. Samples: 362703380. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 20:23:33,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:23:34,682][46970] Signal inference workers to stop experience collection... (5100 times) +[2024-06-10 20:23:34,682][46970] Signal inference workers to resume experience collection... (5100 times) +[2024-06-10 20:23:34,705][46990] InferenceWorker_p0-w0: stopping experience collection (5100 times) +[2024-06-10 20:23:34,712][46990] InferenceWorker_p0-w0: resuming experience collection (5100 times) +[2024-06-10 20:23:34,973][46990] Updated weights for policy 0, policy_version 22140 (0.0035) +[2024-06-10 20:23:38,244][46753] Fps is (10 sec: 37665.8, 60 sec: 43144.5, 300 sec: 43541.9). Total num frames: 362807296. Throughput: 0: 43465.8. Samples: 362956480. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 20:23:38,245][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:23:40,248][46990] Updated weights for policy 0, policy_version 22150 (0.0047) +[2024-06-10 20:23:42,638][46990] Updated weights for policy 0, policy_version 22160 (0.0033) +[2024-06-10 20:23:43,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 363085824. Throughput: 0: 43505.8. Samples: 363210180. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 20:23:43,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:23:47,383][46990] Updated weights for policy 0, policy_version 22170 (0.0043) +[2024-06-10 20:23:48,239][46753] Fps is (10 sec: 45896.4, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 363266048. Throughput: 0: 43672.7. Samples: 363358140. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 20:23:48,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:23:50,050][46990] Updated weights for policy 0, policy_version 22180 (0.0026) +[2024-06-10 20:23:53,240][46753] Fps is (10 sec: 36044.7, 60 sec: 43144.4, 300 sec: 43487.0). Total num frames: 363446272. Throughput: 0: 43602.7. Samples: 363613160. Policy #0 lag: (min: 0.0, avg: 9.9, max: 24.0) +[2024-06-10 20:23:53,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:23:54,897][46990] Updated weights for policy 0, policy_version 22190 (0.0036) +[2024-06-10 20:23:57,641][46990] Updated weights for policy 0, policy_version 22200 (0.0038) +[2024-06-10 20:23:58,243][46753] Fps is (10 sec: 47497.6, 60 sec: 43688.3, 300 sec: 43764.3). Total num frames: 363741184. Throughput: 0: 43624.0. Samples: 363866180. Policy #0 lag: (min: 0.0, avg: 9.9, max: 24.0) +[2024-06-10 20:23:58,243][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:24:02,827][46990] Updated weights for policy 0, policy_version 22210 (0.0035) +[2024-06-10 20:24:03,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43144.7, 300 sec: 43542.6). Total num frames: 363905024. Throughput: 0: 43653.4. Samples: 364008420. Policy #0 lag: (min: 0.0, avg: 9.9, max: 24.0) +[2024-06-10 20:24:03,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:24:04,908][46990] Updated weights for policy 0, policy_version 22220 (0.0030) +[2024-06-10 20:24:08,239][46753] Fps is (10 sec: 36056.7, 60 sec: 43144.6, 300 sec: 43487.0). Total num frames: 364101632. Throughput: 0: 43585.4. Samples: 364262840. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 20:24:08,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:24:10,294][46990] Updated weights for policy 0, policy_version 22230 (0.0041) +[2024-06-10 20:24:12,472][46990] Updated weights for policy 0, policy_version 22240 (0.0030) +[2024-06-10 20:24:13,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 364396544. Throughput: 0: 43554.1. Samples: 364514280. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 20:24:13,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:24:17,549][46990] Updated weights for policy 0, policy_version 22250 (0.0037) +[2024-06-10 20:24:18,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 364576768. Throughput: 0: 43641.7. Samples: 364667260. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 20:24:18,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:24:19,990][46990] Updated weights for policy 0, policy_version 22260 (0.0028) +[2024-06-10 20:24:23,240][46753] Fps is (10 sec: 37682.7, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 364773376. Throughput: 0: 43680.2. Samples: 364921900. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 20:24:23,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:24:23,246][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000022264_364773376.pth... +[2024-06-10 20:24:23,304][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000021628_354353152.pth +[2024-06-10 20:24:23,307][46970] Saving new best policy, reward=0.272! 
+[2024-06-10 20:24:25,185][46990] Updated weights for policy 0, policy_version 22270 (0.0029) +[2024-06-10 20:24:27,366][46990] Updated weights for policy 0, policy_version 22280 (0.0035) +[2024-06-10 20:24:28,239][46753] Fps is (10 sec: 47514.1, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 365051904. Throughput: 0: 43621.4. Samples: 365173140. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 20:24:28,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:24:32,179][46970] Signal inference workers to stop experience collection... (5150 times) +[2024-06-10 20:24:32,180][46970] Signal inference workers to resume experience collection... (5150 times) +[2024-06-10 20:24:32,234][46990] InferenceWorker_p0-w0: stopping experience collection (5150 times) +[2024-06-10 20:24:32,234][46990] InferenceWorker_p0-w0: resuming experience collection (5150 times) +[2024-06-10 20:24:33,013][46990] Updated weights for policy 0, policy_version 22290 (0.0044) +[2024-06-10 20:24:33,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43417.6, 300 sec: 43542.5). Total num frames: 365215744. Throughput: 0: 43497.2. Samples: 365315520. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 20:24:33,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:24:34,814][46990] Updated weights for policy 0, policy_version 22300 (0.0022) +[2024-06-10 20:24:38,240][46753] Fps is (10 sec: 37682.6, 60 sec: 43693.9, 300 sec: 43543.2). Total num frames: 365428736. Throughput: 0: 43579.6. Samples: 365574240. Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 20:24:38,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:24:40,519][46990] Updated weights for policy 0, policy_version 22310 (0.0036) +[2024-06-10 20:24:42,432][46990] Updated weights for policy 0, policy_version 22320 (0.0030) +[2024-06-10 20:24:43,240][46753] Fps is (10 sec: 49151.6, 60 sec: 43690.6, 300 sec: 43765.4). Total num frames: 365707264. Throughput: 0: 43535.9. Samples: 365825160. Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 20:24:43,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:24:47,689][46990] Updated weights for policy 0, policy_version 22330 (0.0040) +[2024-06-10 20:24:48,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 365871104. Throughput: 0: 43656.0. Samples: 365972940. Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 20:24:48,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:24:50,034][46990] Updated weights for policy 0, policy_version 22340 (0.0028) +[2024-06-10 20:24:53,239][46753] Fps is (10 sec: 37683.9, 60 sec: 43963.8, 300 sec: 43542.6). Total num frames: 366084096. Throughput: 0: 43785.4. Samples: 366233180. Policy #0 lag: (min: 1.0, avg: 10.9, max: 24.0) +[2024-06-10 20:24:53,240][46753] Avg episode reward: [(0, '0.256')] +[2024-06-10 20:24:55,294][46990] Updated weights for policy 0, policy_version 22350 (0.0039) +[2024-06-10 20:24:57,585][46990] Updated weights for policy 0, policy_version 22360 (0.0040) +[2024-06-10 20:24:58,239][46753] Fps is (10 sec: 50790.1, 60 sec: 43966.1, 300 sec: 43820.3). Total num frames: 366379008. Throughput: 0: 43717.8. Samples: 366481580. Policy #0 lag: (min: 1.0, avg: 10.9, max: 24.0) +[2024-06-10 20:24:58,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:25:03,139][46990] Updated weights for policy 0, policy_version 22370 (0.0033) +[2024-06-10 20:25:03,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.6, 300 sec: 43487.0). 
Total num frames: 366510080. Throughput: 0: 43376.1. Samples: 366619180. Policy #0 lag: (min: 1.0, avg: 10.9, max: 24.0) +[2024-06-10 20:25:03,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:25:05,080][46990] Updated weights for policy 0, policy_version 22380 (0.0036) +[2024-06-10 20:25:08,239][46753] Fps is (10 sec: 34406.6, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 366723072. Throughput: 0: 43459.8. Samples: 366877580. Policy #0 lag: (min: 1.0, avg: 10.0, max: 19.0) +[2024-06-10 20:25:08,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:25:08,290][46970] Saving new best policy, reward=0.279! +[2024-06-10 20:25:10,609][46990] Updated weights for policy 0, policy_version 22390 (0.0037) +[2024-06-10 20:25:12,572][46990] Updated weights for policy 0, policy_version 22400 (0.0039) +[2024-06-10 20:25:13,240][46753] Fps is (10 sec: 52427.9, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 367034368. Throughput: 0: 43557.2. Samples: 367133220. Policy #0 lag: (min: 1.0, avg: 10.0, max: 19.0) +[2024-06-10 20:25:13,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:25:17,824][46990] Updated weights for policy 0, policy_version 22410 (0.0034) +[2024-06-10 20:25:18,240][46753] Fps is (10 sec: 45874.0, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 367181824. Throughput: 0: 43471.9. Samples: 367271760. Policy #0 lag: (min: 1.0, avg: 10.0, max: 19.0) +[2024-06-10 20:25:18,240][46753] Avg episode reward: [(0, '0.242')] +[2024-06-10 20:25:20,329][46990] Updated weights for policy 0, policy_version 22420 (0.0033) +[2024-06-10 20:25:23,240][46753] Fps is (10 sec: 37683.4, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 367411200. Throughput: 0: 43564.5. Samples: 367534640. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 20:25:23,242][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:25:25,456][46990] Updated weights for policy 0, policy_version 22430 (0.0033) +[2024-06-10 20:25:26,958][46970] Signal inference workers to stop experience collection... (5200 times) +[2024-06-10 20:25:26,958][46970] Signal inference workers to resume experience collection... (5200 times) +[2024-06-10 20:25:26,987][46990] InferenceWorker_p0-w0: stopping experience collection (5200 times) +[2024-06-10 20:25:26,987][46990] InferenceWorker_p0-w0: resuming experience collection (5200 times) +[2024-06-10 20:25:27,786][46990] Updated weights for policy 0, policy_version 22440 (0.0034) +[2024-06-10 20:25:28,239][46753] Fps is (10 sec: 50791.7, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 367689728. Throughput: 0: 43672.6. Samples: 367790420. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 20:25:28,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:25:33,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43144.6, 300 sec: 43487.0). Total num frames: 367804416. Throughput: 0: 43316.4. Samples: 367922180. Policy #0 lag: (min: 0.0, avg: 10.8, max: 24.0) +[2024-06-10 20:25:33,240][46753] Avg episode reward: [(0, '0.252')] +[2024-06-10 20:25:33,250][46990] Updated weights for policy 0, policy_version 22450 (0.0019) +[2024-06-10 20:25:35,281][46990] Updated weights for policy 0, policy_version 22460 (0.0029) +[2024-06-10 20:25:38,240][46753] Fps is (10 sec: 36044.2, 60 sec: 43690.7, 300 sec: 43542.5). Total num frames: 368050176. Throughput: 0: 43276.7. Samples: 368180640. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:25:38,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:25:40,738][46990] Updated weights for policy 0, policy_version 22470 (0.0023) +[2024-06-10 20:25:42,877][46990] Updated weights for policy 0, policy_version 22480 (0.0042) +[2024-06-10 20:25:43,239][46753] Fps is (10 sec: 52429.2, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 368328704. Throughput: 0: 43521.4. Samples: 368440040. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:25:43,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:25:47,971][46990] Updated weights for policy 0, policy_version 22490 (0.0033) +[2024-06-10 20:25:48,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 368476160. Throughput: 0: 43440.9. Samples: 368574020. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:25:48,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:25:50,663][46990] Updated weights for policy 0, policy_version 22500 (0.0037) +[2024-06-10 20:25:53,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 368721920. Throughput: 0: 43512.5. Samples: 368835640. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 20:25:53,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:25:55,353][46990] Updated weights for policy 0, policy_version 22510 (0.0029) +[2024-06-10 20:25:58,226][46990] Updated weights for policy 0, policy_version 22520 (0.0025) +[2024-06-10 20:25:58,244][46753] Fps is (10 sec: 49129.6, 60 sec: 43141.3, 300 sec: 43708.5). Total num frames: 368967680. Throughput: 0: 43685.5. Samples: 369099260. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 20:25:58,245][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:26:03,239][46753] Fps is (10 sec: 39321.4, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 369115136. Throughput: 0: 43225.6. Samples: 369216900. Policy #0 lag: (min: 0.0, avg: 10.8, max: 20.0) +[2024-06-10 20:26:03,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:26:03,346][46990] Updated weights for policy 0, policy_version 22530 (0.0033) +[2024-06-10 20:26:05,785][46990] Updated weights for policy 0, policy_version 22540 (0.0032) +[2024-06-10 20:26:08,239][46753] Fps is (10 sec: 40978.4, 60 sec: 44236.7, 300 sec: 43598.1). Total num frames: 369377280. Throughput: 0: 43282.3. Samples: 369482340. Policy #0 lag: (min: 1.0, avg: 10.6, max: 22.0) +[2024-06-10 20:26:08,240][46753] Avg episode reward: [(0, '0.247')] +[2024-06-10 20:26:10,959][46990] Updated weights for policy 0, policy_version 22550 (0.0031) +[2024-06-10 20:26:13,042][46990] Updated weights for policy 0, policy_version 22560 (0.0028) +[2024-06-10 20:26:13,240][46753] Fps is (10 sec: 50789.4, 60 sec: 43144.5, 300 sec: 43709.2). Total num frames: 369623040. Throughput: 0: 43548.7. Samples: 369750120. Policy #0 lag: (min: 1.0, avg: 10.6, max: 22.0) +[2024-06-10 20:26:13,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:26:13,787][46970] Signal inference workers to stop experience collection... (5250 times) +[2024-06-10 20:26:13,840][46990] InferenceWorker_p0-w0: stopping experience collection (5250 times) +[2024-06-10 20:26:13,843][46970] Signal inference workers to resume experience collection... 
(5250 times) +[2024-06-10 20:26:13,853][46990] InferenceWorker_p0-w0: resuming experience collection (5250 times) +[2024-06-10 20:26:18,181][46990] Updated weights for policy 0, policy_version 22570 (0.0045) +[2024-06-10 20:26:18,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43417.8, 300 sec: 43542.6). Total num frames: 369786880. Throughput: 0: 43490.3. Samples: 369879240. Policy #0 lag: (min: 1.0, avg: 10.6, max: 22.0) +[2024-06-10 20:26:18,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:26:20,791][46990] Updated weights for policy 0, policy_version 22580 (0.0029) +[2024-06-10 20:26:23,240][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 370049024. Throughput: 0: 43611.6. Samples: 370143160. Policy #0 lag: (min: 0.0, avg: 11.2, max: 24.0) +[2024-06-10 20:26:23,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:26:23,245][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000022586_370049024.pth... +[2024-06-10 20:26:23,305][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000021947_359579648.pth +[2024-06-10 20:26:25,761][46990] Updated weights for policy 0, policy_version 22590 (0.0035) +[2024-06-10 20:26:28,239][46753] Fps is (10 sec: 47514.0, 60 sec: 42871.5, 300 sec: 43653.6). Total num frames: 370262016. Throughput: 0: 43848.1. Samples: 370413200. Policy #0 lag: (min: 0.0, avg: 11.2, max: 24.0) +[2024-06-10 20:26:28,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:26:28,252][46990] Updated weights for policy 0, policy_version 22600 (0.0049) +[2024-06-10 20:26:33,239][46753] Fps is (10 sec: 37683.8, 60 sec: 43690.7, 300 sec: 43487.1). Total num frames: 370425856. Throughput: 0: 43593.3. Samples: 370535720. Policy #0 lag: (min: 0.0, avg: 11.2, max: 24.0) +[2024-06-10 20:26:33,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:26:33,576][46990] Updated weights for policy 0, policy_version 22610 (0.0047) +[2024-06-10 20:26:35,600][46990] Updated weights for policy 0, policy_version 22620 (0.0048) +[2024-06-10 20:26:38,244][46753] Fps is (10 sec: 44216.6, 60 sec: 44233.6, 300 sec: 43597.5). Total num frames: 370704384. Throughput: 0: 43610.3. Samples: 370798300. Policy #0 lag: (min: 0.0, avg: 12.8, max: 25.0) +[2024-06-10 20:26:38,244][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:26:40,678][46990] Updated weights for policy 0, policy_version 22630 (0.0045) +[2024-06-10 20:26:43,009][46990] Updated weights for policy 0, policy_version 22640 (0.0037) +[2024-06-10 20:26:43,240][46753] Fps is (10 sec: 50789.6, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 370933760. Throughput: 0: 43737.6. Samples: 371067260. Policy #0 lag: (min: 0.0, avg: 12.8, max: 25.0) +[2024-06-10 20:26:43,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:26:47,918][46990] Updated weights for policy 0, policy_version 22650 (0.0048) +[2024-06-10 20:26:48,239][46753] Fps is (10 sec: 39339.0, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 371097600. Throughput: 0: 43942.1. Samples: 371194300. Policy #0 lag: (min: 0.0, avg: 12.8, max: 25.0) +[2024-06-10 20:26:48,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:26:50,901][46990] Updated weights for policy 0, policy_version 22660 (0.0057) +[2024-06-10 20:26:53,240][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 371359744. Throughput: 0: 43789.2. Samples: 371452860. 
Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 20:26:53,240][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:26:55,723][46990] Updated weights for policy 0, policy_version 22670 (0.0048) +[2024-06-10 20:26:58,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43147.7, 300 sec: 43598.1). Total num frames: 371556352. Throughput: 0: 43971.2. Samples: 371728820. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 20:26:58,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:26:58,464][46990] Updated weights for policy 0, policy_version 22680 (0.0042) +[2024-06-10 20:27:03,239][46753] Fps is (10 sec: 37683.4, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 371736576. Throughput: 0: 43642.6. Samples: 371843160. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 20:27:03,240][46753] Avg episode reward: [(0, '0.258')] +[2024-06-10 20:27:03,441][46990] Updated weights for policy 0, policy_version 22690 (0.0033) +[2024-06-10 20:27:05,839][46990] Updated weights for policy 0, policy_version 22700 (0.0034) +[2024-06-10 20:27:08,240][46753] Fps is (10 sec: 47513.1, 60 sec: 44236.7, 300 sec: 43653.6). Total num frames: 372031488. Throughput: 0: 43620.8. Samples: 372106100. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:27:08,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:27:10,663][46990] Updated weights for policy 0, policy_version 22710 (0.0027) +[2024-06-10 20:27:12,109][46970] Signal inference workers to stop experience collection... (5300 times) +[2024-06-10 20:27:12,158][46990] InferenceWorker_p0-w0: stopping experience collection (5300 times) +[2024-06-10 20:27:12,164][46970] Signal inference workers to resume experience collection... (5300 times) +[2024-06-10 20:27:12,167][46990] InferenceWorker_p0-w0: resuming experience collection (5300 times) +[2024-06-10 20:27:13,196][46990] Updated weights for policy 0, policy_version 22720 (0.0034) +[2024-06-10 20:27:13,239][46753] Fps is (10 sec: 50790.4, 60 sec: 43690.7, 300 sec: 43654.2). Total num frames: 372244480. Throughput: 0: 43670.1. Samples: 372378360. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:27:13,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:27:17,767][46990] Updated weights for policy 0, policy_version 22730 (0.0039) +[2024-06-10 20:27:18,239][46753] Fps is (10 sec: 37683.6, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 372408320. Throughput: 0: 43726.6. Samples: 372503420. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:27:18,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:27:20,912][46990] Updated weights for policy 0, policy_version 22740 (0.0037) +[2024-06-10 20:27:23,239][46753] Fps is (10 sec: 45875.3, 60 sec: 44236.8, 300 sec: 43653.6). Total num frames: 372703232. Throughput: 0: 43826.1. Samples: 372770280. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:27:23,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:27:25,422][46990] Updated weights for policy 0, policy_version 22750 (0.0031) +[2024-06-10 20:27:28,234][46990] Updated weights for policy 0, policy_version 22760 (0.0031) +[2024-06-10 20:27:28,240][46753] Fps is (10 sec: 49151.5, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 372899840. Throughput: 0: 43876.9. Samples: 373041720. 
Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:27:28,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:27:33,240][46753] Fps is (10 sec: 36043.9, 60 sec: 43963.5, 300 sec: 43543.2). Total num frames: 373063680. Throughput: 0: 43815.8. Samples: 373166020. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:27:33,241][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:27:33,243][46990] Updated weights for policy 0, policy_version 22770 (0.0038) +[2024-06-10 20:27:35,900][46990] Updated weights for policy 0, policy_version 22780 (0.0039) +[2024-06-10 20:27:38,240][46753] Fps is (10 sec: 45875.3, 60 sec: 44240.0, 300 sec: 43709.2). Total num frames: 373358592. Throughput: 0: 44012.9. Samples: 373433440. Policy #0 lag: (min: 2.0, avg: 12.6, max: 22.0) +[2024-06-10 20:27:38,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:27:40,442][46990] Updated weights for policy 0, policy_version 22790 (0.0028) +[2024-06-10 20:27:43,239][46753] Fps is (10 sec: 47515.1, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 373538816. Throughput: 0: 43826.7. Samples: 373701020. Policy #0 lag: (min: 2.0, avg: 12.6, max: 22.0) +[2024-06-10 20:27:43,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:27:43,323][46990] Updated weights for policy 0, policy_version 22800 (0.0038) +[2024-06-10 20:27:47,623][46990] Updated weights for policy 0, policy_version 22810 (0.0034) +[2024-06-10 20:27:48,239][46753] Fps is (10 sec: 36045.6, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 373719040. Throughput: 0: 43990.8. Samples: 373822740. Policy #0 lag: (min: 2.0, avg: 12.6, max: 22.0) +[2024-06-10 20:27:48,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:27:50,891][46990] Updated weights for policy 0, policy_version 22820 (0.0028) +[2024-06-10 20:27:53,241][46753] Fps is (10 sec: 47506.0, 60 sec: 44235.7, 300 sec: 43709.0). Total num frames: 374013952. Throughput: 0: 44011.5. Samples: 374086680. Policy #0 lag: (min: 0.0, avg: 13.1, max: 21.0) +[2024-06-10 20:27:53,241][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:27:55,441][46990] Updated weights for policy 0, policy_version 22830 (0.0035) +[2024-06-10 20:27:58,239][46753] Fps is (10 sec: 47513.0, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 374194176. Throughput: 0: 44012.1. Samples: 374358900. Policy #0 lag: (min: 0.0, avg: 13.1, max: 21.0) +[2024-06-10 20:27:58,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:27:58,306][46990] Updated weights for policy 0, policy_version 22840 (0.0031) +[2024-06-10 20:28:03,057][46990] Updated weights for policy 0, policy_version 22850 (0.0028) +[2024-06-10 20:28:03,240][46753] Fps is (10 sec: 36049.8, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 374374400. Throughput: 0: 43997.2. Samples: 374483300. Policy #0 lag: (min: 0.0, avg: 13.1, max: 21.0) +[2024-06-10 20:28:03,240][46753] Avg episode reward: [(0, '0.255')] +[2024-06-10 20:28:05,732][46990] Updated weights for policy 0, policy_version 22860 (0.0040) +[2024-06-10 20:28:08,239][46753] Fps is (10 sec: 47513.5, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 374669312. Throughput: 0: 43876.9. Samples: 374744740. 
Policy #0 lag: (min: 0.0, avg: 13.1, max: 21.0) +[2024-06-10 20:28:08,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:28:10,261][46990] Updated weights for policy 0, policy_version 22870 (0.0030) +[2024-06-10 20:28:13,080][46990] Updated weights for policy 0, policy_version 22880 (0.0035) +[2024-06-10 20:28:13,240][46753] Fps is (10 sec: 49152.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 374865920. Throughput: 0: 43840.0. Samples: 375014520. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:28:13,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:28:17,426][46990] Updated weights for policy 0, policy_version 22890 (0.0036) +[2024-06-10 20:28:18,241][46753] Fps is (10 sec: 37676.6, 60 sec: 43962.4, 300 sec: 43708.9). Total num frames: 375046144. Throughput: 0: 43978.1. Samples: 375145100. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:28:18,242][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:28:18,697][46970] Signal inference workers to stop experience collection... (5350 times) +[2024-06-10 20:28:18,710][46990] InferenceWorker_p0-w0: stopping experience collection (5350 times) +[2024-06-10 20:28:18,756][46970] Signal inference workers to resume experience collection... (5350 times) +[2024-06-10 20:28:18,757][46990] InferenceWorker_p0-w0: resuming experience collection (5350 times) +[2024-06-10 20:28:20,626][46990] Updated weights for policy 0, policy_version 22900 (0.0039) +[2024-06-10 20:28:23,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 375324672. Throughput: 0: 43870.8. Samples: 375407620. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:28:23,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:28:23,262][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000022908_375324672.pth... +[2024-06-10 20:28:23,311][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000022264_364773376.pth +[2024-06-10 20:28:25,269][46990] Updated weights for policy 0, policy_version 22910 (0.0029) +[2024-06-10 20:28:28,062][46990] Updated weights for policy 0, policy_version 22920 (0.0044) +[2024-06-10 20:28:28,239][46753] Fps is (10 sec: 47522.4, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 375521280. Throughput: 0: 43684.5. Samples: 375666820. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 20:28:28,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:28:32,870][46990] Updated weights for policy 0, policy_version 22930 (0.0032) +[2024-06-10 20:28:33,239][46753] Fps is (10 sec: 37683.0, 60 sec: 43963.9, 300 sec: 43709.8). Total num frames: 375701504. Throughput: 0: 43883.8. Samples: 375797520. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 20:28:33,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:28:35,754][46990] Updated weights for policy 0, policy_version 22940 (0.0042) +[2024-06-10 20:28:38,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 375980032. Throughput: 0: 43828.1. Samples: 376058880. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 20:28:38,243][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:28:40,119][46990] Updated weights for policy 0, policy_version 22950 (0.0028) +[2024-06-10 20:28:43,107][46990] Updated weights for policy 0, policy_version 22960 (0.0032) +[2024-06-10 20:28:43,240][46753] Fps is (10 sec: 47513.4, 60 sec: 43963.6, 300 sec: 43764.7). 
Total num frames: 376176640. Throughput: 0: 43736.8. Samples: 376327060. Policy #0 lag: (min: 0.0, avg: 7.4, max: 21.0) +[2024-06-10 20:28:43,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:28:47,298][46990] Updated weights for policy 0, policy_version 22970 (0.0047) +[2024-06-10 20:28:48,239][46753] Fps is (10 sec: 36045.1, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 376340480. Throughput: 0: 43934.8. Samples: 376460360. Policy #0 lag: (min: 0.0, avg: 7.4, max: 21.0) +[2024-06-10 20:28:48,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:28:50,735][46990] Updated weights for policy 0, policy_version 22980 (0.0038) +[2024-06-10 20:28:53,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43691.8, 300 sec: 43709.7). Total num frames: 376635392. Throughput: 0: 43724.0. Samples: 376712320. Policy #0 lag: (min: 0.0, avg: 7.4, max: 21.0) +[2024-06-10 20:28:53,244][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:28:55,046][46990] Updated weights for policy 0, policy_version 22990 (0.0037) +[2024-06-10 20:28:58,239][46990] Updated weights for policy 0, policy_version 23000 (0.0023) +[2024-06-10 20:28:58,239][46753] Fps is (10 sec: 49151.9, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 376832000. Throughput: 0: 43650.7. Samples: 376978800. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:28:58,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:29:02,892][46990] Updated weights for policy 0, policy_version 23010 (0.0031) +[2024-06-10 20:29:03,240][46753] Fps is (10 sec: 36044.6, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 376995840. Throughput: 0: 43554.1. Samples: 377104960. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:29:03,240][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:29:05,668][46990] Updated weights for policy 0, policy_version 23020 (0.0025) +[2024-06-10 20:29:08,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 377290752. Throughput: 0: 43387.2. Samples: 377360040. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:29:08,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:29:10,207][46990] Updated weights for policy 0, policy_version 23030 (0.0028) +[2024-06-10 20:29:12,680][46970] Signal inference workers to stop experience collection... (5400 times) +[2024-06-10 20:29:12,680][46970] Signal inference workers to resume experience collection... (5400 times) +[2024-06-10 20:29:12,722][46990] InferenceWorker_p0-w0: stopping experience collection (5400 times) +[2024-06-10 20:29:12,722][46990] InferenceWorker_p0-w0: resuming experience collection (5400 times) +[2024-06-10 20:29:13,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 377470976. Throughput: 0: 43647.0. Samples: 377630940. Policy #0 lag: (min: 0.0, avg: 7.7, max: 21.0) +[2024-06-10 20:29:13,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:29:13,274][46990] Updated weights for policy 0, policy_version 23040 (0.0058) +[2024-06-10 20:29:17,515][46990] Updated weights for policy 0, policy_version 23050 (0.0032) +[2024-06-10 20:29:18,240][46753] Fps is (10 sec: 36044.1, 60 sec: 43418.8, 300 sec: 43653.7). Total num frames: 377651200. Throughput: 0: 43559.0. Samples: 377757680. 
Policy #0 lag: (min: 0.0, avg: 7.7, max: 21.0) +[2024-06-10 20:29:18,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:29:21,007][46990] Updated weights for policy 0, policy_version 23060 (0.0027) +[2024-06-10 20:29:23,240][46753] Fps is (10 sec: 47513.0, 60 sec: 43690.6, 300 sec: 43709.1). Total num frames: 377946112. Throughput: 0: 43507.0. Samples: 378016700. Policy #0 lag: (min: 0.0, avg: 7.7, max: 21.0) +[2024-06-10 20:29:23,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:29:25,329][46990] Updated weights for policy 0, policy_version 23070 (0.0048) +[2024-06-10 20:29:28,244][46753] Fps is (10 sec: 45855.4, 60 sec: 43141.3, 300 sec: 43708.5). Total num frames: 378109952. Throughput: 0: 43434.9. Samples: 378281820. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 20:29:28,244][46753] Avg episode reward: [(0, '0.254')] +[2024-06-10 20:29:28,639][46990] Updated weights for policy 0, policy_version 23080 (0.0032) +[2024-06-10 20:29:33,239][46753] Fps is (10 sec: 34406.9, 60 sec: 43144.6, 300 sec: 43598.1). Total num frames: 378290176. Throughput: 0: 43255.6. Samples: 378406860. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 20:29:33,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:29:33,246][46990] Updated weights for policy 0, policy_version 23090 (0.0031) +[2024-06-10 20:29:35,931][46990] Updated weights for policy 0, policy_version 23100 (0.0032) +[2024-06-10 20:29:38,239][46753] Fps is (10 sec: 49173.6, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 378601472. Throughput: 0: 43393.8. Samples: 378665040. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 20:29:38,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:29:40,487][46990] Updated weights for policy 0, policy_version 23110 (0.0024) +[2024-06-10 20:29:43,239][46753] Fps is (10 sec: 49152.1, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 378781696. Throughput: 0: 43519.1. Samples: 378937160. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 20:29:43,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:29:43,328][46990] Updated weights for policy 0, policy_version 23120 (0.0037) +[2024-06-10 20:29:48,143][46990] Updated weights for policy 0, policy_version 23130 (0.0048) +[2024-06-10 20:29:48,240][46753] Fps is (10 sec: 36044.3, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 378961920. Throughput: 0: 43523.5. Samples: 379063520. Policy #0 lag: (min: 1.0, avg: 12.1, max: 25.0) +[2024-06-10 20:29:48,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:29:51,254][46990] Updated weights for policy 0, policy_version 23140 (0.0043) +[2024-06-10 20:29:53,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 379256832. Throughput: 0: 43586.6. Samples: 379321440. Policy #0 lag: (min: 1.0, avg: 12.1, max: 25.0) +[2024-06-10 20:29:53,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:29:55,826][46990] Updated weights for policy 0, policy_version 23150 (0.0033) +[2024-06-10 20:29:58,239][46753] Fps is (10 sec: 44237.8, 60 sec: 42871.5, 300 sec: 43709.2). Total num frames: 379404288. Throughput: 0: 43556.5. Samples: 379590980. Policy #0 lag: (min: 1.0, avg: 12.1, max: 25.0) +[2024-06-10 20:29:58,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:29:58,609][46990] Updated weights for policy 0, policy_version 23160 (0.0037) +[2024-06-10 20:30:03,239][46753] Fps is (10 sec: 34406.5, 60 sec: 43417.7, 300 sec: 43653.6). Total num frames: 379600896. 
Throughput: 0: 43421.5. Samples: 379711640. Policy #0 lag: (min: 0.0, avg: 11.4, max: 24.0) +[2024-06-10 20:30:03,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:30:03,274][46990] Updated weights for policy 0, policy_version 23170 (0.0029) +[2024-06-10 20:30:05,998][46990] Updated weights for policy 0, policy_version 23180 (0.0049) +[2024-06-10 20:30:08,239][46753] Fps is (10 sec: 50790.4, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 379912192. Throughput: 0: 43452.2. Samples: 379972040. Policy #0 lag: (min: 0.0, avg: 11.4, max: 24.0) +[2024-06-10 20:30:08,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:30:10,487][46990] Updated weights for policy 0, policy_version 23190 (0.0034) +[2024-06-10 20:30:12,914][46970] Signal inference workers to stop experience collection... (5450 times) +[2024-06-10 20:30:12,915][46970] Signal inference workers to resume experience collection... (5450 times) +[2024-06-10 20:30:12,930][46990] InferenceWorker_p0-w0: stopping experience collection (5450 times) +[2024-06-10 20:30:12,930][46990] InferenceWorker_p0-w0: resuming experience collection (5450 times) +[2024-06-10 20:30:13,239][46753] Fps is (10 sec: 47513.0, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 380076032. Throughput: 0: 43635.8. Samples: 380245240. Policy #0 lag: (min: 0.0, avg: 11.4, max: 24.0) +[2024-06-10 20:30:13,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:30:13,530][46990] Updated weights for policy 0, policy_version 23200 (0.0028) +[2024-06-10 20:30:18,239][46753] Fps is (10 sec: 36045.2, 60 sec: 43690.9, 300 sec: 43598.1). Total num frames: 380272640. Throughput: 0: 43568.6. Samples: 380367440. Policy #0 lag: (min: 0.0, avg: 12.3, max: 24.0) +[2024-06-10 20:30:18,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:30:18,250][46990] Updated weights for policy 0, policy_version 23210 (0.0045) +[2024-06-10 20:30:21,489][46990] Updated weights for policy 0, policy_version 23220 (0.0035) +[2024-06-10 20:30:23,240][46753] Fps is (10 sec: 47513.3, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 380551168. Throughput: 0: 43627.1. Samples: 380628260. Policy #0 lag: (min: 0.0, avg: 12.3, max: 24.0) +[2024-06-10 20:30:23,240][46753] Avg episode reward: [(0, '0.253')] +[2024-06-10 20:30:23,257][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000023227_380551168.pth... +[2024-06-10 20:30:23,315][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000022586_370049024.pth +[2024-06-10 20:30:25,934][46990] Updated weights for policy 0, policy_version 23230 (0.0029) +[2024-06-10 20:30:28,239][46753] Fps is (10 sec: 40959.4, 60 sec: 42874.6, 300 sec: 43653.6). Total num frames: 380682240. Throughput: 0: 43463.1. Samples: 380893000. Policy #0 lag: (min: 0.0, avg: 12.3, max: 24.0) +[2024-06-10 20:30:28,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:30:29,050][46990] Updated weights for policy 0, policy_version 23240 (0.0042) +[2024-06-10 20:30:33,239][46753] Fps is (10 sec: 36045.5, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 380911616. Throughput: 0: 43159.8. Samples: 381005700. 
Policy #0 lag: (min: 0.0, avg: 14.5, max: 26.0) +[2024-06-10 20:30:33,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:30:33,290][46990] Updated weights for policy 0, policy_version 23250 (0.0035) +[2024-06-10 20:30:36,404][46990] Updated weights for policy 0, policy_version 23260 (0.0030) +[2024-06-10 20:30:38,240][46753] Fps is (10 sec: 52428.3, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 381206528. Throughput: 0: 43470.1. Samples: 381277600. Policy #0 lag: (min: 0.0, avg: 14.5, max: 26.0) +[2024-06-10 20:30:38,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:30:40,395][46990] Updated weights for policy 0, policy_version 23270 (0.0027) +[2024-06-10 20:30:43,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43144.5, 300 sec: 43709.2). Total num frames: 381370368. Throughput: 0: 43643.9. Samples: 381554960. Policy #0 lag: (min: 0.0, avg: 14.5, max: 26.0) +[2024-06-10 20:30:43,244][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:30:43,844][46990] Updated weights for policy 0, policy_version 23280 (0.0039) +[2024-06-10 20:30:48,033][46990] Updated weights for policy 0, policy_version 23290 (0.0038) +[2024-06-10 20:30:48,240][46753] Fps is (10 sec: 37681.8, 60 sec: 43690.4, 300 sec: 43598.0). Total num frames: 381583360. Throughput: 0: 43625.2. Samples: 381674800. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 20:30:48,241][46753] Avg episode reward: [(0, '0.251')] +[2024-06-10 20:30:51,682][46990] Updated weights for policy 0, policy_version 23300 (0.0042) +[2024-06-10 20:30:53,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43417.5, 300 sec: 43709.8). Total num frames: 381861888. Throughput: 0: 43660.3. Samples: 381936760. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 20:30:53,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:30:53,246][46970] Saving new best policy, reward=0.283! +[2024-06-10 20:30:55,759][46990] Updated weights for policy 0, policy_version 23310 (0.0040) +[2024-06-10 20:30:58,239][46753] Fps is (10 sec: 40962.3, 60 sec: 43144.6, 300 sec: 43653.6). Total num frames: 381992960. Throughput: 0: 43623.7. Samples: 382208300. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 20:30:58,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:30:59,073][46990] Updated weights for policy 0, policy_version 23320 (0.0041) +[2024-06-10 20:31:03,159][46990] Updated weights for policy 0, policy_version 23330 (0.0028) +[2024-06-10 20:31:03,244][46753] Fps is (10 sec: 37666.4, 60 sec: 43960.4, 300 sec: 43597.4). Total num frames: 382238720. Throughput: 0: 43378.2. Samples: 382319660. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:31:03,245][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:31:06,673][46990] Updated weights for policy 0, policy_version 23340 (0.0042) +[2024-06-10 20:31:08,239][46753] Fps is (10 sec: 50790.3, 60 sec: 43144.5, 300 sec: 43653.7). Total num frames: 382500864. Throughput: 0: 43572.2. Samples: 382589000. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:31:08,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:31:10,291][46990] Updated weights for policy 0, policy_version 23350 (0.0039) +[2024-06-10 20:31:11,407][46970] Signal inference workers to stop experience collection... (5500 times) +[2024-06-10 20:31:11,408][46970] Signal inference workers to resume experience collection... 
(5500 times) +[2024-06-10 20:31:11,464][46990] InferenceWorker_p0-w0: stopping experience collection (5500 times) +[2024-06-10 20:31:11,464][46990] InferenceWorker_p0-w0: resuming experience collection (5500 times) +[2024-06-10 20:31:13,239][46753] Fps is (10 sec: 40978.8, 60 sec: 42871.5, 300 sec: 43598.1). Total num frames: 382648320. Throughput: 0: 43639.2. Samples: 382856760. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:31:13,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:31:14,020][46990] Updated weights for policy 0, policy_version 23360 (0.0034) +[2024-06-10 20:31:17,842][46990] Updated weights for policy 0, policy_version 23370 (0.0036) +[2024-06-10 20:31:18,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 382910464. Throughput: 0: 43751.1. Samples: 382974500. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 20:31:18,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:31:21,945][46990] Updated weights for policy 0, policy_version 23380 (0.0037) +[2024-06-10 20:31:23,239][46753] Fps is (10 sec: 50790.7, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 383156224. Throughput: 0: 43719.8. Samples: 383244980. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:31:23,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:31:25,371][46990] Updated weights for policy 0, policy_version 23390 (0.0037) +[2024-06-10 20:31:28,244][46753] Fps is (10 sec: 39303.7, 60 sec: 43687.4, 300 sec: 43653.0). Total num frames: 383303680. Throughput: 0: 43254.8. Samples: 383501620. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:31:28,245][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:31:29,511][46990] Updated weights for policy 0, policy_version 23400 (0.0024) +[2024-06-10 20:31:32,495][46990] Updated weights for policy 0, policy_version 23410 (0.0039) +[2024-06-10 20:31:33,239][46753] Fps is (10 sec: 39321.1, 60 sec: 43963.7, 300 sec: 43543.2). Total num frames: 383549440. Throughput: 0: 43354.2. Samples: 383625720. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 20:31:33,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:31:36,831][46990] Updated weights for policy 0, policy_version 23420 (0.0031) +[2024-06-10 20:31:38,241][46753] Fps is (10 sec: 50806.4, 60 sec: 43416.7, 300 sec: 43653.5). Total num frames: 383811584. Throughput: 0: 43618.3. Samples: 383899640. Policy #0 lag: (min: 0.0, avg: 6.9, max: 21.0) +[2024-06-10 20:31:38,241][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:31:40,092][46990] Updated weights for policy 0, policy_version 23430 (0.0037) +[2024-06-10 20:31:43,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43144.6, 300 sec: 43598.1). Total num frames: 383959040. Throughput: 0: 43628.9. Samples: 384171600. Policy #0 lag: (min: 0.0, avg: 6.9, max: 21.0) +[2024-06-10 20:31:43,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:31:44,207][46990] Updated weights for policy 0, policy_version 23440 (0.0039) +[2024-06-10 20:31:47,264][46990] Updated weights for policy 0, policy_version 23450 (0.0029) +[2024-06-10 20:31:48,240][46753] Fps is (10 sec: 42603.4, 60 sec: 44237.0, 300 sec: 43653.6). Total num frames: 384237568. Throughput: 0: 43794.9. Samples: 384290240. 
Policy #0 lag: (min: 0.0, avg: 6.9, max: 21.0) +[2024-06-10 20:31:48,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:31:51,823][46990] Updated weights for policy 0, policy_version 23460 (0.0032) +[2024-06-10 20:31:53,239][46753] Fps is (10 sec: 50790.3, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 384466944. Throughput: 0: 43748.0. Samples: 384557660. Policy #0 lag: (min: 0.0, avg: 8.8, max: 23.0) +[2024-06-10 20:31:53,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:31:53,302][46970] Saving new best policy, reward=0.284! +[2024-06-10 20:31:54,830][46990] Updated weights for policy 0, policy_version 23470 (0.0032) +[2024-06-10 20:31:58,239][46753] Fps is (10 sec: 37683.7, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 384614400. Throughput: 0: 43647.0. Samples: 384820880. Policy #0 lag: (min: 0.0, avg: 8.8, max: 23.0) +[2024-06-10 20:31:58,251][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:31:59,559][46990] Updated weights for policy 0, policy_version 23480 (0.0040) +[2024-06-10 20:32:02,738][46990] Updated weights for policy 0, policy_version 23490 (0.0048) +[2024-06-10 20:32:03,244][46753] Fps is (10 sec: 44216.7, 60 sec: 44509.9, 300 sec: 43653.0). Total num frames: 384909312. Throughput: 0: 43670.7. Samples: 384939880. Policy #0 lag: (min: 0.0, avg: 8.8, max: 23.0) +[2024-06-10 20:32:03,245][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:32:07,028][46970] Signal inference workers to stop experience collection... (5550 times) +[2024-06-10 20:32:07,028][46970] Signal inference workers to resume experience collection... (5550 times) +[2024-06-10 20:32:07,029][46990] Updated weights for policy 0, policy_version 23500 (0.0034) +[2024-06-10 20:32:07,077][46990] InferenceWorker_p0-w0: stopping experience collection (5550 times) +[2024-06-10 20:32:07,077][46990] InferenceWorker_p0-w0: resuming experience collection (5550 times) +[2024-06-10 20:32:08,239][46753] Fps is (10 sec: 50790.7, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 385122304. Throughput: 0: 43815.0. Samples: 385216660. Policy #0 lag: (min: 1.0, avg: 10.4, max: 23.0) +[2024-06-10 20:32:08,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:32:09,895][46990] Updated weights for policy 0, policy_version 23510 (0.0035) +[2024-06-10 20:32:13,239][46753] Fps is (10 sec: 36061.2, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 385269760. Throughput: 0: 43901.8. Samples: 385477000. Policy #0 lag: (min: 1.0, avg: 10.4, max: 23.0) +[2024-06-10 20:32:13,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:32:14,343][46990] Updated weights for policy 0, policy_version 23520 (0.0030) +[2024-06-10 20:32:17,234][46990] Updated weights for policy 0, policy_version 23530 (0.0036) +[2024-06-10 20:32:18,239][46753] Fps is (10 sec: 44236.4, 60 sec: 44236.7, 300 sec: 43598.1). Total num frames: 385564672. Throughput: 0: 43907.1. Samples: 385601540. Policy #0 lag: (min: 1.0, avg: 10.4, max: 23.0) +[2024-06-10 20:32:18,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:32:22,050][46990] Updated weights for policy 0, policy_version 23540 (0.0038) +[2024-06-10 20:32:23,239][46753] Fps is (10 sec: 49152.0, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 385761280. Throughput: 0: 43808.5. Samples: 385870960. 
Policy #0 lag: (min: 1.0, avg: 10.4, max: 23.0) +[2024-06-10 20:32:23,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:32:23,286][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000023546_385777664.pth... +[2024-06-10 20:32:23,340][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000022908_375324672.pth +[2024-06-10 20:32:24,851][46990] Updated weights for policy 0, policy_version 23550 (0.0029) +[2024-06-10 20:32:28,239][46753] Fps is (10 sec: 36045.2, 60 sec: 43694.0, 300 sec: 43598.1). Total num frames: 385925120. Throughput: 0: 43488.8. Samples: 386128600. Policy #0 lag: (min: 0.0, avg: 11.1, max: 20.0) +[2024-06-10 20:32:28,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:32:29,652][46990] Updated weights for policy 0, policy_version 23560 (0.0031) +[2024-06-10 20:32:32,187][46990] Updated weights for policy 0, policy_version 23570 (0.0045) +[2024-06-10 20:32:33,239][46753] Fps is (10 sec: 45875.3, 60 sec: 44509.9, 300 sec: 43598.1). Total num frames: 386220032. Throughput: 0: 43577.1. Samples: 386251200. Policy #0 lag: (min: 0.0, avg: 11.1, max: 20.0) +[2024-06-10 20:32:33,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:32:36,902][46990] Updated weights for policy 0, policy_version 23580 (0.0026) +[2024-06-10 20:32:38,239][46753] Fps is (10 sec: 50790.5, 60 sec: 43691.7, 300 sec: 43709.2). Total num frames: 386433024. Throughput: 0: 43821.3. Samples: 386529620. Policy #0 lag: (min: 0.0, avg: 11.1, max: 20.0) +[2024-06-10 20:32:38,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:32:39,681][46990] Updated weights for policy 0, policy_version 23590 (0.0035) +[2024-06-10 20:32:43,239][46753] Fps is (10 sec: 36044.6, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 386580480. Throughput: 0: 43968.9. Samples: 386799480. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:32:43,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:32:44,205][46990] Updated weights for policy 0, policy_version 23600 (0.0041) +[2024-06-10 20:32:46,733][46990] Updated weights for policy 0, policy_version 23610 (0.0040) +[2024-06-10 20:32:48,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.8, 300 sec: 43598.3). Total num frames: 386875392. Throughput: 0: 43913.7. Samples: 386915800. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:32:48,244][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:32:51,490][46990] Updated weights for policy 0, policy_version 23620 (0.0028) +[2024-06-10 20:32:53,239][46753] Fps is (10 sec: 49152.4, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 387072000. Throughput: 0: 43845.9. Samples: 387189720. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:32:53,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:32:54,398][46990] Updated weights for policy 0, policy_version 23630 (0.0035) +[2024-06-10 20:32:58,239][46753] Fps is (10 sec: 36045.2, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 387235840. Throughput: 0: 43977.4. Samples: 387455980. Policy #0 lag: (min: 0.0, avg: 13.0, max: 22.0) +[2024-06-10 20:32:58,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:32:59,180][46990] Updated weights for policy 0, policy_version 23640 (0.0044) +[2024-06-10 20:33:01,611][46990] Updated weights for policy 0, policy_version 23650 (0.0046) +[2024-06-10 20:33:03,239][46753] Fps is (10 sec: 45874.4, 60 sec: 43693.9, 300 sec: 43598.1). Total num frames: 387530752. 
Throughput: 0: 43844.9. Samples: 387574560. Policy #0 lag: (min: 0.0, avg: 13.0, max: 22.0) +[2024-06-10 20:33:03,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:33:06,560][46990] Updated weights for policy 0, policy_version 23660 (0.0028) +[2024-06-10 20:33:08,240][46753] Fps is (10 sec: 49151.2, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 387727360. Throughput: 0: 43927.0. Samples: 387847680. Policy #0 lag: (min: 0.0, avg: 13.0, max: 22.0) +[2024-06-10 20:33:08,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:33:08,856][46970] Signal inference workers to stop experience collection... (5600 times) +[2024-06-10 20:33:08,857][46970] Signal inference workers to resume experience collection... (5600 times) +[2024-06-10 20:33:08,904][46990] InferenceWorker_p0-w0: stopping experience collection (5600 times) +[2024-06-10 20:33:08,904][46990] InferenceWorker_p0-w0: resuming experience collection (5600 times) +[2024-06-10 20:33:09,303][46990] Updated weights for policy 0, policy_version 23670 (0.0039) +[2024-06-10 20:33:13,239][46753] Fps is (10 sec: 37683.3, 60 sec: 43963.7, 300 sec: 43598.4). Total num frames: 387907584. Throughput: 0: 44180.8. Samples: 388116740. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:33:13,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:33:13,825][46990] Updated weights for policy 0, policy_version 23680 (0.0038) +[2024-06-10 20:33:16,758][46990] Updated weights for policy 0, policy_version 23690 (0.0039) +[2024-06-10 20:33:18,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 388186112. Throughput: 0: 43923.1. Samples: 388227740. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:33:18,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:33:21,579][46990] Updated weights for policy 0, policy_version 23700 (0.0033) +[2024-06-10 20:33:23,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 388382720. Throughput: 0: 43754.4. Samples: 388498580. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:33:23,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:33:24,327][46990] Updated weights for policy 0, policy_version 23710 (0.0033) +[2024-06-10 20:33:28,239][46753] Fps is (10 sec: 36044.4, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 388546560. Throughput: 0: 43807.5. Samples: 388770820. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 20:33:28,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:33:29,198][46990] Updated weights for policy 0, policy_version 23720 (0.0041) +[2024-06-10 20:33:31,930][46990] Updated weights for policy 0, policy_version 23730 (0.0026) +[2024-06-10 20:33:33,239][46753] Fps is (10 sec: 45876.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 388841472. Throughput: 0: 43761.4. Samples: 388885060. Policy #0 lag: (min: 0.0, avg: 6.0, max: 21.0) +[2024-06-10 20:33:33,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:33:36,454][46990] Updated weights for policy 0, policy_version 23740 (0.0025) +[2024-06-10 20:33:38,240][46753] Fps is (10 sec: 50790.3, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 389054464. Throughput: 0: 43708.3. Samples: 389156600. 
Policy #0 lag: (min: 0.0, avg: 6.0, max: 21.0) +[2024-06-10 20:33:38,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:33:39,230][46990] Updated weights for policy 0, policy_version 23750 (0.0038) +[2024-06-10 20:33:43,239][46753] Fps is (10 sec: 37683.1, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 389218304. Throughput: 0: 43893.7. Samples: 389431200. Policy #0 lag: (min: 0.0, avg: 6.0, max: 21.0) +[2024-06-10 20:33:43,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:33:43,821][46990] Updated weights for policy 0, policy_version 23760 (0.0043) +[2024-06-10 20:33:46,876][46990] Updated weights for policy 0, policy_version 23770 (0.0033) +[2024-06-10 20:33:48,240][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 389496832. Throughput: 0: 43748.4. Samples: 389543240. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:33:48,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:33:51,549][46990] Updated weights for policy 0, policy_version 23780 (0.0038) +[2024-06-10 20:33:53,239][46753] Fps is (10 sec: 50790.1, 60 sec: 44236.7, 300 sec: 43709.2). Total num frames: 389726208. Throughput: 0: 43522.7. Samples: 389806200. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:33:53,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:33:54,228][46990] Updated weights for policy 0, policy_version 23790 (0.0043) +[2024-06-10 20:33:58,239][46753] Fps is (10 sec: 36045.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 389857280. Throughput: 0: 43754.3. Samples: 390085680. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:33:58,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:33:59,047][46990] Updated weights for policy 0, policy_version 23800 (0.0038) +[2024-06-10 20:34:01,993][46990] Updated weights for policy 0, policy_version 23810 (0.0041) +[2024-06-10 20:34:03,240][46753] Fps is (10 sec: 42596.3, 60 sec: 43690.4, 300 sec: 43598.0). Total num frames: 390152192. Throughput: 0: 43747.0. Samples: 390196380. Policy #0 lag: (min: 0.0, avg: 11.9, max: 22.0) +[2024-06-10 20:34:03,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:34:06,720][46990] Updated weights for policy 0, policy_version 23820 (0.0040) +[2024-06-10 20:34:08,239][46753] Fps is (10 sec: 50789.8, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 390365184. Throughput: 0: 43733.0. Samples: 390466560. Policy #0 lag: (min: 0.0, avg: 11.9, max: 22.0) +[2024-06-10 20:34:08,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:34:09,382][46990] Updated weights for policy 0, policy_version 23830 (0.0035) +[2024-06-10 20:34:13,239][46753] Fps is (10 sec: 36046.7, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 390512640. Throughput: 0: 43645.0. Samples: 390734840. Policy #0 lag: (min: 0.0, avg: 11.9, max: 22.0) +[2024-06-10 20:34:13,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:34:14,046][46970] Signal inference workers to stop experience collection... (5650 times) +[2024-06-10 20:34:14,046][46970] Signal inference workers to resume experience collection... 
(5650 times) +[2024-06-10 20:34:14,090][46990] InferenceWorker_p0-w0: stopping experience collection (5650 times) +[2024-06-10 20:34:14,090][46990] InferenceWorker_p0-w0: resuming experience collection (5650 times) +[2024-06-10 20:34:14,495][46990] Updated weights for policy 0, policy_version 23840 (0.0029) +[2024-06-10 20:34:16,853][46990] Updated weights for policy 0, policy_version 23850 (0.0042) +[2024-06-10 20:34:18,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 390807552. Throughput: 0: 43549.8. Samples: 390844800. Policy #0 lag: (min: 0.0, avg: 12.5, max: 21.0) +[2024-06-10 20:34:18,244][46753] Avg episode reward: [(0, '0.250')] +[2024-06-10 20:34:21,943][46990] Updated weights for policy 0, policy_version 23860 (0.0034) +[2024-06-10 20:34:23,240][46753] Fps is (10 sec: 50789.5, 60 sec: 43963.8, 300 sec: 43765.4). Total num frames: 391020544. Throughput: 0: 43519.9. Samples: 391115000. Policy #0 lag: (min: 0.0, avg: 12.5, max: 21.0) +[2024-06-10 20:34:23,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:34:23,256][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000023866_391020544.pth... +[2024-06-10 20:34:23,317][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000023227_380551168.pth +[2024-06-10 20:34:24,355][46990] Updated weights for policy 0, policy_version 23870 (0.0024) +[2024-06-10 20:34:28,239][46753] Fps is (10 sec: 36044.9, 60 sec: 43690.8, 300 sec: 43653.7). Total num frames: 391168000. Throughput: 0: 43446.7. Samples: 391386300. Policy #0 lag: (min: 0.0, avg: 12.5, max: 21.0) +[2024-06-10 20:34:28,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:34:29,352][46990] Updated weights for policy 0, policy_version 23880 (0.0027) +[2024-06-10 20:34:31,806][46990] Updated weights for policy 0, policy_version 23890 (0.0031) +[2024-06-10 20:34:33,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 391446528. Throughput: 0: 43432.1. Samples: 391497680. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:34:33,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:34:36,815][46990] Updated weights for policy 0, policy_version 23900 (0.0037) +[2024-06-10 20:34:38,239][46753] Fps is (10 sec: 49151.9, 60 sec: 43417.7, 300 sec: 43653.6). Total num frames: 391659520. Throughput: 0: 43769.9. Samples: 391775840. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:34:38,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:34:39,286][46990] Updated weights for policy 0, policy_version 23910 (0.0035) +[2024-06-10 20:34:43,239][46753] Fps is (10 sec: 37683.2, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 391823360. Throughput: 0: 43347.5. Samples: 392036320. Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:34:43,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:34:44,325][46990] Updated weights for policy 0, policy_version 23920 (0.0042) +[2024-06-10 20:34:47,022][46990] Updated weights for policy 0, policy_version 23930 (0.0029) +[2024-06-10 20:34:48,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 392101888. Throughput: 0: 43458.1. Samples: 392151980. 
Policy #0 lag: (min: 0.0, avg: 11.8, max: 22.0) +[2024-06-10 20:34:48,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:34:52,129][46990] Updated weights for policy 0, policy_version 23940 (0.0031) +[2024-06-10 20:34:53,239][46753] Fps is (10 sec: 50790.3, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 392331264. Throughput: 0: 43591.6. Samples: 392428180. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:34:53,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:34:54,234][46990] Updated weights for policy 0, policy_version 23950 (0.0038) +[2024-06-10 20:34:58,239][46753] Fps is (10 sec: 37683.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 392478720. Throughput: 0: 43531.1. Samples: 392693740. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:34:58,240][46753] Avg episode reward: [(0, '0.259')] +[2024-06-10 20:34:59,305][46990] Updated weights for policy 0, policy_version 23960 (0.0031) +[2024-06-10 20:35:02,189][46990] Updated weights for policy 0, policy_version 23970 (0.0031) +[2024-06-10 20:35:03,244][46753] Fps is (10 sec: 42579.3, 60 sec: 43414.7, 300 sec: 43541.9). Total num frames: 392757248. Throughput: 0: 43759.6. Samples: 392814180. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:35:03,244][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:35:06,114][46970] Signal inference workers to stop experience collection... (5700 times) +[2024-06-10 20:35:06,119][46970] Signal inference workers to resume experience collection... (5700 times) +[2024-06-10 20:35:06,146][46990] InferenceWorker_p0-w0: stopping experience collection (5700 times) +[2024-06-10 20:35:06,147][46990] InferenceWorker_p0-w0: resuming experience collection (5700 times) +[2024-06-10 20:35:06,874][46990] Updated weights for policy 0, policy_version 23980 (0.0029) +[2024-06-10 20:35:08,239][46753] Fps is (10 sec: 50790.2, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 392986624. Throughput: 0: 43751.7. Samples: 393083820. Policy #0 lag: (min: 0.0, avg: 6.6, max: 20.0) +[2024-06-10 20:35:08,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:35:09,483][46990] Updated weights for policy 0, policy_version 23990 (0.0031) +[2024-06-10 20:35:13,240][46753] Fps is (10 sec: 37699.5, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 393134080. Throughput: 0: 43523.8. Samples: 393344880. Policy #0 lag: (min: 0.0, avg: 6.6, max: 20.0) +[2024-06-10 20:35:13,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:35:14,481][46990] Updated weights for policy 0, policy_version 24000 (0.0030) +[2024-06-10 20:35:17,037][46990] Updated weights for policy 0, policy_version 24010 (0.0036) +[2024-06-10 20:35:18,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43144.4, 300 sec: 43542.6). Total num frames: 393396224. Throughput: 0: 43697.2. Samples: 393464060. Policy #0 lag: (min: 0.0, avg: 6.6, max: 20.0) +[2024-06-10 20:35:18,240][46753] Avg episode reward: [(0, '0.263')] +[2024-06-10 20:35:22,360][46990] Updated weights for policy 0, policy_version 24020 (0.0036) +[2024-06-10 20:35:23,239][46753] Fps is (10 sec: 49153.2, 60 sec: 43417.8, 300 sec: 43875.8). Total num frames: 393625600. Throughput: 0: 43524.9. Samples: 393734460. 
Policy #0 lag: (min: 1.0, avg: 8.5, max: 22.0) +[2024-06-10 20:35:23,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:35:24,527][46990] Updated weights for policy 0, policy_version 24030 (0.0031) +[2024-06-10 20:35:28,239][46753] Fps is (10 sec: 39322.2, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 393789440. Throughput: 0: 43562.3. Samples: 393996620. Policy #0 lag: (min: 1.0, avg: 8.5, max: 22.0) +[2024-06-10 20:35:28,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:35:29,741][46990] Updated weights for policy 0, policy_version 24040 (0.0041) +[2024-06-10 20:35:32,312][46990] Updated weights for policy 0, policy_version 24050 (0.0027) +[2024-06-10 20:35:33,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 394067968. Throughput: 0: 43724.2. Samples: 394119560. Policy #0 lag: (min: 1.0, avg: 8.5, max: 22.0) +[2024-06-10 20:35:33,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:35:36,909][46990] Updated weights for policy 0, policy_version 24060 (0.0030) +[2024-06-10 20:35:38,239][46753] Fps is (10 sec: 50790.4, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 394297344. Throughput: 0: 43712.1. Samples: 394395220. Policy #0 lag: (min: 1.0, avg: 11.6, max: 25.0) +[2024-06-10 20:35:38,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:35:39,510][46990] Updated weights for policy 0, policy_version 24070 (0.0037) +[2024-06-10 20:35:43,244][46753] Fps is (10 sec: 37666.1, 60 sec: 43687.4, 300 sec: 43597.5). Total num frames: 394444800. Throughput: 0: 43538.3. Samples: 394653160. Policy #0 lag: (min: 1.0, avg: 11.6, max: 25.0) +[2024-06-10 20:35:43,245][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:35:44,503][46990] Updated weights for policy 0, policy_version 24080 (0.0040) +[2024-06-10 20:35:47,168][46990] Updated weights for policy 0, policy_version 24090 (0.0040) +[2024-06-10 20:35:48,240][46753] Fps is (10 sec: 40959.3, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 394706944. Throughput: 0: 43642.0. Samples: 394777880. Policy #0 lag: (min: 1.0, avg: 11.6, max: 25.0) +[2024-06-10 20:35:48,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:35:52,230][46990] Updated weights for policy 0, policy_version 24100 (0.0033) +[2024-06-10 20:35:53,239][46753] Fps is (10 sec: 50813.4, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 394952704. Throughput: 0: 43664.9. Samples: 395048740. Policy #0 lag: (min: 1.0, avg: 11.6, max: 25.0) +[2024-06-10 20:35:53,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:35:54,613][46990] Updated weights for policy 0, policy_version 24110 (0.0041) +[2024-06-10 20:35:58,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43963.7, 300 sec: 43654.3). Total num frames: 395116544. Throughput: 0: 43618.8. Samples: 395307720. Policy #0 lag: (min: 0.0, avg: 12.5, max: 23.0) +[2024-06-10 20:35:58,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:35:59,064][46970] Signal inference workers to stop experience collection... (5750 times) +[2024-06-10 20:35:59,064][46970] Signal inference workers to resume experience collection... 
(5750 times) +[2024-06-10 20:35:59,081][46990] InferenceWorker_p0-w0: stopping experience collection (5750 times) +[2024-06-10 20:35:59,109][46990] InferenceWorker_p0-w0: resuming experience collection (5750 times) +[2024-06-10 20:35:59,510][46990] Updated weights for policy 0, policy_version 24120 (0.0028) +[2024-06-10 20:36:02,102][46990] Updated weights for policy 0, policy_version 24130 (0.0035) +[2024-06-10 20:36:03,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43967.0, 300 sec: 43709.2). Total num frames: 395395072. Throughput: 0: 43820.5. Samples: 395435980. Policy #0 lag: (min: 0.0, avg: 12.5, max: 23.0) +[2024-06-10 20:36:03,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:36:06,815][46990] Updated weights for policy 0, policy_version 24140 (0.0033) +[2024-06-10 20:36:08,239][46753] Fps is (10 sec: 47514.1, 60 sec: 43417.6, 300 sec: 43875.8). Total num frames: 395591680. Throughput: 0: 43746.6. Samples: 395703060. Policy #0 lag: (min: 0.0, avg: 12.5, max: 23.0) +[2024-06-10 20:36:08,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:36:09,830][46990] Updated weights for policy 0, policy_version 24150 (0.0045) +[2024-06-10 20:36:13,239][46753] Fps is (10 sec: 37683.5, 60 sec: 43963.9, 300 sec: 43598.1). Total num frames: 395771904. Throughput: 0: 43679.1. Samples: 395962180. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 20:36:13,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:36:14,399][46990] Updated weights for policy 0, policy_version 24160 (0.0033) +[2024-06-10 20:36:17,527][46990] Updated weights for policy 0, policy_version 24170 (0.0035) +[2024-06-10 20:36:18,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 396034048. Throughput: 0: 43709.2. Samples: 396086480. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 20:36:18,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:36:22,132][46990] Updated weights for policy 0, policy_version 24180 (0.0042) +[2024-06-10 20:36:23,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43417.6, 300 sec: 43820.9). Total num frames: 396230656. Throughput: 0: 43580.0. Samples: 396356320. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 20:36:23,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:36:23,368][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000024185_396247040.pth... +[2024-06-10 20:36:23,441][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000023546_385777664.pth +[2024-06-10 20:36:24,823][46990] Updated weights for policy 0, policy_version 24190 (0.0029) +[2024-06-10 20:36:28,239][46753] Fps is (10 sec: 37683.1, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 396410880. Throughput: 0: 43608.3. Samples: 396615340. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 20:36:28,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:36:29,499][46990] Updated weights for policy 0, policy_version 24200 (0.0027) +[2024-06-10 20:36:32,281][46990] Updated weights for policy 0, policy_version 24210 (0.0029) +[2024-06-10 20:36:33,240][46753] Fps is (10 sec: 47513.0, 60 sec: 43963.6, 300 sec: 43709.4). Total num frames: 396705792. Throughput: 0: 43727.6. Samples: 396745620. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 20.0) +[2024-06-10 20:36:33,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:36:37,050][46990] Updated weights for policy 0, policy_version 24220 (0.0036) +[2024-06-10 20:36:38,240][46753] Fps is (10 sec: 49152.0, 60 sec: 43417.5, 300 sec: 43875.8). Total num frames: 396902400. Throughput: 0: 43740.3. Samples: 397017060. Policy #0 lag: (min: 0.0, avg: 9.9, max: 20.0) +[2024-06-10 20:36:38,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:36:39,971][46990] Updated weights for policy 0, policy_version 24230 (0.0034) +[2024-06-10 20:36:43,239][46753] Fps is (10 sec: 37683.6, 60 sec: 43967.0, 300 sec: 43542.6). Total num frames: 397082624. Throughput: 0: 43645.4. Samples: 397271760. Policy #0 lag: (min: 0.0, avg: 9.9, max: 20.0) +[2024-06-10 20:36:43,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:36:44,280][46990] Updated weights for policy 0, policy_version 24240 (0.0038) +[2024-06-10 20:36:47,430][46990] Updated weights for policy 0, policy_version 24250 (0.0039) +[2024-06-10 20:36:48,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 397328384. Throughput: 0: 43649.8. Samples: 397400220. Policy #0 lag: (min: 0.0, avg: 8.3, max: 22.0) +[2024-06-10 20:36:48,241][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:36:51,759][46990] Updated weights for policy 0, policy_version 24260 (0.0024) +[2024-06-10 20:36:53,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43144.5, 300 sec: 43820.3). Total num frames: 397541376. Throughput: 0: 43546.6. Samples: 397662660. Policy #0 lag: (min: 0.0, avg: 8.3, max: 22.0) +[2024-06-10 20:36:53,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:36:54,910][46990] Updated weights for policy 0, policy_version 24270 (0.0030) +[2024-06-10 20:36:58,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43417.6, 300 sec: 43432.1). Total num frames: 397721600. Throughput: 0: 43765.3. Samples: 397931620. Policy #0 lag: (min: 0.0, avg: 8.3, max: 22.0) +[2024-06-10 20:36:58,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:36:59,361][46990] Updated weights for policy 0, policy_version 24280 (0.0027) +[2024-06-10 20:37:02,533][46990] Updated weights for policy 0, policy_version 24290 (0.0034) +[2024-06-10 20:37:03,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 398016512. Throughput: 0: 43837.9. Samples: 398059180. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 20:37:03,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:37:06,748][46970] Signal inference workers to stop experience collection... (5800 times) +[2024-06-10 20:37:06,748][46970] Signal inference workers to resume experience collection... (5800 times) +[2024-06-10 20:37:06,787][46990] InferenceWorker_p0-w0: stopping experience collection (5800 times) +[2024-06-10 20:37:06,787][46990] InferenceWorker_p0-w0: resuming experience collection (5800 times) +[2024-06-10 20:37:06,882][46990] Updated weights for policy 0, policy_version 24300 (0.0029) +[2024-06-10 20:37:08,239][46753] Fps is (10 sec: 49152.3, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 398213120. Throughput: 0: 43733.8. Samples: 398324340. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 20:37:08,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:37:09,867][46990] Updated weights for policy 0, policy_version 24310 (0.0032) +[2024-06-10 20:37:13,239][46753] Fps is (10 sec: 39321.2, 60 sec: 43963.7, 300 sec: 43542.6). 
Total num frames: 398409728. Throughput: 0: 43716.5. Samples: 398582580. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 20:37:13,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:37:14,261][46990] Updated weights for policy 0, policy_version 24320 (0.0041) +[2024-06-10 20:37:17,300][46990] Updated weights for policy 0, policy_version 24330 (0.0025) +[2024-06-10 20:37:18,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 398639104. Throughput: 0: 43668.4. Samples: 398710700. Policy #0 lag: (min: 1.0, avg: 11.5, max: 23.0) +[2024-06-10 20:37:18,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:37:21,745][46990] Updated weights for policy 0, policy_version 24340 (0.0030) +[2024-06-10 20:37:23,240][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 398852096. Throughput: 0: 43560.0. Samples: 398977260. Policy #0 lag: (min: 1.0, avg: 11.5, max: 23.0) +[2024-06-10 20:37:23,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:37:24,930][46990] Updated weights for policy 0, policy_version 24350 (0.0035) +[2024-06-10 20:37:28,239][46753] Fps is (10 sec: 44236.9, 60 sec: 44509.9, 300 sec: 43598.1). Total num frames: 399081472. Throughput: 0: 43748.8. Samples: 399240460. Policy #0 lag: (min: 1.0, avg: 11.5, max: 23.0) +[2024-06-10 20:37:28,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:37:29,075][46990] Updated weights for policy 0, policy_version 24360 (0.0035) +[2024-06-10 20:37:32,259][46990] Updated weights for policy 0, policy_version 24370 (0.0038) +[2024-06-10 20:37:33,244][46753] Fps is (10 sec: 47492.6, 60 sec: 43687.4, 300 sec: 43708.5). Total num frames: 399327232. Throughput: 0: 43908.1. Samples: 399376280. Policy #0 lag: (min: 1.0, avg: 11.5, max: 23.0) +[2024-06-10 20:37:33,245][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:37:36,835][46990] Updated weights for policy 0, policy_version 24380 (0.0037) +[2024-06-10 20:37:38,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.7, 300 sec: 43820.3). Total num frames: 399507456. Throughput: 0: 43832.5. Samples: 399635120. Policy #0 lag: (min: 0.0, avg: 11.9, max: 22.0) +[2024-06-10 20:37:38,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:37:39,837][46990] Updated weights for policy 0, policy_version 24390 (0.0039) +[2024-06-10 20:37:43,239][46753] Fps is (10 sec: 40978.3, 60 sec: 44236.7, 300 sec: 43598.1). Total num frames: 399736832. Throughput: 0: 43374.6. Samples: 399883480. Policy #0 lag: (min: 0.0, avg: 11.9, max: 22.0) +[2024-06-10 20:37:43,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:37:44,125][46990] Updated weights for policy 0, policy_version 24400 (0.0034) +[2024-06-10 20:37:47,817][46990] Updated weights for policy 0, policy_version 24410 (0.0039) +[2024-06-10 20:37:48,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 399949824. Throughput: 0: 43466.5. Samples: 400015180. Policy #0 lag: (min: 0.0, avg: 11.9, max: 22.0) +[2024-06-10 20:37:48,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:37:51,716][46990] Updated weights for policy 0, policy_version 24420 (0.0037) +[2024-06-10 20:37:53,244][46753] Fps is (10 sec: 40941.9, 60 sec: 43414.4, 300 sec: 43764.0). Total num frames: 400146432. Throughput: 0: 43433.4. Samples: 400279040. 
Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 20:37:53,244][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:37:55,226][46990] Updated weights for policy 0, policy_version 24430 (0.0033) +[2024-06-10 20:37:58,240][46753] Fps is (10 sec: 44234.9, 60 sec: 44509.5, 300 sec: 43598.0). Total num frames: 400392192. Throughput: 0: 43472.0. Samples: 400538840. Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 20:37:58,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:37:59,388][46990] Updated weights for policy 0, policy_version 24440 (0.0029) +[2024-06-10 20:38:02,382][46990] Updated weights for policy 0, policy_version 24450 (0.0023) +[2024-06-10 20:38:03,240][46753] Fps is (10 sec: 45894.8, 60 sec: 43144.4, 300 sec: 43653.6). Total num frames: 400605184. Throughput: 0: 43744.3. Samples: 400679200. Policy #0 lag: (min: 0.0, avg: 10.5, max: 20.0) +[2024-06-10 20:38:03,249][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:38:06,881][46990] Updated weights for policy 0, policy_version 24460 (0.0040) +[2024-06-10 20:38:08,239][46753] Fps is (10 sec: 42600.5, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 400818176. Throughput: 0: 43541.9. Samples: 400936640. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:38:08,241][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:38:10,011][46990] Updated weights for policy 0, policy_version 24470 (0.0036) +[2024-06-10 20:38:13,239][46753] Fps is (10 sec: 44237.8, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 401047552. Throughput: 0: 43481.4. Samples: 401197120. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:38:13,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:38:14,000][46990] Updated weights for policy 0, policy_version 24480 (0.0034) +[2024-06-10 20:38:17,811][46990] Updated weights for policy 0, policy_version 24490 (0.0030) +[2024-06-10 20:38:18,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 401260544. Throughput: 0: 43483.4. Samples: 401332840. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:38:18,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:38:21,802][46990] Updated weights for policy 0, policy_version 24500 (0.0032) +[2024-06-10 20:38:23,244][46753] Fps is (10 sec: 42579.1, 60 sec: 43687.5, 300 sec: 43819.6). Total num frames: 401473536. Throughput: 0: 43547.6. Samples: 401594960. Policy #0 lag: (min: 1.0, avg: 9.1, max: 22.0) +[2024-06-10 20:38:23,244][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:38:23,274][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000024504_401473536.pth... +[2024-06-10 20:38:23,336][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000023866_391020544.pth +[2024-06-10 20:38:24,297][46970] Signal inference workers to stop experience collection... (5850 times) +[2024-06-10 20:38:24,302][46970] Signal inference workers to resume experience collection... (5850 times) +[2024-06-10 20:38:24,340][46990] InferenceWorker_p0-w0: stopping experience collection (5850 times) +[2024-06-10 20:38:24,340][46990] InferenceWorker_p0-w0: resuming experience collection (5850 times) +[2024-06-10 20:38:25,022][46990] Updated weights for policy 0, policy_version 24510 (0.0039) +[2024-06-10 20:38:28,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 401702912. Throughput: 0: 43905.4. Samples: 401859220. 
Policy #0 lag: (min: 1.0, avg: 9.1, max: 22.0) +[2024-06-10 20:38:28,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:38:29,461][46990] Updated weights for policy 0, policy_version 24520 (0.0030) +[2024-06-10 20:38:32,493][46990] Updated weights for policy 0, policy_version 24530 (0.0049) +[2024-06-10 20:38:33,240][46753] Fps is (10 sec: 45895.4, 60 sec: 43420.8, 300 sec: 43653.6). Total num frames: 401932288. Throughput: 0: 44017.3. Samples: 401995960. Policy #0 lag: (min: 1.0, avg: 9.1, max: 22.0) +[2024-06-10 20:38:33,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:38:36,633][46990] Updated weights for policy 0, policy_version 24540 (0.0040) +[2024-06-10 20:38:38,239][46753] Fps is (10 sec: 40959.6, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 402112512. Throughput: 0: 44017.6. Samples: 402259640. Policy #0 lag: (min: 1.0, avg: 9.1, max: 22.0) +[2024-06-10 20:38:38,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:38:39,922][46990] Updated weights for policy 0, policy_version 24550 (0.0038) +[2024-06-10 20:38:43,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 402358272. Throughput: 0: 43831.5. Samples: 402511240. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 20:38:43,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:38:44,525][46990] Updated weights for policy 0, policy_version 24560 (0.0033) +[2024-06-10 20:38:47,689][46990] Updated weights for policy 0, policy_version 24570 (0.0053) +[2024-06-10 20:38:48,240][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.6, 300 sec: 43542.5). Total num frames: 402571264. Throughput: 0: 43619.6. Samples: 402642080. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 20:38:48,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:38:52,218][46990] Updated weights for policy 0, policy_version 24580 (0.0045) +[2024-06-10 20:38:53,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43693.9, 300 sec: 43764.7). Total num frames: 402767872. Throughput: 0: 43875.9. Samples: 402911060. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 20:38:53,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:38:54,859][46990] Updated weights for policy 0, policy_version 24590 (0.0030) +[2024-06-10 20:38:58,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43691.0, 300 sec: 43598.2). Total num frames: 403013632. Throughput: 0: 43739.0. Samples: 403165380. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:38:58,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:38:59,895][46990] Updated weights for policy 0, policy_version 24600 (0.0033) +[2024-06-10 20:39:02,479][46990] Updated weights for policy 0, policy_version 24610 (0.0040) +[2024-06-10 20:39:03,240][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 403226624. Throughput: 0: 43741.8. Samples: 403301220. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:39:03,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:39:07,262][46990] Updated weights for policy 0, policy_version 24620 (0.0033) +[2024-06-10 20:39:08,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43144.6, 300 sec: 43709.2). Total num frames: 403406848. Throughput: 0: 43741.3. Samples: 403563120. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:39:08,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:39:09,893][46990] Updated weights for policy 0, policy_version 24630 (0.0045) +[2024-06-10 20:39:13,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 403668992. Throughput: 0: 43364.4. Samples: 403810620. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:39:13,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:39:14,962][46990] Updated weights for policy 0, policy_version 24640 (0.0041) +[2024-06-10 20:39:17,573][46990] Updated weights for policy 0, policy_version 24650 (0.0036) +[2024-06-10 20:39:18,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 403881984. Throughput: 0: 43406.8. Samples: 403949260. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:39:18,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:39:22,357][46990] Updated weights for policy 0, policy_version 24660 (0.0037) +[2024-06-10 20:39:23,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43420.8, 300 sec: 43764.7). Total num frames: 404078592. Throughput: 0: 43526.1. Samples: 404218320. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:39:23,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:39:24,879][46990] Updated weights for policy 0, policy_version 24670 (0.0042) +[2024-06-10 20:39:28,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 404324352. Throughput: 0: 43573.0. Samples: 404472020. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:39:28,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:39:29,996][46990] Updated weights for policy 0, policy_version 24680 (0.0043) +[2024-06-10 20:39:32,546][46990] Updated weights for policy 0, policy_version 24690 (0.0033) +[2024-06-10 20:39:33,240][46753] Fps is (10 sec: 45875.5, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 404537344. Throughput: 0: 43695.6. Samples: 404608380. Policy #0 lag: (min: 0.0, avg: 8.2, max: 21.0) +[2024-06-10 20:39:33,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:39:37,446][46990] Updated weights for policy 0, policy_version 24700 (0.0042) +[2024-06-10 20:39:38,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 404717568. Throughput: 0: 43661.3. Samples: 404875820. Policy #0 lag: (min: 0.0, avg: 8.2, max: 21.0) +[2024-06-10 20:39:38,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:39:39,906][46990] Updated weights for policy 0, policy_version 24710 (0.0032) +[2024-06-10 20:39:42,620][46970] Signal inference workers to stop experience collection... (5900 times) +[2024-06-10 20:39:42,621][46970] Signal inference workers to resume experience collection... (5900 times) +[2024-06-10 20:39:42,668][46990] InferenceWorker_p0-w0: stopping experience collection (5900 times) +[2024-06-10 20:39:42,668][46990] InferenceWorker_p0-w0: resuming experience collection (5900 times) +[2024-06-10 20:39:43,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 404979712. Throughput: 0: 43510.2. Samples: 405123340. 
Policy #0 lag: (min: 0.0, avg: 8.2, max: 21.0) +[2024-06-10 20:39:43,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:39:45,272][46990] Updated weights for policy 0, policy_version 24720 (0.0043) +[2024-06-10 20:39:47,570][46990] Updated weights for policy 0, policy_version 24730 (0.0032) +[2024-06-10 20:39:48,239][46753] Fps is (10 sec: 47514.1, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 405192704. Throughput: 0: 43549.1. Samples: 405260920. Policy #0 lag: (min: 0.0, avg: 8.5, max: 23.0) +[2024-06-10 20:39:48,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:39:52,405][46990] Updated weights for policy 0, policy_version 24740 (0.0027) +[2024-06-10 20:39:53,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 405372928. Throughput: 0: 43640.0. Samples: 405526920. Policy #0 lag: (min: 0.0, avg: 8.5, max: 23.0) +[2024-06-10 20:39:53,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:39:54,860][46990] Updated weights for policy 0, policy_version 24750 (0.0039) +[2024-06-10 20:39:58,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43417.6, 300 sec: 43598.8). Total num frames: 405618688. Throughput: 0: 43860.5. Samples: 405784340. Policy #0 lag: (min: 0.0, avg: 8.5, max: 23.0) +[2024-06-10 20:39:58,240][46753] Avg episode reward: [(0, '0.262')] +[2024-06-10 20:39:59,592][46990] Updated weights for policy 0, policy_version 24760 (0.0033) +[2024-06-10 20:40:02,620][46990] Updated weights for policy 0, policy_version 24770 (0.0033) +[2024-06-10 20:40:03,239][46753] Fps is (10 sec: 49151.5, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 405864448. Throughput: 0: 43890.2. Samples: 405924320. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 20:40:03,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:40:07,311][46990] Updated weights for policy 0, policy_version 24780 (0.0036) +[2024-06-10 20:40:08,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 406028288. Throughput: 0: 43738.8. Samples: 406186560. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 20:40:08,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:40:09,832][46990] Updated weights for policy 0, policy_version 24790 (0.0041) +[2024-06-10 20:40:13,244][46753] Fps is (10 sec: 40941.9, 60 sec: 43414.4, 300 sec: 43653.0). Total num frames: 406274048. Throughput: 0: 43888.1. Samples: 406447180. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 20:40:13,245][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:40:14,814][46990] Updated weights for policy 0, policy_version 24800 (0.0026) +[2024-06-10 20:40:17,146][46990] Updated weights for policy 0, policy_version 24810 (0.0030) +[2024-06-10 20:40:18,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 406519808. Throughput: 0: 43824.9. Samples: 406580500. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 20:40:18,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:40:21,969][46990] Updated weights for policy 0, policy_version 24820 (0.0030) +[2024-06-10 20:40:23,240][46753] Fps is (10 sec: 40977.8, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 406683648. Throughput: 0: 43753.7. Samples: 406844740. 
Policy #0 lag: (min: 0.0, avg: 12.4, max: 22.0) +[2024-06-10 20:40:23,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:40:23,303][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000024823_406700032.pth... +[2024-06-10 20:40:23,369][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000024185_396247040.pth +[2024-06-10 20:40:24,720][46990] Updated weights for policy 0, policy_version 24830 (0.0027) +[2024-06-10 20:40:28,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 406929408. Throughput: 0: 44085.4. Samples: 407107180. Policy #0 lag: (min: 0.0, avg: 12.4, max: 22.0) +[2024-06-10 20:40:28,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:40:29,809][46990] Updated weights for policy 0, policy_version 24840 (0.0044) +[2024-06-10 20:40:32,224][46990] Updated weights for policy 0, policy_version 24850 (0.0055) +[2024-06-10 20:40:33,240][46753] Fps is (10 sec: 50790.5, 60 sec: 44236.8, 300 sec: 43709.2). Total num frames: 407191552. Throughput: 0: 43895.4. Samples: 407236220. Policy #0 lag: (min: 0.0, avg: 12.4, max: 22.0) +[2024-06-10 20:40:33,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:40:37,475][46990] Updated weights for policy 0, policy_version 24860 (0.0034) +[2024-06-10 20:40:38,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43963.7, 300 sec: 43765.4). Total num frames: 407355392. Throughput: 0: 43844.8. Samples: 407499940. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 20:40:38,240][46753] Avg episode reward: [(0, '0.266')] +[2024-06-10 20:40:39,832][46990] Updated weights for policy 0, policy_version 24870 (0.0033) +[2024-06-10 20:40:43,239][46753] Fps is (10 sec: 39322.1, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 407584768. Throughput: 0: 43841.8. Samples: 407757220. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 20:40:43,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:40:44,723][46990] Updated weights for policy 0, policy_version 24880 (0.0042) +[2024-06-10 20:40:47,372][46990] Updated weights for policy 0, policy_version 24890 (0.0022) +[2024-06-10 20:40:48,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 407814144. Throughput: 0: 43658.8. Samples: 407888960. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 20:40:48,240][46753] Avg episode reward: [(0, '0.260')] +[2024-06-10 20:40:51,897][46990] Updated weights for policy 0, policy_version 24900 (0.0047) +[2024-06-10 20:40:53,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 408010752. Throughput: 0: 43664.8. Samples: 408151480. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:40:53,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:40:53,439][46970] Signal inference workers to stop experience collection... (5950 times) +[2024-06-10 20:40:53,439][46970] Signal inference workers to resume experience collection... (5950 times) +[2024-06-10 20:40:53,474][46990] InferenceWorker_p0-w0: stopping experience collection (5950 times) +[2024-06-10 20:40:53,474][46990] InferenceWorker_p0-w0: resuming experience collection (5950 times) +[2024-06-10 20:40:54,895][46990] Updated weights for policy 0, policy_version 24910 (0.0031) +[2024-06-10 20:40:58,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 408240128. Throughput: 0: 43789.7. Samples: 408417520. 
Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:40:58,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:40:59,488][46990] Updated weights for policy 0, policy_version 24920 (0.0034) +[2024-06-10 20:41:02,041][46990] Updated weights for policy 0, policy_version 24930 (0.0044) +[2024-06-10 20:41:03,239][46753] Fps is (10 sec: 49152.7, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 408502272. Throughput: 0: 43708.1. Samples: 408547360. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:41:03,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:41:07,052][46990] Updated weights for policy 0, policy_version 24940 (0.0037) +[2024-06-10 20:41:08,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 408666112. Throughput: 0: 43800.1. Samples: 408815740. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:41:08,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:41:09,798][46990] Updated weights for policy 0, policy_version 24950 (0.0034) +[2024-06-10 20:41:13,240][46753] Fps is (10 sec: 39321.1, 60 sec: 43693.8, 300 sec: 43598.1). Total num frames: 408895488. Throughput: 0: 43768.7. Samples: 409076780. Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 20:41:13,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:41:14,344][46990] Updated weights for policy 0, policy_version 24960 (0.0039) +[2024-06-10 20:41:17,336][46990] Updated weights for policy 0, policy_version 24970 (0.0044) +[2024-06-10 20:41:18,240][46753] Fps is (10 sec: 49151.3, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 409157632. Throughput: 0: 43777.8. Samples: 409206220. Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 20:41:18,240][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:41:21,647][46990] Updated weights for policy 0, policy_version 24980 (0.0031) +[2024-06-10 20:41:23,239][46753] Fps is (10 sec: 44237.5, 60 sec: 44236.9, 300 sec: 43820.3). Total num frames: 409337856. Throughput: 0: 43698.8. Samples: 409466380. Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 20:41:23,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:41:24,874][46990] Updated weights for policy 0, policy_version 24990 (0.0026) +[2024-06-10 20:41:28,239][46753] Fps is (10 sec: 39322.3, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 409550848. Throughput: 0: 43834.7. Samples: 409729780. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 20:41:28,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:41:29,423][46990] Updated weights for policy 0, policy_version 25000 (0.0026) +[2024-06-10 20:41:32,319][46990] Updated weights for policy 0, policy_version 25010 (0.0036) +[2024-06-10 20:41:33,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 409796608. Throughput: 0: 43634.1. Samples: 409852500. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 20:41:33,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:41:36,975][46990] Updated weights for policy 0, policy_version 25020 (0.0040) +[2024-06-10 20:41:38,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 409976832. Throughput: 0: 43616.1. Samples: 410114200. 
Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 20:41:38,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:41:40,293][46990] Updated weights for policy 0, policy_version 25030 (0.0028) +[2024-06-10 20:41:43,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 410189824. Throughput: 0: 43431.5. Samples: 410371940. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 20:41:43,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:41:44,360][46990] Updated weights for policy 0, policy_version 25040 (0.0048) +[2024-06-10 20:41:47,616][46990] Updated weights for policy 0, policy_version 25050 (0.0024) +[2024-06-10 20:41:48,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 410435584. Throughput: 0: 43421.8. Samples: 410501340. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:41:48,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:41:51,994][46990] Updated weights for policy 0, policy_version 25060 (0.0028) +[2024-06-10 20:41:53,240][46753] Fps is (10 sec: 44236.2, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 410632192. Throughput: 0: 43421.2. Samples: 410769700. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:41:53,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:41:55,030][46990] Updated weights for policy 0, policy_version 25070 (0.0031) +[2024-06-10 20:41:58,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 410845184. Throughput: 0: 43509.9. Samples: 411034720. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:41:58,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 20:41:58,368][46970] Saving new best policy, reward=0.289! +[2024-06-10 20:41:59,477][46990] Updated weights for policy 0, policy_version 25080 (0.0023) +[2024-06-10 20:42:02,411][46990] Updated weights for policy 0, policy_version 25090 (0.0037) +[2024-06-10 20:42:03,240][46753] Fps is (10 sec: 47513.5, 60 sec: 43417.5, 300 sec: 43709.1). Total num frames: 411107328. Throughput: 0: 43521.3. Samples: 411164680. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 20:42:03,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:42:06,766][46990] Updated weights for policy 0, policy_version 25100 (0.0041) +[2024-06-10 20:42:08,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 411287552. Throughput: 0: 43361.7. Samples: 411417660. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 20:42:08,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:42:10,172][46990] Updated weights for policy 0, policy_version 25110 (0.0050) +[2024-06-10 20:42:10,912][46970] Signal inference workers to stop experience collection... (6000 times) +[2024-06-10 20:42:10,912][46970] Signal inference workers to resume experience collection... (6000 times) +[2024-06-10 20:42:10,958][46990] InferenceWorker_p0-w0: stopping experience collection (6000 times) +[2024-06-10 20:42:10,958][46990] InferenceWorker_p0-w0: resuming experience collection (6000 times) +[2024-06-10 20:42:13,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 411500544. Throughput: 0: 43430.5. Samples: 411684160. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 20:42:13,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:42:14,216][46990] Updated weights for policy 0, policy_version 25120 (0.0041) +[2024-06-10 20:42:17,661][46990] Updated weights for policy 0, policy_version 25130 (0.0044) +[2024-06-10 20:42:18,239][46753] Fps is (10 sec: 49152.4, 60 sec: 43690.8, 300 sec: 43820.3). Total num frames: 411779072. Throughput: 0: 43643.3. Samples: 411816440. Policy #0 lag: (min: 0.0, avg: 8.4, max: 22.0) +[2024-06-10 20:42:18,240][46753] Avg episode reward: [(0, '0.257')] +[2024-06-10 20:42:21,447][46990] Updated weights for policy 0, policy_version 25140 (0.0025) +[2024-06-10 20:42:23,240][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 411959296. Throughput: 0: 43755.5. Samples: 412083200. Policy #0 lag: (min: 0.0, avg: 8.4, max: 22.0) +[2024-06-10 20:42:23,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:42:23,254][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000025144_411959296.pth... +[2024-06-10 20:42:23,332][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000024504_401473536.pth +[2024-06-10 20:42:24,895][46990] Updated weights for policy 0, policy_version 25150 (0.0036) +[2024-06-10 20:42:28,240][46753] Fps is (10 sec: 37682.5, 60 sec: 43417.5, 300 sec: 43487.7). Total num frames: 412155904. Throughput: 0: 43786.6. Samples: 412342340. Policy #0 lag: (min: 0.0, avg: 8.4, max: 22.0) +[2024-06-10 20:42:28,249][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:42:29,100][46990] Updated weights for policy 0, policy_version 25160 (0.0041) +[2024-06-10 20:42:32,526][46990] Updated weights for policy 0, policy_version 25170 (0.0035) +[2024-06-10 20:42:33,240][46753] Fps is (10 sec: 47513.4, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 412434432. Throughput: 0: 43802.1. Samples: 412472440. Policy #0 lag: (min: 0.0, avg: 8.4, max: 22.0) +[2024-06-10 20:42:33,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:42:37,033][46990] Updated weights for policy 0, policy_version 25180 (0.0035) +[2024-06-10 20:42:38,240][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 412598272. Throughput: 0: 43548.0. Samples: 412729360. Policy #0 lag: (min: 0.0, avg: 12.2, max: 27.0) +[2024-06-10 20:42:38,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:42:39,987][46990] Updated weights for policy 0, policy_version 25190 (0.0034) +[2024-06-10 20:42:43,240][46753] Fps is (10 sec: 37682.7, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 412811264. Throughput: 0: 43442.4. Samples: 412989640. Policy #0 lag: (min: 0.0, avg: 12.2, max: 27.0) +[2024-06-10 20:42:43,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:42:44,216][46990] Updated weights for policy 0, policy_version 25200 (0.0030) +[2024-06-10 20:42:47,692][46990] Updated weights for policy 0, policy_version 25210 (0.0038) +[2024-06-10 20:42:48,239][46753] Fps is (10 sec: 47514.3, 60 sec: 43963.8, 300 sec: 43820.9). Total num frames: 413073408. Throughput: 0: 43478.0. Samples: 413121180. Policy #0 lag: (min: 0.0, avg: 12.2, max: 27.0) +[2024-06-10 20:42:48,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:42:51,333][46990] Updated weights for policy 0, policy_version 25220 (0.0041) +[2024-06-10 20:42:53,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 413270016. 
Throughput: 0: 43783.5. Samples: 413387920. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:42:53,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:42:55,162][46990] Updated weights for policy 0, policy_version 25230 (0.0040) +[2024-06-10 20:42:58,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 413483008. Throughput: 0: 43688.5. Samples: 413650140. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:42:58,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:42:59,182][46990] Updated weights for policy 0, policy_version 25240 (0.0036) +[2024-06-10 20:43:02,500][46990] Updated weights for policy 0, policy_version 25250 (0.0038) +[2024-06-10 20:43:03,239][46753] Fps is (10 sec: 45875.9, 60 sec: 43690.9, 300 sec: 43764.7). Total num frames: 413728768. Throughput: 0: 43690.7. Samples: 413782520. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:43:03,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:43:06,965][46990] Updated weights for policy 0, policy_version 25260 (0.0034) +[2024-06-10 20:43:08,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 413908992. Throughput: 0: 43584.5. Samples: 414044500. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 20:43:08,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:43:09,975][46990] Updated weights for policy 0, policy_version 25270 (0.0033) +[2024-06-10 20:43:13,239][46753] Fps is (10 sec: 40959.5, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 414138368. Throughput: 0: 43485.9. Samples: 414299200. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 20:43:13,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:43:14,340][46990] Updated weights for policy 0, policy_version 25280 (0.0042) +[2024-06-10 20:43:17,793][46990] Updated weights for policy 0, policy_version 25290 (0.0031) +[2024-06-10 20:43:18,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43144.4, 300 sec: 43709.8). Total num frames: 414367744. Throughput: 0: 43643.5. Samples: 414436400. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 20:43:18,241][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 20:43:22,081][46990] Updated weights for policy 0, policy_version 25300 (0.0043) +[2024-06-10 20:43:22,108][46970] Signal inference workers to stop experience collection... (6050 times) +[2024-06-10 20:43:22,108][46970] Signal inference workers to resume experience collection... (6050 times) +[2024-06-10 20:43:22,140][46990] InferenceWorker_p0-w0: stopping experience collection (6050 times) +[2024-06-10 20:43:22,140][46990] InferenceWorker_p0-w0: resuming experience collection (6050 times) +[2024-06-10 20:43:23,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 414580736. Throughput: 0: 43848.1. Samples: 414702520. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 20:43:23,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:43:25,166][46990] Updated weights for policy 0, policy_version 25310 (0.0034) +[2024-06-10 20:43:28,239][46753] Fps is (10 sec: 44237.4, 60 sec: 44236.9, 300 sec: 43653.7). Total num frames: 414810112. Throughput: 0: 43594.5. Samples: 414951380. 
Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 20:43:28,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:43:29,322][46990] Updated weights for policy 0, policy_version 25320 (0.0039) +[2024-06-10 20:43:32,634][46990] Updated weights for policy 0, policy_version 25330 (0.0037) +[2024-06-10 20:43:33,239][46753] Fps is (10 sec: 42598.5, 60 sec: 42871.6, 300 sec: 43709.2). Total num frames: 415006720. Throughput: 0: 43728.5. Samples: 415088960. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 20:43:33,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:43:36,793][46990] Updated weights for policy 0, policy_version 25340 (0.0038) +[2024-06-10 20:43:38,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 415219712. Throughput: 0: 43374.7. Samples: 415339780. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 20:43:38,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:43:40,434][46990] Updated weights for policy 0, policy_version 25350 (0.0041) +[2024-06-10 20:43:43,240][46753] Fps is (10 sec: 44235.8, 60 sec: 43963.8, 300 sec: 43653.6). Total num frames: 415449088. Throughput: 0: 43316.7. Samples: 415599400. Policy #0 lag: (min: 1.0, avg: 9.9, max: 22.0) +[2024-06-10 20:43:43,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:43:44,476][46990] Updated weights for policy 0, policy_version 25360 (0.0035) +[2024-06-10 20:43:48,127][46990] Updated weights for policy 0, policy_version 25370 (0.0029) +[2024-06-10 20:43:48,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43144.4, 300 sec: 43709.2). Total num frames: 415662080. Throughput: 0: 43346.1. Samples: 415733100. Policy #0 lag: (min: 1.0, avg: 9.9, max: 22.0) +[2024-06-10 20:43:48,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:43:51,911][46990] Updated weights for policy 0, policy_version 25380 (0.0049) +[2024-06-10 20:43:53,244][46753] Fps is (10 sec: 44217.7, 60 sec: 43687.4, 300 sec: 43653.0). Total num frames: 415891456. Throughput: 0: 43451.2. Samples: 416000000. Policy #0 lag: (min: 1.0, avg: 9.9, max: 22.0) +[2024-06-10 20:43:53,244][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:43:55,543][46990] Updated weights for policy 0, policy_version 25390 (0.0030) +[2024-06-10 20:43:58,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 416120832. Throughput: 0: 43615.6. Samples: 416261900. Policy #0 lag: (min: 1.0, avg: 9.9, max: 22.0) +[2024-06-10 20:43:58,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:43:59,625][46990] Updated weights for policy 0, policy_version 25400 (0.0027) +[2024-06-10 20:44:02,891][46990] Updated weights for policy 0, policy_version 25410 (0.0032) +[2024-06-10 20:44:03,244][46753] Fps is (10 sec: 44236.7, 60 sec: 43414.3, 300 sec: 43819.6). Total num frames: 416333824. Throughput: 0: 43442.0. Samples: 416391480. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:44:03,245][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:44:07,395][46990] Updated weights for policy 0, policy_version 25420 (0.0047) +[2024-06-10 20:44:08,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 416530432. Throughput: 0: 43359.9. Samples: 416653720. 
Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:44:08,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:44:10,511][46990] Updated weights for policy 0, policy_version 25430 (0.0046) +[2024-06-10 20:44:13,240][46753] Fps is (10 sec: 44256.0, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 416776192. Throughput: 0: 43505.2. Samples: 416909120. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:44:13,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:44:14,538][46990] Updated weights for policy 0, policy_version 25440 (0.0028) +[2024-06-10 20:44:18,173][46990] Updated weights for policy 0, policy_version 25450 (0.0033) +[2024-06-10 20:44:18,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 416972800. Throughput: 0: 43642.1. Samples: 417052860. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 20:44:18,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:44:21,654][46990] Updated weights for policy 0, policy_version 25460 (0.0037) +[2024-06-10 20:44:23,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 417185792. Throughput: 0: 43946.6. Samples: 417317380. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 20:44:23,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:44:23,250][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000025464_417202176.pth... +[2024-06-10 20:44:23,306][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000024823_406700032.pth +[2024-06-10 20:44:25,440][46990] Updated weights for policy 0, policy_version 25470 (0.0029) +[2024-06-10 20:44:28,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 417431552. Throughput: 0: 43940.1. Samples: 417576700. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 20:44:28,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:44:29,307][46990] Updated weights for policy 0, policy_version 25480 (0.0027) +[2024-06-10 20:44:32,973][46990] Updated weights for policy 0, policy_version 25490 (0.0034) +[2024-06-10 20:44:33,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 417628160. Throughput: 0: 43948.9. Samples: 417710800. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 20:44:33,244][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:44:37,051][46990] Updated weights for policy 0, policy_version 25500 (0.0025) +[2024-06-10 20:44:38,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 417857536. Throughput: 0: 43920.9. Samples: 417976240. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 20:44:38,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:44:40,485][46970] Signal inference workers to stop experience collection... (6100 times) +[2024-06-10 20:44:40,485][46970] Signal inference workers to resume experience collection... (6100 times) +[2024-06-10 20:44:40,521][46990] InferenceWorker_p0-w0: stopping experience collection (6100 times) +[2024-06-10 20:44:40,521][46990] InferenceWorker_p0-w0: resuming experience collection (6100 times) +[2024-06-10 20:44:40,624][46990] Updated weights for policy 0, policy_version 25510 (0.0024) +[2024-06-10 20:44:43,239][46753] Fps is (10 sec: 47513.7, 60 sec: 44236.9, 300 sec: 43764.7). Total num frames: 418103296. Throughput: 0: 43699.1. Samples: 418228360. 
Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 20:44:43,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:44:44,283][46990] Updated weights for policy 0, policy_version 25520 (0.0039) +[2024-06-10 20:44:48,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 418267136. Throughput: 0: 43878.7. Samples: 418365820. Policy #0 lag: (min: 0.0, avg: 11.7, max: 23.0) +[2024-06-10 20:44:48,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:44:48,345][46990] Updated weights for policy 0, policy_version 25530 (0.0039) +[2024-06-10 20:44:51,365][46990] Updated weights for policy 0, policy_version 25540 (0.0035) +[2024-06-10 20:44:53,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43693.9, 300 sec: 43709.2). Total num frames: 418512896. Throughput: 0: 43989.3. Samples: 418633240. Policy #0 lag: (min: 0.0, avg: 12.9, max: 24.0) +[2024-06-10 20:44:53,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:44:55,495][46990] Updated weights for policy 0, policy_version 25550 (0.0034) +[2024-06-10 20:44:58,240][46753] Fps is (10 sec: 49151.3, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 418758656. Throughput: 0: 44049.8. Samples: 418891360. Policy #0 lag: (min: 0.0, avg: 12.9, max: 24.0) +[2024-06-10 20:44:58,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:44:59,037][46990] Updated weights for policy 0, policy_version 25560 (0.0028) +[2024-06-10 20:45:02,832][46990] Updated weights for policy 0, policy_version 25570 (0.0032) +[2024-06-10 20:45:03,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43693.9, 300 sec: 43820.3). Total num frames: 418955264. Throughput: 0: 43879.6. Samples: 419027440. Policy #0 lag: (min: 0.0, avg: 12.9, max: 24.0) +[2024-06-10 20:45:03,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:45:06,638][46990] Updated weights for policy 0, policy_version 25580 (0.0028) +[2024-06-10 20:45:08,239][46753] Fps is (10 sec: 39322.3, 60 sec: 43690.7, 300 sec: 43654.3). Total num frames: 419151872. Throughput: 0: 43851.6. Samples: 419290700. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:45:08,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:45:10,791][46990] Updated weights for policy 0, policy_version 25590 (0.0029) +[2024-06-10 20:45:13,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.8, 300 sec: 43653.7). Total num frames: 419397632. Throughput: 0: 43750.3. Samples: 419545460. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:45:13,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:45:13,858][46990] Updated weights for policy 0, policy_version 25600 (0.0050) +[2024-06-10 20:45:18,219][46990] Updated weights for policy 0, policy_version 25610 (0.0035) +[2024-06-10 20:45:18,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.8, 300 sec: 43764.8). Total num frames: 419594240. Throughput: 0: 43860.1. Samples: 419684500. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:45:18,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:45:21,139][46990] Updated weights for policy 0, policy_version 25620 (0.0037) +[2024-06-10 20:45:23,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 419823616. Throughput: 0: 43773.7. Samples: 419946060. 
Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 20:45:23,240][46753] Avg episode reward: [(0, '0.267')] +[2024-06-10 20:45:25,488][46990] Updated weights for policy 0, policy_version 25630 (0.0050) +[2024-06-10 20:45:28,239][46753] Fps is (10 sec: 45874.4, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 420052992. Throughput: 0: 43976.8. Samples: 420207320. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 20:45:28,240][46753] Avg episode reward: [(0, '0.261')] +[2024-06-10 20:45:28,807][46990] Updated weights for policy 0, policy_version 25640 (0.0027) +[2024-06-10 20:45:32,971][46990] Updated weights for policy 0, policy_version 25650 (0.0033) +[2024-06-10 20:45:33,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 420265984. Throughput: 0: 43959.0. Samples: 420343980. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 20:45:33,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:45:36,102][46990] Updated weights for policy 0, policy_version 25660 (0.0037) +[2024-06-10 20:45:38,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43417.4, 300 sec: 43653.6). Total num frames: 420462592. Throughput: 0: 43848.4. Samples: 420606420. Policy #0 lag: (min: 0.0, avg: 8.5, max: 20.0) +[2024-06-10 20:45:38,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:45:40,617][46990] Updated weights for policy 0, policy_version 25670 (0.0036) +[2024-06-10 20:45:43,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 420724736. Throughput: 0: 43770.8. Samples: 420861040. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 20:45:43,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:45:43,402][46990] Updated weights for policy 0, policy_version 25680 (0.0037) +[2024-06-10 20:45:47,944][46990] Updated weights for policy 0, policy_version 25690 (0.0038) +[2024-06-10 20:45:48,240][46753] Fps is (10 sec: 45875.5, 60 sec: 44236.7, 300 sec: 43764.7). Total num frames: 420921344. Throughput: 0: 43948.4. Samples: 421005120. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 20:45:48,243][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:45:48,416][46970] Signal inference workers to stop experience collection... (6150 times) +[2024-06-10 20:45:48,455][46990] InferenceWorker_p0-w0: stopping experience collection (6150 times) +[2024-06-10 20:45:48,465][46970] Signal inference workers to resume experience collection... (6150 times) +[2024-06-10 20:45:48,475][46990] InferenceWorker_p0-w0: resuming experience collection (6150 times) +[2024-06-10 20:45:51,054][46990] Updated weights for policy 0, policy_version 25700 (0.0035) +[2024-06-10 20:45:53,239][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 421134336. Throughput: 0: 43863.9. Samples: 421264580. Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 20:45:53,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:45:55,227][46990] Updated weights for policy 0, policy_version 25710 (0.0032) +[2024-06-10 20:45:58,239][46753] Fps is (10 sec: 45875.9, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 421380096. Throughput: 0: 44026.7. Samples: 421526660. 
Policy #0 lag: (min: 1.0, avg: 10.7, max: 22.0) +[2024-06-10 20:45:58,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:45:58,381][46990] Updated weights for policy 0, policy_version 25720 (0.0028) +[2024-06-10 20:46:02,866][46990] Updated weights for policy 0, policy_version 25730 (0.0039) +[2024-06-10 20:46:03,240][46753] Fps is (10 sec: 44233.8, 60 sec: 43690.2, 300 sec: 43764.6). Total num frames: 421576704. Throughput: 0: 44050.7. Samples: 421666820. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 20:46:03,241][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:46:05,801][46990] Updated weights for policy 0, policy_version 25740 (0.0038) +[2024-06-10 20:46:08,240][46753] Fps is (10 sec: 39321.0, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 421773312. Throughput: 0: 43799.0. Samples: 421917020. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 20:46:08,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:46:10,498][46990] Updated weights for policy 0, policy_version 25750 (0.0028) +[2024-06-10 20:46:13,185][46990] Updated weights for policy 0, policy_version 25760 (0.0032) +[2024-06-10 20:46:13,240][46753] Fps is (10 sec: 47516.4, 60 sec: 44236.7, 300 sec: 43709.2). Total num frames: 422051840. Throughput: 0: 43611.9. Samples: 422169860. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 20:46:13,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:46:18,172][46990] Updated weights for policy 0, policy_version 25770 (0.0029) +[2024-06-10 20:46:18,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 422215680. Throughput: 0: 43765.4. Samples: 422313420. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:46:18,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:46:20,555][46990] Updated weights for policy 0, policy_version 25780 (0.0027) +[2024-06-10 20:46:23,239][46753] Fps is (10 sec: 37683.9, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 422428672. Throughput: 0: 43598.4. Samples: 422568340. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:46:23,240][46753] Avg episode reward: [(0, '0.268')] +[2024-06-10 20:46:23,350][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000025784_422445056.pth... +[2024-06-10 20:46:23,398][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000025144_411959296.pth +[2024-06-10 20:46:25,423][46990] Updated weights for policy 0, policy_version 25790 (0.0037) +[2024-06-10 20:46:28,230][46990] Updated weights for policy 0, policy_version 25800 (0.0042) +[2024-06-10 20:46:28,240][46753] Fps is (10 sec: 49150.7, 60 sec: 44236.7, 300 sec: 43764.7). Total num frames: 422707200. Throughput: 0: 43876.1. Samples: 422835480. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:46:28,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 20:46:32,882][46990] Updated weights for policy 0, policy_version 25810 (0.0040) +[2024-06-10 20:46:33,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 422887424. Throughput: 0: 43749.1. Samples: 422973820. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 20:46:33,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:46:35,793][46990] Updated weights for policy 0, policy_version 25820 (0.0039) +[2024-06-10 20:46:38,244][46753] Fps is (10 sec: 37667.3, 60 sec: 43687.5, 300 sec: 43708.5). Total num frames: 423084032. 
Throughput: 0: 43760.1. Samples: 423233980. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 20:46:38,245][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:46:40,680][46990] Updated weights for policy 0, policy_version 25830 (0.0038) +[2024-06-10 20:46:42,998][46990] Updated weights for policy 0, policy_version 25840 (0.0046) +[2024-06-10 20:46:43,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 423362560. Throughput: 0: 43625.8. Samples: 423489820. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 20:46:43,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:46:48,018][46990] Updated weights for policy 0, policy_version 25850 (0.0034) +[2024-06-10 20:46:48,239][46753] Fps is (10 sec: 44257.0, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 423526400. Throughput: 0: 43715.0. Samples: 423633960. Policy #0 lag: (min: 0.0, avg: 10.8, max: 23.0) +[2024-06-10 20:46:48,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:46:50,314][46990] Updated weights for policy 0, policy_version 25860 (0.0035) +[2024-06-10 20:46:53,240][46753] Fps is (10 sec: 39320.7, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 423755776. Throughput: 0: 43903.5. Samples: 423892680. Policy #0 lag: (min: 0.0, avg: 13.6, max: 21.0) +[2024-06-10 20:46:53,240][46753] Avg episode reward: [(0, '0.272')] +[2024-06-10 20:46:55,287][46990] Updated weights for policy 0, policy_version 25870 (0.0040) +[2024-06-10 20:46:58,172][46990] Updated weights for policy 0, policy_version 25880 (0.0040) +[2024-06-10 20:46:58,240][46753] Fps is (10 sec: 49151.0, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 424017920. Throughput: 0: 44084.0. Samples: 424153640. Policy #0 lag: (min: 0.0, avg: 13.6, max: 21.0) +[2024-06-10 20:46:58,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:47:02,847][46990] Updated weights for policy 0, policy_version 25890 (0.0042) +[2024-06-10 20:47:03,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43691.2, 300 sec: 43764.7). Total num frames: 424198144. Throughput: 0: 43917.4. Samples: 424289700. Policy #0 lag: (min: 0.0, avg: 13.6, max: 21.0) +[2024-06-10 20:47:03,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:47:05,491][46990] Updated weights for policy 0, policy_version 25900 (0.0026) +[2024-06-10 20:47:06,475][46970] Signal inference workers to stop experience collection... (6200 times) +[2024-06-10 20:47:06,476][46970] Signal inference workers to resume experience collection... (6200 times) +[2024-06-10 20:47:06,494][46990] InferenceWorker_p0-w0: stopping experience collection (6200 times) +[2024-06-10 20:47:06,494][46990] InferenceWorker_p0-w0: resuming experience collection (6200 times) +[2024-06-10 20:47:08,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 424411136. Throughput: 0: 43843.0. Samples: 424541280. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:47:08,243][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:47:10,455][46990] Updated weights for policy 0, policy_version 25910 (0.0043) +[2024-06-10 20:47:12,737][46990] Updated weights for policy 0, policy_version 25920 (0.0023) +[2024-06-10 20:47:13,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 424673280. Throughput: 0: 43697.1. Samples: 424801840. 
Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:47:13,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:47:18,156][46990] Updated weights for policy 0, policy_version 25930 (0.0032) +[2024-06-10 20:47:18,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 424837120. Throughput: 0: 43651.1. Samples: 424938120. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:47:18,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:47:20,456][46990] Updated weights for policy 0, policy_version 25940 (0.0032) +[2024-06-10 20:47:23,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 425066496. Throughput: 0: 43696.4. Samples: 425200120. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 20:47:23,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:47:25,351][46990] Updated weights for policy 0, policy_version 25950 (0.0031) +[2024-06-10 20:47:27,995][46990] Updated weights for policy 0, policy_version 25960 (0.0044) +[2024-06-10 20:47:28,240][46753] Fps is (10 sec: 49151.1, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 425328640. Throughput: 0: 43766.5. Samples: 425459320. Policy #0 lag: (min: 0.0, avg: 7.1, max: 20.0) +[2024-06-10 20:47:28,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:47:32,983][46990] Updated weights for policy 0, policy_version 25970 (0.0037) +[2024-06-10 20:47:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 425508864. Throughput: 0: 43734.2. Samples: 425602000. Policy #0 lag: (min: 0.0, avg: 7.1, max: 20.0) +[2024-06-10 20:47:33,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:47:35,419][46990] Updated weights for policy 0, policy_version 25980 (0.0035) +[2024-06-10 20:47:38,239][46753] Fps is (10 sec: 40960.2, 60 sec: 44240.1, 300 sec: 43820.3). Total num frames: 425738240. Throughput: 0: 43661.4. Samples: 425857440. Policy #0 lag: (min: 0.0, avg: 7.1, max: 20.0) +[2024-06-10 20:47:38,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:47:40,578][46990] Updated weights for policy 0, policy_version 25990 (0.0031) +[2024-06-10 20:47:42,691][46990] Updated weights for policy 0, policy_version 26000 (0.0035) +[2024-06-10 20:47:43,244][46753] Fps is (10 sec: 49129.4, 60 sec: 43960.4, 300 sec: 43819.6). Total num frames: 426000384. Throughput: 0: 43698.4. Samples: 426120260. Policy #0 lag: (min: 1.0, avg: 12.6, max: 24.0) +[2024-06-10 20:47:43,245][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:47:47,898][46990] Updated weights for policy 0, policy_version 26010 (0.0033) +[2024-06-10 20:47:48,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 426147840. Throughput: 0: 43717.3. Samples: 426256980. Policy #0 lag: (min: 1.0, avg: 12.6, max: 24.0) +[2024-06-10 20:47:48,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:47:50,289][46990] Updated weights for policy 0, policy_version 26020 (0.0039) +[2024-06-10 20:47:53,240][46753] Fps is (10 sec: 37699.2, 60 sec: 43690.6, 300 sec: 43709.1). Total num frames: 426377216. Throughput: 0: 43903.8. Samples: 426516960. 
Policy #0 lag: (min: 1.0, avg: 12.6, max: 24.0) +[2024-06-10 20:47:53,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:47:55,506][46990] Updated weights for policy 0, policy_version 26030 (0.0027) +[2024-06-10 20:47:57,993][46990] Updated weights for policy 0, policy_version 26040 (0.0036) +[2024-06-10 20:47:58,240][46753] Fps is (10 sec: 50789.5, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 426655744. Throughput: 0: 43824.8. Samples: 426773960. Policy #0 lag: (min: 1.0, avg: 12.6, max: 24.0) +[2024-06-10 20:47:58,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:48:02,802][46990] Updated weights for policy 0, policy_version 26050 (0.0031) +[2024-06-10 20:48:03,239][46753] Fps is (10 sec: 44238.1, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 426819584. Throughput: 0: 43759.9. Samples: 426907320. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 20:48:03,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:48:05,633][46990] Updated weights for policy 0, policy_version 26060 (0.0024) +[2024-06-10 20:48:08,240][46753] Fps is (10 sec: 39320.2, 60 sec: 43963.4, 300 sec: 43764.7). Total num frames: 427048960. Throughput: 0: 43632.4. Samples: 427163600. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 20:48:08,241][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:48:10,476][46990] Updated weights for policy 0, policy_version 26070 (0.0028) +[2024-06-10 20:48:12,386][46970] Signal inference workers to stop experience collection... (6250 times) +[2024-06-10 20:48:12,430][46990] InferenceWorker_p0-w0: stopping experience collection (6250 times) +[2024-06-10 20:48:12,436][46970] Signal inference workers to resume experience collection... (6250 times) +[2024-06-10 20:48:12,447][46990] InferenceWorker_p0-w0: resuming experience collection (6250 times) +[2024-06-10 20:48:12,919][46990] Updated weights for policy 0, policy_version 26080 (0.0029) +[2024-06-10 20:48:13,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 427311104. Throughput: 0: 43740.5. Samples: 427427640. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 20:48:13,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:48:17,875][46990] Updated weights for policy 0, policy_version 26090 (0.0044) +[2024-06-10 20:48:18,239][46753] Fps is (10 sec: 40961.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 427458560. Throughput: 0: 43593.7. Samples: 427563720. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 20:48:18,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:48:20,171][46990] Updated weights for policy 0, policy_version 26100 (0.0022) +[2024-06-10 20:48:23,240][46753] Fps is (10 sec: 39321.0, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 427704320. Throughput: 0: 43802.1. Samples: 427828540. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 20:48:23,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:48:23,259][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000026105_427704320.pth... +[2024-06-10 20:48:23,312][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000025464_417202176.pth +[2024-06-10 20:48:25,293][46990] Updated weights for policy 0, policy_version 26110 (0.0037) +[2024-06-10 20:48:27,974][46990] Updated weights for policy 0, policy_version 26120 (0.0040) +[2024-06-10 20:48:28,240][46753] Fps is (10 sec: 50789.7, 60 sec: 43963.7, 300 sec: 43931.3). 
Total num frames: 427966464. Throughput: 0: 43696.2. Samples: 428086400. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 20:48:28,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:48:32,629][46990] Updated weights for policy 0, policy_version 26130 (0.0036) +[2024-06-10 20:48:33,240][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 428130304. Throughput: 0: 43642.9. Samples: 428220920. Policy #0 lag: (min: 0.0, avg: 8.7, max: 22.0) +[2024-06-10 20:48:33,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:48:35,673][46990] Updated weights for policy 0, policy_version 26140 (0.0028) +[2024-06-10 20:48:38,240][46753] Fps is (10 sec: 39320.5, 60 sec: 43690.4, 300 sec: 43764.7). Total num frames: 428359680. Throughput: 0: 43502.9. Samples: 428474600. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 20:48:38,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:48:40,316][46990] Updated weights for policy 0, policy_version 26150 (0.0028) +[2024-06-10 20:48:43,016][46990] Updated weights for policy 0, policy_version 26160 (0.0037) +[2024-06-10 20:48:43,240][46753] Fps is (10 sec: 49152.3, 60 sec: 43693.9, 300 sec: 43931.3). Total num frames: 428621824. Throughput: 0: 43644.5. Samples: 428737960. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 20:48:43,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:48:47,679][46990] Updated weights for policy 0, policy_version 26170 (0.0051) +[2024-06-10 20:48:48,239][46753] Fps is (10 sec: 42600.0, 60 sec: 43963.7, 300 sec: 43709.8). Total num frames: 428785664. Throughput: 0: 43673.2. Samples: 428872620. Policy #0 lag: (min: 0.0, avg: 11.6, max: 22.0) +[2024-06-10 20:48:48,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:48:50,540][46990] Updated weights for policy 0, policy_version 26180 (0.0037) +[2024-06-10 20:48:53,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43963.9, 300 sec: 43709.2). Total num frames: 429015040. Throughput: 0: 43659.5. Samples: 429128260. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:48:53,240][46753] Avg episode reward: [(0, '0.264')] +[2024-06-10 20:48:55,203][46990] Updated weights for policy 0, policy_version 26190 (0.0029) +[2024-06-10 20:48:58,061][46990] Updated weights for policy 0, policy_version 26200 (0.0031) +[2024-06-10 20:48:58,240][46753] Fps is (10 sec: 47513.3, 60 sec: 43417.6, 300 sec: 43820.9). Total num frames: 429260800. Throughput: 0: 43506.6. Samples: 429385440. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:48:58,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:49:02,714][46990] Updated weights for policy 0, policy_version 26210 (0.0040) +[2024-06-10 20:49:03,244][46753] Fps is (10 sec: 42579.4, 60 sec: 43687.3, 300 sec: 43764.1). Total num frames: 429441024. Throughput: 0: 43445.4. Samples: 429518960. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:49:03,245][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 20:49:05,867][46990] Updated weights for policy 0, policy_version 26220 (0.0028) +[2024-06-10 20:49:08,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43691.0, 300 sec: 43709.2). Total num frames: 429670400. Throughput: 0: 43256.6. Samples: 429775080. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 20:49:08,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:49:10,531][46990] Updated weights for policy 0, policy_version 26230 (0.0024) +[2024-06-10 20:49:13,239][46753] Fps is (10 sec: 45896.2, 60 sec: 43144.6, 300 sec: 43820.3). Total num frames: 429899776. Throughput: 0: 43504.2. Samples: 430044080. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 20:49:13,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:49:13,463][46990] Updated weights for policy 0, policy_version 26240 (0.0034) +[2024-06-10 20:49:17,884][46990] Updated weights for policy 0, policy_version 26250 (0.0037) +[2024-06-10 20:49:18,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 430080000. Throughput: 0: 43572.1. Samples: 430181660. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 20:49:18,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:49:20,699][46990] Updated weights for policy 0, policy_version 26260 (0.0031) +[2024-06-10 20:49:23,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 430325760. Throughput: 0: 43702.7. Samples: 430441200. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 20:49:23,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:49:25,476][46990] Updated weights for policy 0, policy_version 26270 (0.0025) +[2024-06-10 20:49:28,162][46990] Updated weights for policy 0, policy_version 26280 (0.0038) +[2024-06-10 20:49:28,239][46753] Fps is (10 sec: 49152.2, 60 sec: 43417.7, 300 sec: 43875.8). Total num frames: 430571520. Throughput: 0: 43536.5. Samples: 430697100. Policy #0 lag: (min: 0.0, avg: 12.2, max: 24.0) +[2024-06-10 20:49:28,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:49:29,097][46970] Signal inference workers to stop experience collection... (6300 times) +[2024-06-10 20:49:29,142][46990] InferenceWorker_p0-w0: stopping experience collection (6300 times) +[2024-06-10 20:49:29,150][46970] Signal inference workers to resume experience collection... (6300 times) +[2024-06-10 20:49:29,158][46990] InferenceWorker_p0-w0: resuming experience collection (6300 times) +[2024-06-10 20:49:33,239][46753] Fps is (10 sec: 39321.4, 60 sec: 43144.6, 300 sec: 43598.1). Total num frames: 430718976. Throughput: 0: 43550.2. Samples: 430832380. Policy #0 lag: (min: 0.0, avg: 12.2, max: 24.0) +[2024-06-10 20:49:33,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:49:33,319][46990] Updated weights for policy 0, policy_version 26290 (0.0038) +[2024-06-10 20:49:35,778][46990] Updated weights for policy 0, policy_version 26300 (0.0031) +[2024-06-10 20:49:38,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43690.9, 300 sec: 43653.6). Total num frames: 430981120. Throughput: 0: 43503.5. Samples: 431085920. Policy #0 lag: (min: 0.0, avg: 12.2, max: 24.0) +[2024-06-10 20:49:38,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:49:40,569][46990] Updated weights for policy 0, policy_version 26310 (0.0028) +[2024-06-10 20:49:43,240][46753] Fps is (10 sec: 49151.3, 60 sec: 43144.5, 300 sec: 43875.8). Total num frames: 431210496. Throughput: 0: 43814.6. Samples: 431357100. 
Policy #0 lag: (min: 0.0, avg: 12.2, max: 24.0) +[2024-06-10 20:49:43,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:49:43,385][46990] Updated weights for policy 0, policy_version 26320 (0.0032) +[2024-06-10 20:49:47,935][46990] Updated weights for policy 0, policy_version 26330 (0.0025) +[2024-06-10 20:49:48,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 431390720. Throughput: 0: 43823.5. Samples: 431490820. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 20:49:48,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 20:49:48,240][46970] Saving new best policy, reward=0.291! +[2024-06-10 20:49:50,588][46990] Updated weights for policy 0, policy_version 26340 (0.0031) +[2024-06-10 20:49:53,240][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 431636480. Throughput: 0: 43897.2. Samples: 431750460. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 20:49:53,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:49:55,689][46990] Updated weights for policy 0, policy_version 26350 (0.0029) +[2024-06-10 20:49:58,205][46990] Updated weights for policy 0, policy_version 26360 (0.0041) +[2024-06-10 20:49:58,244][46753] Fps is (10 sec: 49130.0, 60 sec: 43687.5, 300 sec: 43819.6). Total num frames: 431882240. Throughput: 0: 43594.7. Samples: 432006040. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 20:49:58,244][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:50:03,095][46990] Updated weights for policy 0, policy_version 26370 (0.0029) +[2024-06-10 20:50:03,240][46753] Fps is (10 sec: 40960.2, 60 sec: 43420.8, 300 sec: 43709.2). Total num frames: 432046080. Throughput: 0: 43524.4. Samples: 432140260. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 20:50:03,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:50:06,066][46990] Updated weights for policy 0, policy_version 26380 (0.0029) +[2024-06-10 20:50:08,239][46753] Fps is (10 sec: 40978.2, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 432291840. Throughput: 0: 43386.2. Samples: 432393580. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 20:50:08,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:50:10,864][46990] Updated weights for policy 0, policy_version 26390 (0.0055) +[2024-06-10 20:50:13,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 432521216. Throughput: 0: 43661.2. Samples: 432661860. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 20:50:13,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:50:13,492][46990] Updated weights for policy 0, policy_version 26400 (0.0022) +[2024-06-10 20:50:18,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 432685056. Throughput: 0: 43638.7. Samples: 432796120. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 20:50:18,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:50:18,267][46990] Updated weights for policy 0, policy_version 26410 (0.0029) +[2024-06-10 20:50:20,747][46990] Updated weights for policy 0, policy_version 26420 (0.0040) +[2024-06-10 20:50:23,242][46753] Fps is (10 sec: 44225.1, 60 sec: 43961.7, 300 sec: 43764.3). Total num frames: 432963584. Throughput: 0: 43850.8. Samples: 433059320. 
Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:50:23,243][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:50:23,269][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000026426_432963584.pth... +[2024-06-10 20:50:23,322][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000025784_422445056.pth +[2024-06-10 20:50:25,919][46990] Updated weights for policy 0, policy_version 26430 (0.0027) +[2024-06-10 20:50:28,205][46990] Updated weights for policy 0, policy_version 26440 (0.0046) +[2024-06-10 20:50:28,239][46753] Fps is (10 sec: 50790.3, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 433192960. Throughput: 0: 43649.5. Samples: 433321320. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:50:28,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:50:33,240][46753] Fps is (10 sec: 37692.5, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 433340416. Throughput: 0: 43619.3. Samples: 433453700. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:50:33,241][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:50:33,267][46990] Updated weights for policy 0, policy_version 26450 (0.0036) +[2024-06-10 20:50:35,717][46990] Updated weights for policy 0, policy_version 26460 (0.0036) +[2024-06-10 20:50:38,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 433602560. Throughput: 0: 43678.8. Samples: 433716000. Policy #0 lag: (min: 0.0, avg: 12.0, max: 22.0) +[2024-06-10 20:50:38,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:50:40,676][46990] Updated weights for policy 0, policy_version 26470 (0.0040) +[2024-06-10 20:50:43,135][46990] Updated weights for policy 0, policy_version 26480 (0.0034) +[2024-06-10 20:50:43,239][46753] Fps is (10 sec: 50791.7, 60 sec: 43963.8, 300 sec: 43820.3). Total num frames: 433848320. Throughput: 0: 43836.3. Samples: 433978480. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:50:43,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:50:48,210][46990] Updated weights for policy 0, policy_version 26490 (0.0034) +[2024-06-10 20:50:48,211][46970] Signal inference workers to stop experience collection... (6350 times) +[2024-06-10 20:50:48,212][46970] Signal inference workers to resume experience collection... (6350 times) +[2024-06-10 20:50:48,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 434012160. Throughput: 0: 43884.1. Samples: 434115040. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:50:48,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:50:48,250][46990] InferenceWorker_p0-w0: stopping experience collection (6350 times) +[2024-06-10 20:50:48,250][46990] InferenceWorker_p0-w0: resuming experience collection (6350 times) +[2024-06-10 20:50:51,027][46990] Updated weights for policy 0, policy_version 26500 (0.0030) +[2024-06-10 20:50:53,244][46753] Fps is (10 sec: 40943.3, 60 sec: 43687.8, 300 sec: 43653.0). Total num frames: 434257920. Throughput: 0: 43964.0. Samples: 434372140. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 20:50:53,244][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:50:55,749][46990] Updated weights for policy 0, policy_version 26510 (0.0041) +[2024-06-10 20:50:58,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43420.9, 300 sec: 43764.8). Total num frames: 434487296. Throughput: 0: 43801.0. Samples: 434632900. 
Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 20:50:58,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:50:58,294][46990] Updated weights for policy 0, policy_version 26520 (0.0034) +[2024-06-10 20:51:03,239][46753] Fps is (10 sec: 39337.9, 60 sec: 43417.7, 300 sec: 43653.7). Total num frames: 434651136. Throughput: 0: 43836.9. Samples: 434768780. Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 20:51:03,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 20:51:03,465][46990] Updated weights for policy 0, policy_version 26530 (0.0037) +[2024-06-10 20:51:05,720][46990] Updated weights for policy 0, policy_version 26540 (0.0027) +[2024-06-10 20:51:08,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 434913280. Throughput: 0: 43611.1. Samples: 435021700. Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 20:51:08,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:51:10,639][46990] Updated weights for policy 0, policy_version 26550 (0.0037) +[2024-06-10 20:51:13,239][46753] Fps is (10 sec: 49151.5, 60 sec: 43690.7, 300 sec: 43820.2). Total num frames: 435142656. Throughput: 0: 43834.6. Samples: 435293880. Policy #0 lag: (min: 0.0, avg: 12.4, max: 24.0) +[2024-06-10 20:51:13,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:51:13,310][46990] Updated weights for policy 0, policy_version 26560 (0.0042) +[2024-06-10 20:51:18,024][46990] Updated weights for policy 0, policy_version 26570 (0.0026) +[2024-06-10 20:51:18,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 435322880. Throughput: 0: 43853.7. Samples: 435427100. Policy #0 lag: (min: 0.0, avg: 12.4, max: 21.0) +[2024-06-10 20:51:18,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:51:20,582][46990] Updated weights for policy 0, policy_version 26580 (0.0032) +[2024-06-10 20:51:23,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43419.5, 300 sec: 43598.1). Total num frames: 435568640. Throughput: 0: 43779.0. Samples: 435686060. Policy #0 lag: (min: 0.0, avg: 12.4, max: 21.0) +[2024-06-10 20:51:23,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:51:25,567][46990] Updated weights for policy 0, policy_version 26590 (0.0025) +[2024-06-10 20:51:28,131][46990] Updated weights for policy 0, policy_version 26600 (0.0032) +[2024-06-10 20:51:28,239][46753] Fps is (10 sec: 49151.4, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 435814400. Throughput: 0: 43639.6. Samples: 435942260. Policy #0 lag: (min: 0.0, avg: 12.4, max: 21.0) +[2024-06-10 20:51:28,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:51:33,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43690.8, 300 sec: 43654.3). Total num frames: 435961856. Throughput: 0: 43605.7. Samples: 436077300. Policy #0 lag: (min: 0.0, avg: 7.9, max: 21.0) +[2024-06-10 20:51:33,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:51:33,272][46990] Updated weights for policy 0, policy_version 26610 (0.0048) +[2024-06-10 20:51:35,743][46990] Updated weights for policy 0, policy_version 26620 (0.0032) +[2024-06-10 20:51:38,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 436224000. Throughput: 0: 43464.9. Samples: 436327880. 
Policy #0 lag: (min: 0.0, avg: 7.9, max: 21.0) +[2024-06-10 20:51:38,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:51:40,619][46990] Updated weights for policy 0, policy_version 26630 (0.0041) +[2024-06-10 20:51:43,220][46990] Updated weights for policy 0, policy_version 26640 (0.0035) +[2024-06-10 20:51:43,239][46753] Fps is (10 sec: 50790.7, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 436469760. Throughput: 0: 43790.7. Samples: 436603480. Policy #0 lag: (min: 0.0, avg: 7.9, max: 21.0) +[2024-06-10 20:51:43,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:51:47,909][46990] Updated weights for policy 0, policy_version 26650 (0.0033) +[2024-06-10 20:51:48,241][46753] Fps is (10 sec: 40951.5, 60 sec: 43689.2, 300 sec: 43653.4). Total num frames: 436633600. Throughput: 0: 43765.1. Samples: 436738300. Policy #0 lag: (min: 0.0, avg: 7.9, max: 21.0) +[2024-06-10 20:51:48,242][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:51:50,630][46990] Updated weights for policy 0, policy_version 26660 (0.0043) +[2024-06-10 20:51:53,240][46753] Fps is (10 sec: 40959.1, 60 sec: 43693.5, 300 sec: 43598.1). Total num frames: 436879360. Throughput: 0: 43834.5. Samples: 436994260. Policy #0 lag: (min: 1.0, avg: 12.1, max: 24.0) +[2024-06-10 20:51:53,244][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:51:55,473][46990] Updated weights for policy 0, policy_version 26670 (0.0034) +[2024-06-10 20:51:58,122][46990] Updated weights for policy 0, policy_version 26680 (0.0022) +[2024-06-10 20:51:58,239][46753] Fps is (10 sec: 49161.8, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 437125120. Throughput: 0: 43464.5. Samples: 437249780. Policy #0 lag: (min: 1.0, avg: 12.1, max: 24.0) +[2024-06-10 20:51:58,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:52:03,118][46990] Updated weights for policy 0, policy_version 26690 (0.0046) +[2024-06-10 20:52:03,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43963.7, 300 sec: 43653.6). Total num frames: 437288960. Throughput: 0: 43614.5. Samples: 437389760. Policy #0 lag: (min: 1.0, avg: 12.1, max: 24.0) +[2024-06-10 20:52:03,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:52:06,024][46990] Updated weights for policy 0, policy_version 26700 (0.0029) +[2024-06-10 20:52:08,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 437534720. Throughput: 0: 43495.2. Samples: 437643340. Policy #0 lag: (min: 1.0, avg: 12.1, max: 24.0) +[2024-06-10 20:52:08,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:52:10,036][46970] Signal inference workers to stop experience collection... (6400 times) +[2024-06-10 20:52:10,036][46970] Signal inference workers to resume experience collection... (6400 times) +[2024-06-10 20:52:10,064][46990] InferenceWorker_p0-w0: stopping experience collection (6400 times) +[2024-06-10 20:52:10,064][46990] InferenceWorker_p0-w0: resuming experience collection (6400 times) +[2024-06-10 20:52:10,166][46990] Updated weights for policy 0, policy_version 26710 (0.0039) +[2024-06-10 20:52:13,235][46990] Updated weights for policy 0, policy_version 26720 (0.0028) +[2024-06-10 20:52:13,239][46753] Fps is (10 sec: 49152.4, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 437780480. Throughput: 0: 43904.9. Samples: 437917980. 
Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:52:13,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:52:17,691][46990] Updated weights for policy 0, policy_version 26730 (0.0035) +[2024-06-10 20:52:18,240][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 437944320. Throughput: 0: 43922.1. Samples: 438053800. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:52:18,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:52:20,593][46990] Updated weights for policy 0, policy_version 26740 (0.0030) +[2024-06-10 20:52:23,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.9, 300 sec: 43653.7). Total num frames: 438206464. Throughput: 0: 44168.5. Samples: 438315460. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:52:23,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:52:23,271][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000026746_438206464.pth... +[2024-06-10 20:52:23,317][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000026105_427704320.pth +[2024-06-10 20:52:25,089][46990] Updated weights for policy 0, policy_version 26750 (0.0030) +[2024-06-10 20:52:28,240][46753] Fps is (10 sec: 47513.2, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 438419456. Throughput: 0: 43762.9. Samples: 438572820. Policy #0 lag: (min: 1.0, avg: 8.8, max: 22.0) +[2024-06-10 20:52:28,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:52:28,430][46990] Updated weights for policy 0, policy_version 26760 (0.0035) +[2024-06-10 20:52:32,544][46990] Updated weights for policy 0, policy_version 26770 (0.0044) +[2024-06-10 20:52:33,239][46753] Fps is (10 sec: 40959.4, 60 sec: 44236.7, 300 sec: 43653.6). Total num frames: 438616064. Throughput: 0: 43794.8. Samples: 438708980. Policy #0 lag: (min: 1.0, avg: 8.8, max: 22.0) +[2024-06-10 20:52:33,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:52:35,971][46990] Updated weights for policy 0, policy_version 26780 (0.0033) +[2024-06-10 20:52:38,240][46753] Fps is (10 sec: 44236.9, 60 sec: 43963.6, 300 sec: 43598.7). Total num frames: 438861824. Throughput: 0: 43645.8. Samples: 438958320. Policy #0 lag: (min: 1.0, avg: 8.8, max: 22.0) +[2024-06-10 20:52:38,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:52:40,169][46990] Updated weights for policy 0, policy_version 26790 (0.0042) +[2024-06-10 20:52:43,239][46753] Fps is (10 sec: 45875.9, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 439074816. Throughput: 0: 44033.9. Samples: 439231300. Policy #0 lag: (min: 1.0, avg: 8.8, max: 22.0) +[2024-06-10 20:52:43,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:52:43,301][46990] Updated weights for policy 0, policy_version 26800 (0.0022) +[2024-06-10 20:52:47,441][46990] Updated weights for policy 0, policy_version 26810 (0.0041) +[2024-06-10 20:52:48,240][46753] Fps is (10 sec: 39321.7, 60 sec: 43692.1, 300 sec: 43653.7). Total num frames: 439255040. Throughput: 0: 43864.4. Samples: 439363660. Policy #0 lag: (min: 1.0, avg: 13.9, max: 24.0) +[2024-06-10 20:52:48,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:52:50,919][46990] Updated weights for policy 0, policy_version 26820 (0.0041) +[2024-06-10 20:52:53,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 439517184. Throughput: 0: 44001.7. Samples: 439623420. 
Policy #0 lag: (min: 1.0, avg: 13.9, max: 24.0) +[2024-06-10 20:52:53,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:52:55,095][46990] Updated weights for policy 0, policy_version 26830 (0.0034) +[2024-06-10 20:52:58,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 439730176. Throughput: 0: 43764.3. Samples: 439887380. Policy #0 lag: (min: 1.0, avg: 13.9, max: 24.0) +[2024-06-10 20:52:58,241][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 20:52:58,727][46990] Updated weights for policy 0, policy_version 26840 (0.0025) +[2024-06-10 20:53:02,350][46990] Updated weights for policy 0, policy_version 26850 (0.0045) +[2024-06-10 20:53:03,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43690.7, 300 sec: 43598.2). Total num frames: 439910400. Throughput: 0: 43637.5. Samples: 440017480. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 20:53:03,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:53:05,890][46990] Updated weights for policy 0, policy_version 26860 (0.0036) +[2024-06-10 20:53:08,240][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 440172544. Throughput: 0: 43591.8. Samples: 440277100. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 20:53:08,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:53:08,511][46970] Signal inference workers to stop experience collection... (6450 times) +[2024-06-10 20:53:08,557][46990] InferenceWorker_p0-w0: stopping experience collection (6450 times) +[2024-06-10 20:53:08,567][46970] Signal inference workers to resume experience collection... (6450 times) +[2024-06-10 20:53:08,582][46990] InferenceWorker_p0-w0: resuming experience collection (6450 times) +[2024-06-10 20:53:09,831][46990] Updated weights for policy 0, policy_version 26870 (0.0037) +[2024-06-10 20:53:13,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 440385536. Throughput: 0: 43978.8. Samples: 440551860. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 20:53:13,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:53:13,306][46990] Updated weights for policy 0, policy_version 26880 (0.0030) +[2024-06-10 20:53:17,351][46990] Updated weights for policy 0, policy_version 26890 (0.0031) +[2024-06-10 20:53:18,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.8, 300 sec: 43653.7). Total num frames: 440582144. Throughput: 0: 43694.3. Samples: 440675220. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0) +[2024-06-10 20:53:18,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:53:20,891][46990] Updated weights for policy 0, policy_version 26900 (0.0029) +[2024-06-10 20:53:23,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 440827904. Throughput: 0: 43975.2. Samples: 440937200. Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 20:53:23,240][46753] Avg episode reward: [(0, '0.265')] +[2024-06-10 20:53:24,687][46990] Updated weights for policy 0, policy_version 26910 (0.0029) +[2024-06-10 20:53:28,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 441040896. Throughput: 0: 43796.4. Samples: 441202140. 
Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 20:53:28,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:53:28,489][46990] Updated weights for policy 0, policy_version 26920 (0.0033) +[2024-06-10 20:53:32,070][46990] Updated weights for policy 0, policy_version 26930 (0.0048) +[2024-06-10 20:53:33,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 441253888. Throughput: 0: 43674.8. Samples: 441329020. Policy #0 lag: (min: 0.0, avg: 9.3, max: 22.0) +[2024-06-10 20:53:33,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:53:35,942][46990] Updated weights for policy 0, policy_version 26940 (0.0031) +[2024-06-10 20:53:38,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.9, 300 sec: 43653.7). Total num frames: 441499648. Throughput: 0: 43814.7. Samples: 441595080. Policy #0 lag: (min: 0.0, avg: 12.3, max: 21.0) +[2024-06-10 20:53:38,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 20:53:39,534][46990] Updated weights for policy 0, policy_version 26950 (0.0034) +[2024-06-10 20:53:43,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 441696256. Throughput: 0: 43969.9. Samples: 441866020. Policy #0 lag: (min: 0.0, avg: 12.3, max: 21.0) +[2024-06-10 20:53:43,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:53:43,446][46990] Updated weights for policy 0, policy_version 26960 (0.0038) +[2024-06-10 20:53:47,120][46990] Updated weights for policy 0, policy_version 26970 (0.0039) +[2024-06-10 20:53:48,240][46753] Fps is (10 sec: 42597.8, 60 sec: 44509.9, 300 sec: 43764.7). Total num frames: 441925632. Throughput: 0: 43819.4. Samples: 441989360. Policy #0 lag: (min: 0.0, avg: 12.3, max: 21.0) +[2024-06-10 20:53:48,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:53:51,123][46990] Updated weights for policy 0, policy_version 26980 (0.0035) +[2024-06-10 20:53:53,244][46753] Fps is (10 sec: 45854.4, 60 sec: 43960.5, 300 sec: 43708.5). Total num frames: 442155008. Throughput: 0: 43975.3. Samples: 442256180. Policy #0 lag: (min: 0.0, avg: 12.3, max: 21.0) +[2024-06-10 20:53:53,244][46753] Avg episode reward: [(0, '0.269')] +[2024-06-10 20:53:54,343][46990] Updated weights for policy 0, policy_version 26990 (0.0030) +[2024-06-10 20:53:58,240][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43765.4). Total num frames: 442351616. Throughput: 0: 43627.9. Samples: 442515120. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 20:53:58,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 20:53:58,409][46990] Updated weights for policy 0, policy_version 27000 (0.0032) +[2024-06-10 20:54:01,878][46990] Updated weights for policy 0, policy_version 27010 (0.0027) +[2024-06-10 20:54:03,244][46753] Fps is (10 sec: 42598.4, 60 sec: 44506.5, 300 sec: 43764.1). Total num frames: 442580992. Throughput: 0: 43718.8. Samples: 442642760. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 20:54:03,244][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:54:06,187][46990] Updated weights for policy 0, policy_version 27020 (0.0033) +[2024-06-10 20:54:08,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43963.9, 300 sec: 43764.7). Total num frames: 442810368. Throughput: 0: 43707.2. Samples: 442904020. 
Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 20:54:08,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:54:09,243][46990] Updated weights for policy 0, policy_version 27030 (0.0036) +[2024-06-10 20:54:13,240][46753] Fps is (10 sec: 40978.0, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 442990592. Throughput: 0: 43908.8. Samples: 443178040. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 20:54:13,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:54:13,490][46990] Updated weights for policy 0, policy_version 27040 (0.0033) +[2024-06-10 20:54:16,923][46990] Updated weights for policy 0, policy_version 27050 (0.0033) +[2024-06-10 20:54:18,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43963.9, 300 sec: 43709.2). Total num frames: 443219968. Throughput: 0: 43723.7. Samples: 443296580. Policy #0 lag: (min: 0.0, avg: 9.5, max: 24.0) +[2024-06-10 20:54:18,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:54:21,165][46990] Updated weights for policy 0, policy_version 27060 (0.0024) +[2024-06-10 20:54:23,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 443465728. Throughput: 0: 43747.5. Samples: 443563720. Policy #0 lag: (min: 0.0, avg: 9.5, max: 24.0) +[2024-06-10 20:54:23,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:54:23,250][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000027067_443465728.pth... +[2024-06-10 20:54:23,298][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000026426_432963584.pth +[2024-06-10 20:54:24,019][46990] Updated weights for policy 0, policy_version 27070 (0.0031) +[2024-06-10 20:54:28,145][46970] Signal inference workers to stop experience collection... (6500 times) +[2024-06-10 20:54:28,145][46970] Signal inference workers to resume experience collection... (6500 times) +[2024-06-10 20:54:28,169][46990] InferenceWorker_p0-w0: stopping experience collection (6500 times) +[2024-06-10 20:54:28,169][46990] InferenceWorker_p0-w0: resuming experience collection (6500 times) +[2024-06-10 20:54:28,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 443662336. Throughput: 0: 43664.5. Samples: 443830920. Policy #0 lag: (min: 0.0, avg: 9.5, max: 24.0) +[2024-06-10 20:54:28,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:54:28,275][46990] Updated weights for policy 0, policy_version 27080 (0.0028) +[2024-06-10 20:54:31,677][46990] Updated weights for policy 0, policy_version 27090 (0.0028) +[2024-06-10 20:54:33,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43963.8, 300 sec: 43764.7). Total num frames: 443891712. Throughput: 0: 43600.6. Samples: 443951380. Policy #0 lag: (min: 0.0, avg: 11.0, max: 20.0) +[2024-06-10 20:54:33,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:54:36,043][46990] Updated weights for policy 0, policy_version 27100 (0.0029) +[2024-06-10 20:54:38,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43690.7, 300 sec: 43764.8). Total num frames: 444121088. Throughput: 0: 43716.4. Samples: 444223220. Policy #0 lag: (min: 0.0, avg: 11.0, max: 20.0) +[2024-06-10 20:54:38,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 20:54:39,092][46990] Updated weights for policy 0, policy_version 27110 (0.0049) +[2024-06-10 20:54:43,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 444317696. Throughput: 0: 43945.5. Samples: 444492660. 
Policy #0 lag: (min: 0.0, avg: 11.0, max: 20.0) +[2024-06-10 20:54:43,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:54:43,309][46990] Updated weights for policy 0, policy_version 27120 (0.0039) +[2024-06-10 20:54:46,503][46990] Updated weights for policy 0, policy_version 27130 (0.0033) +[2024-06-10 20:54:48,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 444547072. Throughput: 0: 43857.7. Samples: 444616160. Policy #0 lag: (min: 0.0, avg: 11.0, max: 20.0) +[2024-06-10 20:54:48,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 20:54:50,681][46990] Updated weights for policy 0, policy_version 27140 (0.0037) +[2024-06-10 20:54:53,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43967.0, 300 sec: 43765.4). Total num frames: 444792832. Throughput: 0: 43962.1. Samples: 444882320. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 20:54:53,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:54:54,012][46990] Updated weights for policy 0, policy_version 27150 (0.0055) +[2024-06-10 20:54:57,985][46990] Updated weights for policy 0, policy_version 27160 (0.0037) +[2024-06-10 20:54:58,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43963.9, 300 sec: 43875.8). Total num frames: 444989440. Throughput: 0: 43734.8. Samples: 445146100. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 20:54:58,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:55:01,461][46990] Updated weights for policy 0, policy_version 27170 (0.0030) +[2024-06-10 20:55:03,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43693.9, 300 sec: 43764.7). Total num frames: 445202432. Throughput: 0: 43963.9. Samples: 445274960. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 20:55:03,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 20:55:05,810][46990] Updated weights for policy 0, policy_version 27180 (0.0039) +[2024-06-10 20:55:08,244][46753] Fps is (10 sec: 45854.5, 60 sec: 43960.4, 300 sec: 43819.6). Total num frames: 445448192. Throughput: 0: 43876.1. Samples: 445538340. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 20:55:08,245][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:55:09,018][46990] Updated weights for policy 0, policy_version 27190 (0.0035) +[2024-06-10 20:55:12,978][46990] Updated weights for policy 0, policy_version 27200 (0.0032) +[2024-06-10 20:55:13,240][46753] Fps is (10 sec: 44236.4, 60 sec: 44236.8, 300 sec: 43931.3). Total num frames: 445644800. Throughput: 0: 43990.5. Samples: 445810500. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 20:55:13,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:55:16,394][46990] Updated weights for policy 0, policy_version 27210 (0.0031) +[2024-06-10 20:55:18,239][46753] Fps is (10 sec: 40978.2, 60 sec: 43963.6, 300 sec: 43709.6). Total num frames: 445857792. Throughput: 0: 44158.6. Samples: 445938520. Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 20:55:18,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:55:20,347][46990] Updated weights for policy 0, policy_version 27220 (0.0037) +[2024-06-10 20:55:23,240][46753] Fps is (10 sec: 45875.3, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 446103552. Throughput: 0: 44003.4. Samples: 446203380. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 21.0) +[2024-06-10 20:55:23,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:55:23,554][46990] Updated weights for policy 0, policy_version 27230 (0.0041) +[2024-06-10 20:55:27,861][46990] Updated weights for policy 0, policy_version 27240 (0.0043) +[2024-06-10 20:55:28,239][46753] Fps is (10 sec: 45875.7, 60 sec: 44236.8, 300 sec: 43986.9). Total num frames: 446316544. Throughput: 0: 43866.7. Samples: 446466660. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:55:28,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:55:31,169][46990] Updated weights for policy 0, policy_version 27250 (0.0030) +[2024-06-10 20:55:33,244][46753] Fps is (10 sec: 40941.7, 60 sec: 43687.3, 300 sec: 43764.0). Total num frames: 446513152. Throughput: 0: 44023.5. Samples: 446597420. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:55:33,244][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:55:35,545][46990] Updated weights for policy 0, policy_version 27260 (0.0037) +[2024-06-10 20:55:38,240][46753] Fps is (10 sec: 44236.2, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 446758912. Throughput: 0: 43976.9. Samples: 446861280. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:55:38,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:55:38,827][46990] Updated weights for policy 0, policy_version 27270 (0.0036) +[2024-06-10 20:55:42,862][46990] Updated weights for policy 0, policy_version 27280 (0.0037) +[2024-06-10 20:55:43,239][46753] Fps is (10 sec: 44256.6, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 446955520. Throughput: 0: 43947.9. Samples: 447123760. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 20:55:43,242][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:55:46,022][46970] Signal inference workers to stop experience collection... (6550 times) +[2024-06-10 20:55:46,022][46970] Signal inference workers to resume experience collection... (6550 times) +[2024-06-10 20:55:46,034][46990] InferenceWorker_p0-w0: stopping experience collection (6550 times) +[2024-06-10 20:55:46,034][46990] InferenceWorker_p0-w0: resuming experience collection (6550 times) +[2024-06-10 20:55:46,166][46990] Updated weights for policy 0, policy_version 27290 (0.0032) +[2024-06-10 20:55:48,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43690.7, 300 sec: 43765.3). Total num frames: 447168512. Throughput: 0: 44038.7. Samples: 447256700. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 20:55:48,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:55:50,243][46990] Updated weights for policy 0, policy_version 27300 (0.0033) +[2024-06-10 20:55:53,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 447430656. Throughput: 0: 44017.7. Samples: 447518940. Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 20:55:53,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:55:53,824][46990] Updated weights for policy 0, policy_version 27310 (0.0043) +[2024-06-10 20:55:57,984][46990] Updated weights for policy 0, policy_version 27320 (0.0050) +[2024-06-10 20:55:58,244][46753] Fps is (10 sec: 44217.0, 60 sec: 43687.4, 300 sec: 43930.7). Total num frames: 447610880. Throughput: 0: 43780.6. Samples: 447780820. 
Policy #0 lag: (min: 0.0, avg: 11.4, max: 22.0) +[2024-06-10 20:55:58,245][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:56:01,096][46990] Updated weights for policy 0, policy_version 27330 (0.0030) +[2024-06-10 20:56:03,244][46753] Fps is (10 sec: 39304.3, 60 sec: 43687.4, 300 sec: 43764.1). Total num frames: 447823872. Throughput: 0: 43778.8. Samples: 447908760. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 20:56:03,244][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:56:05,490][46990] Updated weights for policy 0, policy_version 27340 (0.0029) +[2024-06-10 20:56:08,239][46753] Fps is (10 sec: 45896.0, 60 sec: 43694.0, 300 sec: 43820.3). Total num frames: 448069632. Throughput: 0: 43800.6. Samples: 448174400. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 20:56:08,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:56:08,904][46990] Updated weights for policy 0, policy_version 27350 (0.0035) +[2024-06-10 20:56:12,860][46990] Updated weights for policy 0, policy_version 27360 (0.0043) +[2024-06-10 20:56:13,240][46753] Fps is (10 sec: 45895.1, 60 sec: 43963.7, 300 sec: 43931.3). Total num frames: 448282624. Throughput: 0: 43716.7. Samples: 448433920. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 20:56:13,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:56:16,266][46990] Updated weights for policy 0, policy_version 27370 (0.0037) +[2024-06-10 20:56:18,243][46753] Fps is (10 sec: 40947.0, 60 sec: 43688.4, 300 sec: 43764.3). Total num frames: 448479232. Throughput: 0: 43732.1. Samples: 448565300. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0) +[2024-06-10 20:56:18,243][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:56:20,465][46990] Updated weights for policy 0, policy_version 27380 (0.0040) +[2024-06-10 20:56:23,240][46753] Fps is (10 sec: 44232.6, 60 sec: 43689.9, 300 sec: 43764.6). Total num frames: 448724992. Throughput: 0: 43699.9. Samples: 448827820. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 20:56:23,241][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 20:56:23,262][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000027388_448724992.pth... +[2024-06-10 20:56:23,325][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000026746_438206464.pth +[2024-06-10 20:56:23,761][46990] Updated weights for policy 0, policy_version 27390 (0.0052) +[2024-06-10 20:56:28,162][46990] Updated weights for policy 0, policy_version 27400 (0.0046) +[2024-06-10 20:56:28,240][46753] Fps is (10 sec: 44250.0, 60 sec: 43417.5, 300 sec: 43931.3). Total num frames: 448921600. Throughput: 0: 43778.2. Samples: 449093780. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 20:56:28,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:56:31,162][46990] Updated weights for policy 0, policy_version 27410 (0.0036) +[2024-06-10 20:56:33,239][46753] Fps is (10 sec: 40964.6, 60 sec: 43694.0, 300 sec: 43764.7). Total num frames: 449134592. Throughput: 0: 43512.9. Samples: 449214780. Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 20:56:33,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:56:35,706][46990] Updated weights for policy 0, policy_version 27420 (0.0028) +[2024-06-10 20:56:38,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 449380352. Throughput: 0: 43729.4. Samples: 449486760. 
Policy #0 lag: (min: 0.0, avg: 9.6, max: 23.0) +[2024-06-10 20:56:38,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:56:38,474][46990] Updated weights for policy 0, policy_version 27430 (0.0034) +[2024-06-10 20:56:42,877][46990] Updated weights for policy 0, policy_version 27440 (0.0042) +[2024-06-10 20:56:43,240][46753] Fps is (10 sec: 45874.5, 60 sec: 43963.7, 300 sec: 43931.6). Total num frames: 449593344. Throughput: 0: 43858.5. Samples: 449754260. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:56:43,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 20:56:46,117][46990] Updated weights for policy 0, policy_version 27450 (0.0034) +[2024-06-10 20:56:48,244][46753] Fps is (10 sec: 40941.8, 60 sec: 43687.4, 300 sec: 43764.1). Total num frames: 449789952. Throughput: 0: 43849.3. Samples: 449881980. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:56:48,245][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 20:56:50,487][46990] Updated weights for policy 0, policy_version 27460 (0.0044) +[2024-06-10 20:56:53,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 450035712. Throughput: 0: 43796.4. Samples: 450145240. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 20:56:53,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 20:56:53,462][46990] Updated weights for policy 0, policy_version 27470 (0.0036) +[2024-06-10 20:56:58,171][46990] Updated weights for policy 0, policy_version 27480 (0.0041) +[2024-06-10 20:56:58,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43693.9, 300 sec: 43875.8). Total num frames: 450232320. Throughput: 0: 44097.9. Samples: 450418320. Policy #0 lag: (min: 0.0, avg: 9.0, max: 23.0) +[2024-06-10 20:56:58,248][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:57:01,044][46990] Updated weights for policy 0, policy_version 27490 (0.0028) +[2024-06-10 20:57:03,244][46753] Fps is (10 sec: 42578.9, 60 sec: 43963.7, 300 sec: 43819.6). Total num frames: 450461696. Throughput: 0: 43859.0. Samples: 450539020. Policy #0 lag: (min: 0.0, avg: 9.0, max: 23.0) +[2024-06-10 20:57:03,253][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:57:05,526][46990] Updated weights for policy 0, policy_version 27500 (0.0034) +[2024-06-10 20:57:05,533][46970] Signal inference workers to stop experience collection... (6600 times) +[2024-06-10 20:57:05,533][46970] Signal inference workers to resume experience collection... (6600 times) +[2024-06-10 20:57:05,563][46990] InferenceWorker_p0-w0: stopping experience collection (6600 times) +[2024-06-10 20:57:05,563][46990] InferenceWorker_p0-w0: resuming experience collection (6600 times) +[2024-06-10 20:57:08,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43963.7, 300 sec: 43820.2). Total num frames: 450707456. Throughput: 0: 44006.8. Samples: 450808080. Policy #0 lag: (min: 0.0, avg: 9.0, max: 23.0) +[2024-06-10 20:57:08,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:57:08,287][46990] Updated weights for policy 0, policy_version 27510 (0.0032) +[2024-06-10 20:57:12,801][46990] Updated weights for policy 0, policy_version 27520 (0.0025) +[2024-06-10 20:57:13,239][46753] Fps is (10 sec: 44256.8, 60 sec: 43690.7, 300 sec: 43931.3). Total num frames: 450904064. Throughput: 0: 43942.3. Samples: 451071180. 
Policy #0 lag: (min: 0.0, avg: 9.0, max: 23.0) +[2024-06-10 20:57:13,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:57:15,964][46990] Updated weights for policy 0, policy_version 27530 (0.0043) +[2024-06-10 20:57:18,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43692.9, 300 sec: 43709.2). Total num frames: 451100672. Throughput: 0: 44119.5. Samples: 451200160. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:57:18,240][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 20:57:20,425][46990] Updated weights for policy 0, policy_version 27540 (0.0048) +[2024-06-10 20:57:23,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43964.5, 300 sec: 43875.8). Total num frames: 451362816. Throughput: 0: 43714.7. Samples: 451453920. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:57:23,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 20:57:23,651][46990] Updated weights for policy 0, policy_version 27550 (0.0033) +[2024-06-10 20:57:28,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43417.7, 300 sec: 43764.7). Total num frames: 451526656. Throughput: 0: 43760.6. Samples: 451723480. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:57:28,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 20:57:28,306][46990] Updated weights for policy 0, policy_version 27560 (0.0033) +[2024-06-10 20:57:31,037][46990] Updated weights for policy 0, policy_version 27570 (0.0028) +[2024-06-10 20:57:33,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 451772416. Throughput: 0: 43645.2. Samples: 451845820. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 20:57:33,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 20:57:35,725][46990] Updated weights for policy 0, policy_version 27580 (0.0033) +[2024-06-10 20:57:38,239][46753] Fps is (10 sec: 49151.6, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 452018176. Throughput: 0: 43724.8. Samples: 452112860. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:57:38,241][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:57:38,386][46990] Updated weights for policy 0, policy_version 27590 (0.0036) +[2024-06-10 20:57:42,866][46990] Updated weights for policy 0, policy_version 27600 (0.0026) +[2024-06-10 20:57:43,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.8, 300 sec: 43931.4). Total num frames: 452214784. Throughput: 0: 43603.6. Samples: 452380480. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:57:43,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:57:46,118][46990] Updated weights for policy 0, policy_version 27610 (0.0034) +[2024-06-10 20:57:48,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43967.1, 300 sec: 43764.7). Total num frames: 452427776. Throughput: 0: 43902.8. Samples: 452514440. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 20:57:48,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:57:50,476][46990] Updated weights for policy 0, policy_version 27620 (0.0031) +[2024-06-10 20:57:53,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 452657152. Throughput: 0: 43722.3. Samples: 452775580. 
Policy #0 lag: (min: 1.0, avg: 11.2, max: 21.0) +[2024-06-10 20:57:53,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:57:53,585][46990] Updated weights for policy 0, policy_version 27630 (0.0037) +[2024-06-10 20:57:58,111][46990] Updated weights for policy 0, policy_version 27640 (0.0030) +[2024-06-10 20:57:58,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 452853760. Throughput: 0: 43821.3. Samples: 453043140. Policy #0 lag: (min: 1.0, avg: 11.2, max: 21.0) +[2024-06-10 20:57:58,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:58:00,857][46990] Updated weights for policy 0, policy_version 27650 (0.0031) +[2024-06-10 20:58:03,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43693.9, 300 sec: 43764.7). Total num frames: 453083136. Throughput: 0: 43562.5. Samples: 453160480. Policy #0 lag: (min: 1.0, avg: 11.2, max: 21.0) +[2024-06-10 20:58:03,240][46753] Avg episode reward: [(0, '0.270')] +[2024-06-10 20:58:05,503][46990] Updated weights for policy 0, policy_version 27660 (0.0041) +[2024-06-10 20:58:08,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 453312512. Throughput: 0: 43870.2. Samples: 453428080. Policy #0 lag: (min: 1.0, avg: 11.2, max: 21.0) +[2024-06-10 20:58:08,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 20:58:08,522][46990] Updated weights for policy 0, policy_version 27670 (0.0048) +[2024-06-10 20:58:12,815][46990] Updated weights for policy 0, policy_version 27680 (0.0034) +[2024-06-10 20:58:13,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 453525504. Throughput: 0: 43752.4. Samples: 453692340. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:58:13,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:58:16,263][46990] Updated weights for policy 0, policy_version 27690 (0.0028) +[2024-06-10 20:58:18,240][46753] Fps is (10 sec: 44236.2, 60 sec: 44236.7, 300 sec: 43820.3). Total num frames: 453754880. Throughput: 0: 44007.9. Samples: 453826180. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:58:18,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:58:20,426][46990] Updated weights for policy 0, policy_version 27700 (0.0031) +[2024-06-10 20:58:23,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 453967872. Throughput: 0: 43754.7. Samples: 454081820. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:58:23,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:58:23,252][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000027708_453967872.pth... +[2024-06-10 20:58:23,308][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000027067_443465728.pth +[2024-06-10 20:58:23,641][46990] Updated weights for policy 0, policy_version 27710 (0.0027) +[2024-06-10 20:58:28,225][46990] Updated weights for policy 0, policy_version 27720 (0.0035) +[2024-06-10 20:58:28,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 454164480. Throughput: 0: 43709.2. Samples: 454347400. Policy #0 lag: (min: 0.0, avg: 10.9, max: 21.0) +[2024-06-10 20:58:28,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:58:30,929][46990] Updated weights for policy 0, policy_version 27730 (0.0035) +[2024-06-10 20:58:33,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 454410240. 
Throughput: 0: 43502.0. Samples: 454472040. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 20:58:33,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 20:58:35,455][46990] Updated weights for policy 0, policy_version 27740 (0.0032) +[2024-06-10 20:58:36,359][46970] Signal inference workers to stop experience collection... (6650 times) +[2024-06-10 20:58:36,359][46970] Signal inference workers to resume experience collection... (6650 times) +[2024-06-10 20:58:36,408][46990] InferenceWorker_p0-w0: stopping experience collection (6650 times) +[2024-06-10 20:58:36,408][46990] InferenceWorker_p0-w0: resuming experience collection (6650 times) +[2024-06-10 20:58:38,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 454639616. Throughput: 0: 43636.8. Samples: 454739240. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 20:58:38,249][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 20:58:38,712][46990] Updated weights for policy 0, policy_version 27750 (0.0037) +[2024-06-10 20:58:42,791][46990] Updated weights for policy 0, policy_version 27760 (0.0025) +[2024-06-10 20:58:43,240][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 454836224. Throughput: 0: 43516.8. Samples: 455001400. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 20:58:43,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 20:58:46,291][46990] Updated weights for policy 0, policy_version 27770 (0.0038) +[2024-06-10 20:58:48,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.6, 300 sec: 43765.4). Total num frames: 455065600. Throughput: 0: 43763.6. Samples: 455129840. Policy #0 lag: (min: 1.0, avg: 10.9, max: 23.0) +[2024-06-10 20:58:48,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:58:50,100][46990] Updated weights for policy 0, policy_version 27780 (0.0039) +[2024-06-10 20:58:53,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.5, 300 sec: 43820.3). Total num frames: 455278592. Throughput: 0: 43724.3. Samples: 455395680. Policy #0 lag: (min: 1.0, avg: 10.9, max: 23.0) +[2024-06-10 20:58:53,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 20:58:53,507][46990] Updated weights for policy 0, policy_version 27790 (0.0032) +[2024-06-10 20:58:57,882][46990] Updated weights for policy 0, policy_version 27800 (0.0043) +[2024-06-10 20:58:58,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.6, 300 sec: 43709.8). Total num frames: 455475200. Throughput: 0: 43655.9. Samples: 455656860. Policy #0 lag: (min: 1.0, avg: 10.9, max: 23.0) +[2024-06-10 20:58:58,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 20:59:00,844][46990] Updated weights for policy 0, policy_version 27810 (0.0028) +[2024-06-10 20:59:03,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 455720960. Throughput: 0: 43512.0. Samples: 455784220. Policy #0 lag: (min: 1.0, avg: 10.9, max: 23.0) +[2024-06-10 20:59:03,240][46753] Avg episode reward: [(0, '0.275')] +[2024-06-10 20:59:05,301][46990] Updated weights for policy 0, policy_version 27820 (0.0039) +[2024-06-10 20:59:08,238][46990] Updated weights for policy 0, policy_version 27830 (0.0032) +[2024-06-10 20:59:08,239][46753] Fps is (10 sec: 49152.1, 60 sec: 44236.7, 300 sec: 43986.9). Total num frames: 455966720. Throughput: 0: 43879.0. Samples: 456056380. 
Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 20:59:08,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:59:12,597][46990] Updated weights for policy 0, policy_version 27840 (0.0032) +[2024-06-10 20:59:13,240][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.7, 300 sec: 43875.8). Total num frames: 456163328. Throughput: 0: 43763.1. Samples: 456316740. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 20:59:13,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:59:15,959][46990] Updated weights for policy 0, policy_version 27850 (0.0042) +[2024-06-10 20:59:18,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 456376320. Throughput: 0: 43768.9. Samples: 456441640. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 20:59:18,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:59:19,904][46990] Updated weights for policy 0, policy_version 27860 (0.0053) +[2024-06-10 20:59:23,240][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.6, 300 sec: 43875.8). Total num frames: 456605696. Throughput: 0: 43746.6. Samples: 456707840. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 20:59:23,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 20:59:23,525][46990] Updated weights for policy 0, policy_version 27870 (0.0037) +[2024-06-10 20:59:27,374][46990] Updated weights for policy 0, policy_version 27880 (0.0039) +[2024-06-10 20:59:28,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 456785920. Throughput: 0: 43849.8. Samples: 456974640. Policy #0 lag: (min: 0.0, avg: 9.8, max: 20.0) +[2024-06-10 20:59:28,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:59:31,016][46990] Updated weights for policy 0, policy_version 27890 (0.0035) +[2024-06-10 20:59:33,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 457031680. Throughput: 0: 43895.5. Samples: 457105140. Policy #0 lag: (min: 0.0, avg: 9.8, max: 20.0) +[2024-06-10 20:59:33,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 20:59:35,045][46990] Updated weights for policy 0, policy_version 27900 (0.0046) +[2024-06-10 20:59:38,244][46753] Fps is (10 sec: 47492.4, 60 sec: 43687.4, 300 sec: 43875.1). Total num frames: 457261056. Throughput: 0: 43846.4. Samples: 457368960. Policy #0 lag: (min: 0.0, avg: 9.8, max: 20.0) +[2024-06-10 20:59:38,245][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:59:38,288][46990] Updated weights for policy 0, policy_version 27910 (0.0035) +[2024-06-10 20:59:42,416][46990] Updated weights for policy 0, policy_version 27920 (0.0032) +[2024-06-10 20:59:43,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 457457664. Throughput: 0: 43840.6. Samples: 457629680. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 20:59:43,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 20:59:45,865][46990] Updated weights for policy 0, policy_version 27930 (0.0030) +[2024-06-10 20:59:48,239][46753] Fps is (10 sec: 42617.7, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 457687040. Throughput: 0: 43767.3. Samples: 457753740. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 20:59:48,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 20:59:49,973][46990] Updated weights for policy 0, policy_version 27940 (0.0040) +[2024-06-10 20:59:53,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 457900032. 
Throughput: 0: 43616.4. Samples: 458019120. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 20:59:53,241][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 20:59:53,689][46990] Updated weights for policy 0, policy_version 27950 (0.0041) +[2024-06-10 20:59:57,442][46990] Updated weights for policy 0, policy_version 27960 (0.0023) +[2024-06-10 20:59:58,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 458096640. Throughput: 0: 43750.4. Samples: 458285500. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 20:59:58,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 20:59:58,607][46970] Signal inference workers to stop experience collection... (6700 times) +[2024-06-10 20:59:58,639][46990] InferenceWorker_p0-w0: stopping experience collection (6700 times) +[2024-06-10 20:59:58,652][46970] Signal inference workers to resume experience collection... (6700 times) +[2024-06-10 20:59:58,657][46990] InferenceWorker_p0-w0: resuming experience collection (6700 times) +[2024-06-10 21:00:01,299][46990] Updated weights for policy 0, policy_version 27970 (0.0039) +[2024-06-10 21:00:03,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43963.8, 300 sec: 43765.4). Total num frames: 458358784. Throughput: 0: 43868.0. Samples: 458415700. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 21:00:03,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:00:05,153][46990] Updated weights for policy 0, policy_version 27980 (0.0033) +[2024-06-10 21:00:08,240][46753] Fps is (10 sec: 47512.8, 60 sec: 43417.6, 300 sec: 43820.3). Total num frames: 458571776. Throughput: 0: 43824.9. Samples: 458679960. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 21:00:08,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 21:00:08,510][46990] Updated weights for policy 0, policy_version 27990 (0.0026) +[2024-06-10 21:00:12,365][46990] Updated weights for policy 0, policy_version 28000 (0.0033) +[2024-06-10 21:00:13,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.6, 300 sec: 43820.2). Total num frames: 458784768. Throughput: 0: 43717.7. Samples: 458941940. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 21:00:13,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:00:13,255][46970] Saving new best policy, reward=0.294! +[2024-06-10 21:00:15,818][46990] Updated weights for policy 0, policy_version 28010 (0.0034) +[2024-06-10 21:00:18,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43963.9, 300 sec: 43764.7). Total num frames: 459014144. Throughput: 0: 43639.2. Samples: 459068900. Policy #0 lag: (min: 0.0, avg: 9.8, max: 23.0) +[2024-06-10 21:00:18,240][46753] Avg episode reward: [(0, '0.276')] +[2024-06-10 21:00:19,953][46990] Updated weights for policy 0, policy_version 28020 (0.0039) +[2024-06-10 21:00:23,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 459210752. Throughput: 0: 43609.3. Samples: 459331180. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:00:23,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:00:23,352][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028029_459227136.pth... 
+[2024-06-10 21:00:23,411][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000027388_448724992.pth +[2024-06-10 21:00:23,602][46990] Updated weights for policy 0, policy_version 28030 (0.0030) +[2024-06-10 21:00:27,443][46990] Updated weights for policy 0, policy_version 28040 (0.0032) +[2024-06-10 21:00:28,239][46753] Fps is (10 sec: 39321.5, 60 sec: 43690.7, 300 sec: 43709.9). Total num frames: 459407360. Throughput: 0: 43856.4. Samples: 459603220. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:00:28,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:00:31,075][46990] Updated weights for policy 0, policy_version 28050 (0.0035) +[2024-06-10 21:00:33,240][46753] Fps is (10 sec: 45872.9, 60 sec: 43963.4, 300 sec: 43764.7). Total num frames: 459669504. Throughput: 0: 43902.2. Samples: 459729360. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:00:33,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:00:35,041][46990] Updated weights for policy 0, policy_version 28060 (0.0030) +[2024-06-10 21:00:38,244][46753] Fps is (10 sec: 47492.3, 60 sec: 43690.7, 300 sec: 43819.6). Total num frames: 459882496. Throughput: 0: 44028.1. Samples: 460000580. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:00:38,244][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:00:38,525][46990] Updated weights for policy 0, policy_version 28070 (0.0029) +[2024-06-10 21:00:42,571][46990] Updated weights for policy 0, policy_version 28080 (0.0036) +[2024-06-10 21:00:43,240][46753] Fps is (10 sec: 42599.6, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 460095488. Throughput: 0: 43886.9. Samples: 460260420. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 21:00:43,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:00:45,770][46990] Updated weights for policy 0, policy_version 28090 (0.0032) +[2024-06-10 21:00:48,240][46753] Fps is (10 sec: 42617.0, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 460308480. Throughput: 0: 43841.3. Samples: 460388560. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 21:00:48,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:00:50,113][46990] Updated weights for policy 0, policy_version 28100 (0.0042) +[2024-06-10 21:00:53,240][46753] Fps is (10 sec: 42597.2, 60 sec: 43690.4, 300 sec: 43765.3). Total num frames: 460521472. Throughput: 0: 43720.5. Samples: 460647400. Policy #0 lag: (min: 0.0, avg: 10.9, max: 23.0) +[2024-06-10 21:00:53,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:00:53,255][46970] Saving new best policy, reward=0.298! +[2024-06-10 21:00:53,718][46990] Updated weights for policy 0, policy_version 28110 (0.0041) +[2024-06-10 21:00:57,727][46990] Updated weights for policy 0, policy_version 28120 (0.0029) +[2024-06-10 21:00:58,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43963.7, 300 sec: 43765.4). Total num frames: 460734464. Throughput: 0: 43881.5. Samples: 460916600. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 21:00:58,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:01:01,067][46990] Updated weights for policy 0, policy_version 28130 (0.0039) +[2024-06-10 21:01:03,239][46753] Fps is (10 sec: 45877.2, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 460980224. Throughput: 0: 43847.5. Samples: 461042040. 
Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 21:01:03,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:01:05,266][46990] Updated weights for policy 0, policy_version 28140 (0.0036) +[2024-06-10 21:01:08,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 461193216. Throughput: 0: 43924.5. Samples: 461307780. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 21:01:08,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:01:08,494][46990] Updated weights for policy 0, policy_version 28150 (0.0033) +[2024-06-10 21:01:12,822][46990] Updated weights for policy 0, policy_version 28160 (0.0030) +[2024-06-10 21:01:13,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43417.7, 300 sec: 43765.2). Total num frames: 461389824. Throughput: 0: 43733.4. Samples: 461571220. Policy #0 lag: (min: 0.0, avg: 8.4, max: 23.0) +[2024-06-10 21:01:13,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 21:01:13,446][46970] Signal inference workers to stop experience collection... (6750 times) +[2024-06-10 21:01:13,494][46970] Signal inference workers to resume experience collection... (6750 times) +[2024-06-10 21:01:13,506][46990] InferenceWorker_p0-w0: stopping experience collection (6750 times) +[2024-06-10 21:01:13,541][46990] InferenceWorker_p0-w0: resuming experience collection (6750 times) +[2024-06-10 21:01:15,806][46990] Updated weights for policy 0, policy_version 28170 (0.0034) +[2024-06-10 21:01:18,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43417.5, 300 sec: 43709.3). Total num frames: 461619200. Throughput: 0: 43753.2. Samples: 461698240. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 21:01:18,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:01:20,429][46990] Updated weights for policy 0, policy_version 28180 (0.0043) +[2024-06-10 21:01:23,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 461832192. Throughput: 0: 43526.6. Samples: 461959080. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 21:01:23,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:01:23,554][46990] Updated weights for policy 0, policy_version 28190 (0.0028) +[2024-06-10 21:01:27,652][46990] Updated weights for policy 0, policy_version 28200 (0.0028) +[2024-06-10 21:01:28,240][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.6, 300 sec: 43764.7). Total num frames: 462045184. Throughput: 0: 43699.6. Samples: 462226900. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 21:01:28,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:01:30,979][46990] Updated weights for policy 0, policy_version 28210 (0.0032) +[2024-06-10 21:01:33,240][46753] Fps is (10 sec: 45872.7, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 462290944. Throughput: 0: 43614.3. Samples: 462351220. Policy #0 lag: (min: 0.0, avg: 11.5, max: 22.0) +[2024-06-10 21:01:33,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:01:35,252][46990] Updated weights for policy 0, policy_version 28220 (0.0038) +[2024-06-10 21:01:38,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43420.9, 300 sec: 43709.2). Total num frames: 462487552. Throughput: 0: 43787.6. Samples: 462617820. 
Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 21:01:38,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:01:38,565][46990] Updated weights for policy 0, policy_version 28230 (0.0047) +[2024-06-10 21:01:42,868][46990] Updated weights for policy 0, policy_version 28240 (0.0033) +[2024-06-10 21:01:43,239][46753] Fps is (10 sec: 39323.6, 60 sec: 43144.7, 300 sec: 43709.8). Total num frames: 462684160. Throughput: 0: 43627.6. Samples: 462879840. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 21:01:43,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:01:45,838][46990] Updated weights for policy 0, policy_version 28250 (0.0027) +[2024-06-10 21:01:48,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 462929920. Throughput: 0: 43626.2. Samples: 463005220. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 21:01:48,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:01:50,422][46990] Updated weights for policy 0, policy_version 28260 (0.0033) +[2024-06-10 21:01:53,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43691.0, 300 sec: 43764.7). Total num frames: 463142912. Throughput: 0: 43493.7. Samples: 463265000. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0) +[2024-06-10 21:01:53,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:01:53,628][46990] Updated weights for policy 0, policy_version 28270 (0.0034) +[2024-06-10 21:01:57,907][46990] Updated weights for policy 0, policy_version 28280 (0.0026) +[2024-06-10 21:01:58,240][46753] Fps is (10 sec: 40958.2, 60 sec: 43417.3, 300 sec: 43654.2). Total num frames: 463339520. Throughput: 0: 43703.5. Samples: 463537900. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 21:01:58,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:02:00,939][46990] Updated weights for policy 0, policy_version 28290 (0.0029) +[2024-06-10 21:02:03,244][46753] Fps is (10 sec: 45854.6, 60 sec: 43687.4, 300 sec: 43708.5). Total num frames: 463601664. Throughput: 0: 43661.1. Samples: 463663180. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 21:02:03,245][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:02:05,559][46990] Updated weights for policy 0, policy_version 28300 (0.0039) +[2024-06-10 21:02:08,131][46990] Updated weights for policy 0, policy_version 28310 (0.0029) +[2024-06-10 21:02:08,239][46753] Fps is (10 sec: 49154.4, 60 sec: 43963.7, 300 sec: 43820.3). Total num frames: 463831040. Throughput: 0: 43748.8. Samples: 463927780. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 21:02:08,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:02:13,058][46990] Updated weights for policy 0, policy_version 28320 (0.0040) +[2024-06-10 21:02:13,239][46753] Fps is (10 sec: 39339.0, 60 sec: 43417.5, 300 sec: 43709.2). Total num frames: 463994880. Throughput: 0: 43757.8. Samples: 464196000. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:02:13,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:02:13,655][46970] Signal inference workers to stop experience collection... (6800 times) +[2024-06-10 21:02:13,711][46970] Signal inference workers to resume experience collection... 
(6800 times) +[2024-06-10 21:02:13,712][46990] InferenceWorker_p0-w0: stopping experience collection (6800 times) +[2024-06-10 21:02:13,742][46990] InferenceWorker_p0-w0: resuming experience collection (6800 times) +[2024-06-10 21:02:15,933][46990] Updated weights for policy 0, policy_version 28330 (0.0053) +[2024-06-10 21:02:18,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 464240640. Throughput: 0: 43597.8. Samples: 464313100. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:02:18,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:02:20,804][46990] Updated weights for policy 0, policy_version 28340 (0.0049) +[2024-06-10 21:02:23,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 464453632. Throughput: 0: 43569.3. Samples: 464578440. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:02:23,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:02:23,377][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028350_464486400.pth... +[2024-06-10 21:02:23,383][46990] Updated weights for policy 0, policy_version 28350 (0.0033) +[2024-06-10 21:02:23,428][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000027708_453967872.pth +[2024-06-10 21:02:28,105][46990] Updated weights for policy 0, policy_version 28360 (0.0031) +[2024-06-10 21:02:28,244][46753] Fps is (10 sec: 40943.2, 60 sec: 43414.7, 300 sec: 43653.0). Total num frames: 464650240. Throughput: 0: 43761.7. Samples: 464849300. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:02:28,244][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 21:02:30,757][46990] Updated weights for policy 0, policy_version 28370 (0.0034) +[2024-06-10 21:02:33,240][46753] Fps is (10 sec: 45874.4, 60 sec: 43690.9, 300 sec: 43709.2). Total num frames: 464912384. Throughput: 0: 43625.2. Samples: 464968360. Policy #0 lag: (min: 1.0, avg: 11.3, max: 21.0) +[2024-06-10 21:02:33,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:02:35,725][46990] Updated weights for policy 0, policy_version 28380 (0.0036) +[2024-06-10 21:02:38,244][46753] Fps is (10 sec: 49150.2, 60 sec: 44233.4, 300 sec: 43819.6). Total num frames: 465141760. Throughput: 0: 43825.4. Samples: 465237340. Policy #0 lag: (min: 1.0, avg: 11.3, max: 21.0) +[2024-06-10 21:02:38,245][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:02:38,245][46990] Updated weights for policy 0, policy_version 28390 (0.0037) +[2024-06-10 21:02:43,088][46990] Updated weights for policy 0, policy_version 28400 (0.0027) +[2024-06-10 21:02:43,243][46753] Fps is (10 sec: 39309.0, 60 sec: 43688.2, 300 sec: 43653.1). Total num frames: 465305600. Throughput: 0: 43684.3. Samples: 465503820. Policy #0 lag: (min: 1.0, avg: 11.3, max: 21.0) +[2024-06-10 21:02:43,243][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:02:45,717][46990] Updated weights for policy 0, policy_version 28410 (0.0037) +[2024-06-10 21:02:48,239][46753] Fps is (10 sec: 42617.3, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 465567744. Throughput: 0: 43517.2. Samples: 465621260. 
Policy #0 lag: (min: 1.0, avg: 11.3, max: 21.0) +[2024-06-10 21:02:48,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:02:50,842][46990] Updated weights for policy 0, policy_version 28420 (0.0041) +[2024-06-10 21:02:53,134][46990] Updated weights for policy 0, policy_version 28430 (0.0026) +[2024-06-10 21:02:53,239][46753] Fps is (10 sec: 49168.7, 60 sec: 44236.9, 300 sec: 43875.8). Total num frames: 465797120. Throughput: 0: 43776.0. Samples: 465897700. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:02:53,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:02:58,239][46753] Fps is (10 sec: 37683.3, 60 sec: 43417.9, 300 sec: 43598.1). Total num frames: 465944576. Throughput: 0: 43700.9. Samples: 466162540. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:02:58,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 21:02:58,517][46990] Updated weights for policy 0, policy_version 28440 (0.0027) +[2024-06-10 21:03:00,653][46990] Updated weights for policy 0, policy_version 28450 (0.0037) +[2024-06-10 21:03:03,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43693.9, 300 sec: 43764.7). Total num frames: 466223104. Throughput: 0: 43598.6. Samples: 466275040. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:03:03,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 21:03:06,021][46990] Updated weights for policy 0, policy_version 28460 (0.0039) +[2024-06-10 21:03:06,798][46970] Signal inference workers to stop experience collection... (6850 times) +[2024-06-10 21:03:06,853][46970] Signal inference workers to resume experience collection... (6850 times) +[2024-06-10 21:03:06,853][46990] InferenceWorker_p0-w0: stopping experience collection (6850 times) +[2024-06-10 21:03:06,866][46990] InferenceWorker_p0-w0: resuming experience collection (6850 times) +[2024-06-10 21:03:08,235][46990] Updated weights for policy 0, policy_version 28470 (0.0046) +[2024-06-10 21:03:08,239][46753] Fps is (10 sec: 50790.8, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 466452480. Throughput: 0: 43708.5. Samples: 466545320. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 21:03:08,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:03:13,194][46990] Updated weights for policy 0, policy_version 28480 (0.0037) +[2024-06-10 21:03:13,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 466616320. Throughput: 0: 43664.4. Samples: 466814020. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 21:03:13,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:03:15,628][46990] Updated weights for policy 0, policy_version 28490 (0.0042) +[2024-06-10 21:03:18,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43963.7, 300 sec: 43764.7). Total num frames: 466878464. Throughput: 0: 43590.7. Samples: 466929940. Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 21:03:18,249][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:03:20,983][46990] Updated weights for policy 0, policy_version 28500 (0.0021) +[2024-06-10 21:03:23,146][46990] Updated weights for policy 0, policy_version 28510 (0.0043) +[2024-06-10 21:03:23,239][46753] Fps is (10 sec: 49152.3, 60 sec: 44236.8, 300 sec: 43875.8). Total num frames: 467107840. Throughput: 0: 43790.2. Samples: 467207700. 
Policy #0 lag: (min: 0.0, avg: 11.1, max: 23.0) +[2024-06-10 21:03:23,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:03:28,239][46753] Fps is (10 sec: 36045.2, 60 sec: 43147.5, 300 sec: 43487.0). Total num frames: 467238912. Throughput: 0: 43701.9. Samples: 467470260. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 21:03:28,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:03:28,538][46990] Updated weights for policy 0, policy_version 28520 (0.0040) +[2024-06-10 21:03:30,617][46990] Updated weights for policy 0, policy_version 28530 (0.0034) +[2024-06-10 21:03:33,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 467533824. Throughput: 0: 43759.1. Samples: 467590420. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 21:03:33,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:03:35,768][46990] Updated weights for policy 0, policy_version 28540 (0.0031) +[2024-06-10 21:03:38,238][46990] Updated weights for policy 0, policy_version 28550 (0.0029) +[2024-06-10 21:03:38,239][46753] Fps is (10 sec: 52428.4, 60 sec: 43693.9, 300 sec: 43820.3). Total num frames: 467763200. Throughput: 0: 43561.7. Samples: 467857980. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 21:03:38,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:03:42,917][46990] Updated weights for policy 0, policy_version 28560 (0.0036) +[2024-06-10 21:03:43,244][46753] Fps is (10 sec: 39304.2, 60 sec: 43689.8, 300 sec: 43597.4). Total num frames: 467927040. Throughput: 0: 43494.7. Samples: 468120000. Policy #0 lag: (min: 0.0, avg: 11.3, max: 21.0) +[2024-06-10 21:03:43,245][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:03:45,841][46990] Updated weights for policy 0, policy_version 28570 (0.0024) +[2024-06-10 21:03:48,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43690.6, 300 sec: 43764.7). Total num frames: 468189184. Throughput: 0: 43770.1. Samples: 468244700. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 21:03:48,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:03:50,537][46990] Updated weights for policy 0, policy_version 28580 (0.0040) +[2024-06-10 21:03:53,095][46990] Updated weights for policy 0, policy_version 28590 (0.0028) +[2024-06-10 21:03:53,239][46753] Fps is (10 sec: 49174.0, 60 sec: 43690.6, 300 sec: 43875.8). Total num frames: 468418560. Throughput: 0: 43857.2. Samples: 468518900. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 21:03:53,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:03:58,244][46753] Fps is (10 sec: 37666.9, 60 sec: 43687.4, 300 sec: 43541.9). Total num frames: 468566016. Throughput: 0: 43661.0. Samples: 468778960. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 21:03:58,245][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:03:58,433][46990] Updated weights for policy 0, policy_version 28600 (0.0027) +[2024-06-10 21:04:00,764][46990] Updated weights for policy 0, policy_version 28610 (0.0053) +[2024-06-10 21:04:03,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 468860928. Throughput: 0: 43756.1. Samples: 468898960. Policy #0 lag: (min: 0.0, avg: 11.5, max: 24.0) +[2024-06-10 21:04:03,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:04:05,821][46990] Updated weights for policy 0, policy_version 28620 (0.0032) +[2024-06-10 21:04:08,243][46753] Fps is (10 sec: 49154.6, 60 sec: 43414.7, 300 sec: 43708.6). Total num frames: 469057536. 
Throughput: 0: 43529.5. Samples: 469166700. Policy #0 lag: (min: 0.0, avg: 7.3, max: 21.0) +[2024-06-10 21:04:08,244][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 21:04:08,297][46990] Updated weights for policy 0, policy_version 28630 (0.0043) +[2024-06-10 21:04:13,069][46990] Updated weights for policy 0, policy_version 28640 (0.0021) +[2024-06-10 21:04:13,239][46753] Fps is (10 sec: 37683.4, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 469237760. Throughput: 0: 43591.1. Samples: 469431860. Policy #0 lag: (min: 0.0, avg: 7.3, max: 21.0) +[2024-06-10 21:04:13,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:04:14,013][46970] Signal inference workers to stop experience collection... (6900 times) +[2024-06-10 21:04:14,050][46990] InferenceWorker_p0-w0: stopping experience collection (6900 times) +[2024-06-10 21:04:14,067][46970] Signal inference workers to resume experience collection... (6900 times) +[2024-06-10 21:04:14,069][46990] InferenceWorker_p0-w0: resuming experience collection (6900 times) +[2024-06-10 21:04:15,920][46990] Updated weights for policy 0, policy_version 28650 (0.0040) +[2024-06-10 21:04:18,239][46753] Fps is (10 sec: 44254.6, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 469499904. Throughput: 0: 43662.4. Samples: 469555220. Policy #0 lag: (min: 0.0, avg: 7.3, max: 21.0) +[2024-06-10 21:04:18,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:04:20,655][46990] Updated weights for policy 0, policy_version 28660 (0.0032) +[2024-06-10 21:04:23,240][46753] Fps is (10 sec: 47513.1, 60 sec: 43417.5, 300 sec: 43820.3). Total num frames: 469712896. Throughput: 0: 43647.1. Samples: 469822100. Policy #0 lag: (min: 0.0, avg: 7.3, max: 21.0) +[2024-06-10 21:04:23,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:04:23,325][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028670_469729280.pth... +[2024-06-10 21:04:23,338][46990] Updated weights for policy 0, policy_version 28670 (0.0030) +[2024-06-10 21:04:23,376][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028029_459227136.pth +[2024-06-10 21:04:28,240][46753] Fps is (10 sec: 37682.4, 60 sec: 43963.6, 300 sec: 43542.6). Total num frames: 469876736. Throughput: 0: 43859.8. Samples: 470093500. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 21:04:28,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:04:28,376][46990] Updated weights for policy 0, policy_version 28680 (0.0032) +[2024-06-10 21:04:30,831][46990] Updated weights for policy 0, policy_version 28690 (0.0027) +[2024-06-10 21:04:33,240][46753] Fps is (10 sec: 44235.1, 60 sec: 43690.4, 300 sec: 43709.8). Total num frames: 470155264. Throughput: 0: 43731.3. Samples: 470212620. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 21:04:33,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:04:35,659][46990] Updated weights for policy 0, policy_version 28700 (0.0034) +[2024-06-10 21:04:38,240][46753] Fps is (10 sec: 47513.8, 60 sec: 43144.5, 300 sec: 43709.2). Total num frames: 470351872. Throughput: 0: 43387.1. Samples: 470471320. 
Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0) +[2024-06-10 21:04:38,240][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 21:04:38,535][46990] Updated weights for policy 0, policy_version 28710 (0.0028) +[2024-06-10 21:04:43,112][46990] Updated weights for policy 0, policy_version 28720 (0.0041) +[2024-06-10 21:04:43,240][46753] Fps is (10 sec: 39322.9, 60 sec: 43693.9, 300 sec: 43598.1). Total num frames: 470548480. Throughput: 0: 43543.3. Samples: 470738220. Policy #0 lag: (min: 0.0, avg: 7.7, max: 20.0) +[2024-06-10 21:04:43,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:04:45,922][46990] Updated weights for policy 0, policy_version 28730 (0.0045) +[2024-06-10 21:04:48,240][46753] Fps is (10 sec: 45875.5, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 470810624. Throughput: 0: 43601.3. Samples: 470861020. Policy #0 lag: (min: 0.0, avg: 7.7, max: 20.0) +[2024-06-10 21:04:48,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:04:50,796][46990] Updated weights for policy 0, policy_version 28740 (0.0040) +[2024-06-10 21:04:53,239][46753] Fps is (10 sec: 47514.2, 60 sec: 43417.6, 300 sec: 43820.2). Total num frames: 471023616. Throughput: 0: 43570.5. Samples: 471127200. Policy #0 lag: (min: 0.0, avg: 7.7, max: 20.0) +[2024-06-10 21:04:53,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:04:53,451][46990] Updated weights for policy 0, policy_version 28750 (0.0034) +[2024-06-10 21:04:57,906][46990] Updated weights for policy 0, policy_version 28760 (0.0038) +[2024-06-10 21:04:58,239][46753] Fps is (10 sec: 39322.2, 60 sec: 43967.1, 300 sec: 43542.6). Total num frames: 471203840. Throughput: 0: 43583.6. Samples: 471393120. Policy #0 lag: (min: 0.0, avg: 7.7, max: 20.0) +[2024-06-10 21:04:58,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:05:00,833][46990] Updated weights for policy 0, policy_version 28770 (0.0032) +[2024-06-10 21:05:03,244][46753] Fps is (10 sec: 44217.1, 60 sec: 43414.4, 300 sec: 43708.5). Total num frames: 471465984. Throughput: 0: 43647.2. Samples: 471519540. Policy #0 lag: (min: 1.0, avg: 11.0, max: 21.0) +[2024-06-10 21:05:03,245][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 21:05:05,333][46990] Updated weights for policy 0, policy_version 28780 (0.0041) +[2024-06-10 21:05:08,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43693.6, 300 sec: 43709.2). Total num frames: 471678976. Throughput: 0: 43729.4. Samples: 471789920. Policy #0 lag: (min: 1.0, avg: 11.0, max: 21.0) +[2024-06-10 21:05:08,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:05:08,343][46990] Updated weights for policy 0, policy_version 28790 (0.0038) +[2024-06-10 21:05:12,908][46990] Updated weights for policy 0, policy_version 28800 (0.0033) +[2024-06-10 21:05:13,240][46753] Fps is (10 sec: 40977.9, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 471875584. Throughput: 0: 43568.0. Samples: 472054060. Policy #0 lag: (min: 1.0, avg: 11.0, max: 21.0) +[2024-06-10 21:05:13,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:05:15,906][46990] Updated weights for policy 0, policy_version 28810 (0.0032) +[2024-06-10 21:05:18,240][46753] Fps is (10 sec: 44236.1, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 472121344. Throughput: 0: 43715.9. Samples: 472179820. 
Policy #0 lag: (min: 1.0, avg: 11.0, max: 21.0) +[2024-06-10 21:05:18,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:05:19,999][46990] Updated weights for policy 0, policy_version 28820 (0.0042) +[2024-06-10 21:05:22,869][46970] Signal inference workers to stop experience collection... (6950 times) +[2024-06-10 21:05:22,915][46990] InferenceWorker_p0-w0: stopping experience collection (6950 times) +[2024-06-10 21:05:22,920][46970] Signal inference workers to resume experience collection... (6950 times) +[2024-06-10 21:05:22,927][46990] InferenceWorker_p0-w0: resuming experience collection (6950 times) +[2024-06-10 21:05:23,063][46990] Updated weights for policy 0, policy_version 28830 (0.0033) +[2024-06-10 21:05:23,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43963.8, 300 sec: 43875.8). Total num frames: 472350720. Throughput: 0: 43953.8. Samples: 472449240. Policy #0 lag: (min: 0.0, avg: 9.2, max: 23.0) +[2024-06-10 21:05:23,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:05:27,333][46990] Updated weights for policy 0, policy_version 28840 (0.0049) +[2024-06-10 21:05:28,239][46753] Fps is (10 sec: 40961.0, 60 sec: 44237.0, 300 sec: 43598.2). Total num frames: 472530944. Throughput: 0: 43865.1. Samples: 472712140. Policy #0 lag: (min: 0.0, avg: 9.2, max: 23.0) +[2024-06-10 21:05:28,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:05:30,780][46990] Updated weights for policy 0, policy_version 28850 (0.0045) +[2024-06-10 21:05:33,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43691.0, 300 sec: 43709.8). Total num frames: 472776704. Throughput: 0: 43900.1. Samples: 472836520. Policy #0 lag: (min: 0.0, avg: 9.2, max: 23.0) +[2024-06-10 21:05:33,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:05:34,765][46990] Updated weights for policy 0, policy_version 28860 (0.0044) +[2024-06-10 21:05:38,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.9, 300 sec: 43709.2). Total num frames: 472989696. Throughput: 0: 43856.5. Samples: 473100740. Policy #0 lag: (min: 0.0, avg: 9.2, max: 23.0) +[2024-06-10 21:05:38,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:05:38,495][46990] Updated weights for policy 0, policy_version 28870 (0.0039) +[2024-06-10 21:05:42,356][46990] Updated weights for policy 0, policy_version 28880 (0.0035) +[2024-06-10 21:05:43,244][46753] Fps is (10 sec: 40941.4, 60 sec: 43960.5, 300 sec: 43653.0). Total num frames: 473186304. Throughput: 0: 43629.7. Samples: 473356660. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 21:05:43,251][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:05:45,830][46990] Updated weights for policy 0, policy_version 28890 (0.0041) +[2024-06-10 21:05:48,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 473415680. Throughput: 0: 43606.5. Samples: 473481640. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 21:05:48,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:05:49,874][46990] Updated weights for policy 0, policy_version 28900 (0.0041) +[2024-06-10 21:05:53,239][46753] Fps is (10 sec: 45896.2, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 473645056. Throughput: 0: 43617.8. Samples: 473752720. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 21:05:53,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:05:53,275][46990] Updated weights for policy 0, policy_version 28910 (0.0043) +[2024-06-10 21:05:57,353][46990] Updated weights for policy 0, policy_version 28920 (0.0037) +[2024-06-10 21:05:58,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 473841664. Throughput: 0: 43474.3. Samples: 474010400. Policy #0 lag: (min: 0.0, avg: 10.3, max: 20.0) +[2024-06-10 21:05:58,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:06:00,802][46990] Updated weights for policy 0, policy_version 28930 (0.0023) +[2024-06-10 21:06:03,240][46753] Fps is (10 sec: 44236.2, 60 sec: 43693.9, 300 sec: 43709.2). Total num frames: 474087424. Throughput: 0: 43586.3. Samples: 474141200. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:06:03,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:06:04,874][46990] Updated weights for policy 0, policy_version 28940 (0.0032) +[2024-06-10 21:06:08,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43417.6, 300 sec: 43709.2). Total num frames: 474284032. Throughput: 0: 43681.9. Samples: 474414920. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:06:08,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:06:08,410][46990] Updated weights for policy 0, policy_version 28950 (0.0031) +[2024-06-10 21:06:12,391][46990] Updated weights for policy 0, policy_version 28960 (0.0039) +[2024-06-10 21:06:13,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43690.8, 300 sec: 43653.7). Total num frames: 474497024. Throughput: 0: 43429.7. Samples: 474666480. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:06:13,248][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:06:15,952][46990] Updated weights for policy 0, policy_version 28970 (0.0042) +[2024-06-10 21:06:18,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43417.6, 300 sec: 43709.1). Total num frames: 474726400. Throughput: 0: 43493.6. Samples: 474793740. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:06:18,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:06:19,862][46990] Updated weights for policy 0, policy_version 28980 (0.0040) +[2024-06-10 21:06:23,240][46753] Fps is (10 sec: 45874.4, 60 sec: 43417.5, 300 sec: 43764.7). Total num frames: 474955776. Throughput: 0: 43656.7. Samples: 475065300. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 21:06:23,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:06:23,259][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028989_474955776.pth... +[2024-06-10 21:06:23,312][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028350_464486400.pth +[2024-06-10 21:06:23,485][46990] Updated weights for policy 0, policy_version 28990 (0.0027) +[2024-06-10 21:06:27,258][46990] Updated weights for policy 0, policy_version 29000 (0.0025) +[2024-06-10 21:06:28,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.6, 300 sec: 43653.7). Total num frames: 475168768. Throughput: 0: 43773.7. Samples: 475326280. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 21:06:28,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:06:29,633][46970] Signal inference workers to stop experience collection... (7000 times) +[2024-06-10 21:06:29,634][46970] Signal inference workers to resume experience collection... 
(7000 times) +[2024-06-10 21:06:29,650][46990] InferenceWorker_p0-w0: stopping experience collection (7000 times) +[2024-06-10 21:06:29,650][46990] InferenceWorker_p0-w0: resuming experience collection (7000 times) +[2024-06-10 21:06:30,880][46990] Updated weights for policy 0, policy_version 29010 (0.0038) +[2024-06-10 21:06:33,239][46753] Fps is (10 sec: 42599.4, 60 sec: 43417.7, 300 sec: 43709.2). Total num frames: 475381760. Throughput: 0: 43801.0. Samples: 475452680. Policy #0 lag: (min: 0.0, avg: 10.3, max: 22.0) +[2024-06-10 21:06:33,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:06:34,819][46990] Updated weights for policy 0, policy_version 29020 (0.0046) +[2024-06-10 21:06:38,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43417.6, 300 sec: 43764.7). Total num frames: 475594752. Throughput: 0: 43723.2. Samples: 475720260. Policy #0 lag: (min: 0.0, avg: 11.1, max: 24.0) +[2024-06-10 21:06:38,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:06:38,464][46990] Updated weights for policy 0, policy_version 29030 (0.0032) +[2024-06-10 21:06:42,157][46990] Updated weights for policy 0, policy_version 29040 (0.0050) +[2024-06-10 21:06:43,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43967.0, 300 sec: 43709.2). Total num frames: 475824128. Throughput: 0: 43863.5. Samples: 475984260. Policy #0 lag: (min: 0.0, avg: 11.1, max: 24.0) +[2024-06-10 21:06:43,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 21:06:45,923][46990] Updated weights for policy 0, policy_version 29050 (0.0030) +[2024-06-10 21:06:48,239][46753] Fps is (10 sec: 42597.5, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 476020736. Throughput: 0: 43695.1. Samples: 476107480. Policy #0 lag: (min: 0.0, avg: 11.1, max: 24.0) +[2024-06-10 21:06:48,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:06:49,682][46990] Updated weights for policy 0, policy_version 29060 (0.0035) +[2024-06-10 21:06:53,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43690.7, 300 sec: 43820.3). Total num frames: 476266496. Throughput: 0: 43531.1. Samples: 476373820. Policy #0 lag: (min: 0.0, avg: 11.1, max: 24.0) +[2024-06-10 21:06:53,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:06:53,289][46990] Updated weights for policy 0, policy_version 29070 (0.0051) +[2024-06-10 21:06:57,190][46990] Updated weights for policy 0, policy_version 29080 (0.0030) +[2024-06-10 21:06:58,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43963.8, 300 sec: 43654.3). Total num frames: 476479488. Throughput: 0: 43833.8. Samples: 476639000. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:06:58,248][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:07:00,785][46990] Updated weights for policy 0, policy_version 29090 (0.0023) +[2024-06-10 21:07:03,240][46753] Fps is (10 sec: 42597.8, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 476692480. Throughput: 0: 43855.6. Samples: 476767240. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:07:03,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:07:04,319][46990] Updated weights for policy 0, policy_version 29100 (0.0031) +[2024-06-10 21:07:08,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43690.5, 300 sec: 43764.7). Total num frames: 476905472. Throughput: 0: 43764.5. Samples: 477034700. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:07:08,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:07:08,619][46990] Updated weights for policy 0, policy_version 29110 (0.0033) +[2024-06-10 21:07:12,239][46990] Updated weights for policy 0, policy_version 29120 (0.0038) +[2024-06-10 21:07:13,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 477134848. Throughput: 0: 43732.8. Samples: 477294260. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:07:13,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:07:15,958][46990] Updated weights for policy 0, policy_version 29130 (0.0035) +[2024-06-10 21:07:18,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 477347840. Throughput: 0: 43768.0. Samples: 477422240. Policy #0 lag: (min: 1.0, avg: 10.0, max: 23.0) +[2024-06-10 21:07:18,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:07:19,582][46990] Updated weights for policy 0, policy_version 29140 (0.0047) +[2024-06-10 21:07:23,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.7, 300 sec: 43765.3). Total num frames: 477560832. Throughput: 0: 43656.3. Samples: 477684800. Policy #0 lag: (min: 1.0, avg: 10.0, max: 23.0) +[2024-06-10 21:07:23,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:07:23,614][46990] Updated weights for policy 0, policy_version 29150 (0.0036) +[2024-06-10 21:07:27,287][46990] Updated weights for policy 0, policy_version 29160 (0.0035) +[2024-06-10 21:07:28,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 477773824. Throughput: 0: 43526.3. Samples: 477942940. Policy #0 lag: (min: 1.0, avg: 10.0, max: 23.0) +[2024-06-10 21:07:28,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:07:31,031][46990] Updated weights for policy 0, policy_version 29170 (0.0027) +[2024-06-10 21:07:33,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43417.5, 300 sec: 43543.2). Total num frames: 477986816. Throughput: 0: 43718.7. Samples: 478074820. Policy #0 lag: (min: 1.0, avg: 10.0, max: 23.0) +[2024-06-10 21:07:33,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:07:34,724][46990] Updated weights for policy 0, policy_version 29180 (0.0041) +[2024-06-10 21:07:38,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.5, 300 sec: 43765.2). Total num frames: 478216192. Throughput: 0: 43725.2. Samples: 478341460. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:07:38,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:07:38,668][46990] Updated weights for policy 0, policy_version 29190 (0.0040) +[2024-06-10 21:07:42,269][46990] Updated weights for policy 0, policy_version 29200 (0.0042) +[2024-06-10 21:07:43,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 478445568. Throughput: 0: 43550.6. Samples: 478598780. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:07:43,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:07:46,347][46990] Updated weights for policy 0, policy_version 29210 (0.0034) +[2024-06-10 21:07:48,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.8, 300 sec: 43542.6). Total num frames: 478642176. Throughput: 0: 43682.8. Samples: 478732960. 
Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:07:48,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:07:49,618][46990] Updated weights for policy 0, policy_version 29220 (0.0045) +[2024-06-10 21:07:53,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43417.5, 300 sec: 43820.2). Total num frames: 478871552. Throughput: 0: 43624.9. Samples: 478997820. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:07:53,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:07:53,785][46990] Updated weights for policy 0, policy_version 29230 (0.0047) +[2024-06-10 21:07:56,949][46990] Updated weights for policy 0, policy_version 29240 (0.0036) +[2024-06-10 21:07:58,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 479100928. Throughput: 0: 43626.8. Samples: 479257460. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:07:58,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:08:01,125][46990] Updated weights for policy 0, policy_version 29250 (0.0038) +[2024-06-10 21:08:03,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 479313920. Throughput: 0: 43830.6. Samples: 479394620. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:08:03,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:08:04,420][46990] Updated weights for policy 0, policy_version 29260 (0.0032) +[2024-06-10 21:08:08,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43690.8, 300 sec: 43764.7). Total num frames: 479526912. Throughput: 0: 43837.4. Samples: 479657480. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:08:08,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:08:08,627][46990] Updated weights for policy 0, policy_version 29270 (0.0043) +[2024-06-10 21:08:11,928][46970] Signal inference workers to stop experience collection... (7050 times) +[2024-06-10 21:08:11,929][46970] Signal inference workers to resume experience collection... (7050 times) +[2024-06-10 21:08:11,941][46990] InferenceWorker_p0-w0: stopping experience collection (7050 times) +[2024-06-10 21:08:11,941][46990] InferenceWorker_p0-w0: resuming experience collection (7050 times) +[2024-06-10 21:08:12,074][46990] Updated weights for policy 0, policy_version 29280 (0.0032) +[2024-06-10 21:08:13,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43963.8, 300 sec: 43709.2). Total num frames: 479772672. Throughput: 0: 43818.6. Samples: 479914780. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:08:13,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:08:16,438][46990] Updated weights for policy 0, policy_version 29290 (0.0027) +[2024-06-10 21:08:18,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 479952896. Throughput: 0: 43891.6. Samples: 480049940. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 21:08:18,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:08:19,656][46990] Updated weights for policy 0, policy_version 29300 (0.0050) +[2024-06-10 21:08:23,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43690.7, 300 sec: 43875.8). Total num frames: 480182272. Throughput: 0: 43757.9. Samples: 480310560. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 21:08:23,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:08:23,248][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000029308_480182272.pth... 
+[2024-06-10 21:08:23,302][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028670_469729280.pth +[2024-06-10 21:08:23,740][46990] Updated weights for policy 0, policy_version 29310 (0.0034) +[2024-06-10 21:08:26,812][46990] Updated weights for policy 0, policy_version 29320 (0.0034) +[2024-06-10 21:08:28,239][46753] Fps is (10 sec: 47513.7, 60 sec: 44236.8, 300 sec: 43709.2). Total num frames: 480428032. Throughput: 0: 43841.4. Samples: 480571640. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 21:08:28,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:08:31,004][46990] Updated weights for policy 0, policy_version 29330 (0.0033) +[2024-06-10 21:08:33,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 480624640. Throughput: 0: 43990.9. Samples: 480712560. Policy #0 lag: (min: 0.0, avg: 11.4, max: 23.0) +[2024-06-10 21:08:33,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:08:34,338][46990] Updated weights for policy 0, policy_version 29340 (0.0032) +[2024-06-10 21:08:38,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.8, 300 sec: 43765.4). Total num frames: 480837632. Throughput: 0: 43967.3. Samples: 480976340. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:08:38,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:08:38,527][46990] Updated weights for policy 0, policy_version 29350 (0.0039) +[2024-06-10 21:08:41,766][46990] Updated weights for policy 0, policy_version 29360 (0.0028) +[2024-06-10 21:08:43,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 481083392. Throughput: 0: 43812.5. Samples: 481229020. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:08:43,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:08:46,151][46990] Updated weights for policy 0, policy_version 29370 (0.0037) +[2024-06-10 21:08:48,240][46753] Fps is (10 sec: 44235.8, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 481280000. Throughput: 0: 43911.0. Samples: 481370620. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:08:48,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:08:49,492][46990] Updated weights for policy 0, policy_version 29380 (0.0038) +[2024-06-10 21:08:53,240][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.6, 300 sec: 43820.9). Total num frames: 481492992. Throughput: 0: 43865.2. Samples: 481631420. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 21:08:53,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:08:53,733][46990] Updated weights for policy 0, policy_version 29390 (0.0040) +[2024-06-10 21:08:56,908][46990] Updated weights for policy 0, policy_version 29400 (0.0047) +[2024-06-10 21:08:58,240][46753] Fps is (10 sec: 47513.6, 60 sec: 44236.7, 300 sec: 43709.2). Total num frames: 481755136. Throughput: 0: 43775.5. Samples: 481884680. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 21:08:58,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:09:01,150][46990] Updated weights for policy 0, policy_version 29410 (0.0036) +[2024-06-10 21:09:03,240][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.6, 300 sec: 43654.2). Total num frames: 481935360. Throughput: 0: 43866.5. Samples: 482023940. 
Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 21:09:03,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:09:04,217][46990] Updated weights for policy 0, policy_version 29420 (0.0035) +[2024-06-10 21:09:08,239][46753] Fps is (10 sec: 39322.4, 60 sec: 43690.7, 300 sec: 43764.7). Total num frames: 482148352. Throughput: 0: 43971.5. Samples: 482289280. Policy #0 lag: (min: 0.0, avg: 10.5, max: 22.0) +[2024-06-10 21:09:08,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:09:08,520][46990] Updated weights for policy 0, policy_version 29430 (0.0039) +[2024-06-10 21:09:11,916][46990] Updated weights for policy 0, policy_version 29440 (0.0041) +[2024-06-10 21:09:13,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 482394112. Throughput: 0: 43644.8. Samples: 482535660. Policy #0 lag: (min: 1.0, avg: 11.9, max: 24.0) +[2024-06-10 21:09:13,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:09:15,944][46990] Updated weights for policy 0, policy_version 29450 (0.0031) +[2024-06-10 21:09:18,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43963.7, 300 sec: 43653.7). Total num frames: 482590720. Throughput: 0: 43466.8. Samples: 482668560. Policy #0 lag: (min: 1.0, avg: 11.9, max: 24.0) +[2024-06-10 21:09:18,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:09:19,564][46990] Updated weights for policy 0, policy_version 29460 (0.0041) +[2024-06-10 21:09:23,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.6, 300 sec: 43820.3). Total num frames: 482803712. Throughput: 0: 43473.2. Samples: 482932640. Policy #0 lag: (min: 1.0, avg: 11.9, max: 24.0) +[2024-06-10 21:09:23,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:09:23,553][46990] Updated weights for policy 0, policy_version 29470 (0.0037) +[2024-06-10 21:09:26,953][46970] Signal inference workers to stop experience collection... (7100 times) +[2024-06-10 21:09:26,953][46970] Signal inference workers to resume experience collection... (7100 times) +[2024-06-10 21:09:26,999][46990] InferenceWorker_p0-w0: stopping experience collection (7100 times) +[2024-06-10 21:09:26,999][46990] InferenceWorker_p0-w0: resuming experience collection (7100 times) +[2024-06-10 21:09:27,087][46990] Updated weights for policy 0, policy_version 29480 (0.0031) +[2024-06-10 21:09:28,240][46753] Fps is (10 sec: 45874.8, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 483049472. Throughput: 0: 43531.9. Samples: 483187960. Policy #0 lag: (min: 1.0, avg: 11.9, max: 24.0) +[2024-06-10 21:09:28,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:09:31,167][46990] Updated weights for policy 0, policy_version 29490 (0.0037) +[2024-06-10 21:09:33,244][46753] Fps is (10 sec: 44216.9, 60 sec: 43687.4, 300 sec: 43708.5). Total num frames: 483246080. Throughput: 0: 43445.5. Samples: 483325860. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 21:09:33,245][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 21:09:34,471][46990] Updated weights for policy 0, policy_version 29500 (0.0040) +[2024-06-10 21:09:38,239][46753] Fps is (10 sec: 40960.8, 60 sec: 43690.7, 300 sec: 43764.8). Total num frames: 483459072. Throughput: 0: 43502.4. Samples: 483589020. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 21:09:38,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 21:09:38,403][46990] Updated weights for policy 0, policy_version 29510 (0.0030) +[2024-06-10 21:09:42,166][46990] Updated weights for policy 0, policy_version 29520 (0.0038) +[2024-06-10 21:09:43,245][46753] Fps is (10 sec: 45872.2, 60 sec: 43686.9, 300 sec: 43708.4). Total num frames: 483704832. Throughput: 0: 43530.2. Samples: 483843760. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 21:09:43,245][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:09:45,571][46990] Updated weights for policy 0, policy_version 29530 (0.0029) +[2024-06-10 21:09:48,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 483901440. Throughput: 0: 43399.3. Samples: 483976900. Policy #0 lag: (min: 0.0, avg: 10.7, max: 22.0) +[2024-06-10 21:09:48,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:09:49,588][46990] Updated weights for policy 0, policy_version 29540 (0.0035) +[2024-06-10 21:09:53,240][46753] Fps is (10 sec: 42619.4, 60 sec: 43963.6, 300 sec: 43820.2). Total num frames: 484130816. Throughput: 0: 43357.5. Samples: 484240380. Policy #0 lag: (min: 0.0, avg: 12.7, max: 25.0) +[2024-06-10 21:09:53,243][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:09:53,383][46990] Updated weights for policy 0, policy_version 29550 (0.0037) +[2024-06-10 21:09:57,242][46990] Updated weights for policy 0, policy_version 29560 (0.0023) +[2024-06-10 21:09:58,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43417.7, 300 sec: 43709.8). Total num frames: 484360192. Throughput: 0: 43614.7. Samples: 484498320. Policy #0 lag: (min: 0.0, avg: 12.7, max: 25.0) +[2024-06-10 21:09:58,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:10:01,177][46990] Updated weights for policy 0, policy_version 29570 (0.0024) +[2024-06-10 21:10:03,240][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 484556800. Throughput: 0: 43614.1. Samples: 484631200. Policy #0 lag: (min: 0.0, avg: 12.7, max: 25.0) +[2024-06-10 21:10:03,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:10:04,570][46990] Updated weights for policy 0, policy_version 29580 (0.0024) +[2024-06-10 21:10:08,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 484769792. Throughput: 0: 43469.9. Samples: 484888780. Policy #0 lag: (min: 0.0, avg: 12.7, max: 25.0) +[2024-06-10 21:10:08,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:10:08,718][46990] Updated weights for policy 0, policy_version 29590 (0.0029) +[2024-06-10 21:10:12,333][46990] Updated weights for policy 0, policy_version 29600 (0.0032) +[2024-06-10 21:10:13,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43417.6, 300 sec: 43653.7). Total num frames: 484999168. Throughput: 0: 43607.2. Samples: 485150280. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:10:13,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:10:16,014][46990] Updated weights for policy 0, policy_version 29610 (0.0058) +[2024-06-10 21:10:18,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 485195776. Throughput: 0: 43393.3. Samples: 485278360. 
Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:10:18,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:10:19,707][46990] Updated weights for policy 0, policy_version 29620 (0.0049) +[2024-06-10 21:10:23,244][46753] Fps is (10 sec: 44216.5, 60 sec: 43960.4, 300 sec: 43764.0). Total num frames: 485441536. Throughput: 0: 43399.0. Samples: 485542180. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:10:23,245][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:10:23,260][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000029629_485441536.pth... +[2024-06-10 21:10:23,308][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000028989_474955776.pth +[2024-06-10 21:10:23,539][46990] Updated weights for policy 0, policy_version 29630 (0.0029) +[2024-06-10 21:10:27,481][46990] Updated weights for policy 0, policy_version 29640 (0.0036) +[2024-06-10 21:10:28,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43417.6, 300 sec: 43653.6). Total num frames: 485654528. Throughput: 0: 43495.2. Samples: 485800820. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:10:28,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:10:31,392][46990] Updated weights for policy 0, policy_version 29650 (0.0030) +[2024-06-10 21:10:33,244][46753] Fps is (10 sec: 42598.7, 60 sec: 43690.7, 300 sec: 43653.0). Total num frames: 485867520. Throughput: 0: 43398.7. Samples: 485930040. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:10:33,245][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:10:34,990][46990] Updated weights for policy 0, policy_version 29660 (0.0037) +[2024-06-10 21:10:38,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.5, 300 sec: 43709.8). Total num frames: 486080512. Throughput: 0: 43333.1. Samples: 486190360. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:10:38,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:10:38,241][46970] Saving new best policy, reward=0.302! +[2024-06-10 21:10:38,963][46990] Updated weights for policy 0, policy_version 29670 (0.0039) +[2024-06-10 21:10:42,431][46990] Updated weights for policy 0, policy_version 29680 (0.0037) +[2024-06-10 21:10:43,239][46753] Fps is (10 sec: 42617.6, 60 sec: 43148.3, 300 sec: 43653.6). Total num frames: 486293504. Throughput: 0: 43392.9. Samples: 486451000. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:10:43,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:10:46,085][46970] Signal inference workers to stop experience collection... (7150 times) +[2024-06-10 21:10:46,086][46970] Signal inference workers to resume experience collection... (7150 times) +[2024-06-10 21:10:46,126][46990] InferenceWorker_p0-w0: stopping experience collection (7150 times) +[2024-06-10 21:10:46,126][46990] InferenceWorker_p0-w0: resuming experience collection (7150 times) +[2024-06-10 21:10:46,219][46990] Updated weights for policy 0, policy_version 29690 (0.0041) +[2024-06-10 21:10:48,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 486506496. Throughput: 0: 43320.5. Samples: 486580620. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:10:48,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:10:50,053][46990] Updated weights for policy 0, policy_version 29700 (0.0038) +[2024-06-10 21:10:53,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43144.7, 300 sec: 43653.7). Total num frames: 486719488. 
Throughput: 0: 43395.9. Samples: 486841600. Policy #0 lag: (min: 1.0, avg: 10.0, max: 21.0) +[2024-06-10 21:10:53,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:10:53,872][46990] Updated weights for policy 0, policy_version 29710 (0.0047) +[2024-06-10 21:10:57,459][46990] Updated weights for policy 0, policy_version 29720 (0.0038) +[2024-06-10 21:10:58,240][46753] Fps is (10 sec: 44236.7, 60 sec: 43144.5, 300 sec: 43598.1). Total num frames: 486948864. Throughput: 0: 43439.0. Samples: 487105040. Policy #0 lag: (min: 1.0, avg: 10.0, max: 21.0) +[2024-06-10 21:10:58,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:11:01,684][46990] Updated weights for policy 0, policy_version 29730 (0.0029) +[2024-06-10 21:11:03,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.7, 300 sec: 43653.6). Total num frames: 487161856. Throughput: 0: 43497.8. Samples: 487235760. Policy #0 lag: (min: 1.0, avg: 10.0, max: 21.0) +[2024-06-10 21:11:03,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:11:05,133][46990] Updated weights for policy 0, policy_version 29740 (0.0041) +[2024-06-10 21:11:08,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43417.5, 300 sec: 43653.6). Total num frames: 487374848. Throughput: 0: 43412.9. Samples: 487495560. Policy #0 lag: (min: 1.0, avg: 10.0, max: 21.0) +[2024-06-10 21:11:08,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:11:09,189][46990] Updated weights for policy 0, policy_version 29750 (0.0033) +[2024-06-10 21:11:12,601][46990] Updated weights for policy 0, policy_version 29760 (0.0045) +[2024-06-10 21:11:13,240][46753] Fps is (10 sec: 42597.6, 60 sec: 43144.4, 300 sec: 43598.1). Total num frames: 487587840. Throughput: 0: 43375.9. Samples: 487752740. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:11:13,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:11:16,719][46990] Updated weights for policy 0, policy_version 29770 (0.0043) +[2024-06-10 21:11:18,240][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 487817216. Throughput: 0: 43411.9. Samples: 487883380. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:11:18,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:11:20,274][46990] Updated weights for policy 0, policy_version 29780 (0.0032) +[2024-06-10 21:11:23,239][46753] Fps is (10 sec: 42599.1, 60 sec: 42874.8, 300 sec: 43542.6). Total num frames: 488013824. Throughput: 0: 43568.5. Samples: 488150940. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:11:23,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:11:24,368][46990] Updated weights for policy 0, policy_version 29790 (0.0028) +[2024-06-10 21:11:27,618][46990] Updated weights for policy 0, policy_version 29800 (0.0026) +[2024-06-10 21:11:28,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43144.5, 300 sec: 43598.1). Total num frames: 488243200. Throughput: 0: 43520.9. Samples: 488409440. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:11:28,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:11:31,792][46990] Updated weights for policy 0, policy_version 29810 (0.0041) +[2024-06-10 21:11:33,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43693.9, 300 sec: 43709.1). Total num frames: 488488960. Throughput: 0: 43628.4. Samples: 488543900. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:11:33,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 21:11:35,353][46990] Updated weights for policy 0, policy_version 29820 (0.0048) +[2024-06-10 21:11:38,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43144.5, 300 sec: 43542.6). Total num frames: 488669184. Throughput: 0: 43617.2. Samples: 488804380. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:11:38,242][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:11:39,488][46990] Updated weights for policy 0, policy_version 29830 (0.0037) +[2024-06-10 21:11:42,835][46990] Updated weights for policy 0, policy_version 29840 (0.0029) +[2024-06-10 21:11:43,244][46753] Fps is (10 sec: 40941.9, 60 sec: 43414.3, 300 sec: 43653.0). Total num frames: 488898560. Throughput: 0: 43439.7. Samples: 489060020. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:11:43,245][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:11:46,839][46990] Updated weights for policy 0, policy_version 29850 (0.0044) +[2024-06-10 21:11:48,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 489127936. Throughput: 0: 43551.6. Samples: 489195580. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:11:48,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:11:50,537][46990] Updated weights for policy 0, policy_version 29860 (0.0042) +[2024-06-10 21:11:53,240][46753] Fps is (10 sec: 42616.9, 60 sec: 43417.4, 300 sec: 43542.5). Total num frames: 489324544. Throughput: 0: 43517.2. Samples: 489453840. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:11:53,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:11:54,321][46990] Updated weights for policy 0, policy_version 29870 (0.0045) +[2024-06-10 21:11:54,989][46970] Signal inference workers to stop experience collection... (7200 times) +[2024-06-10 21:11:55,043][46970] Signal inference workers to resume experience collection... (7200 times) +[2024-06-10 21:11:55,044][46990] InferenceWorker_p0-w0: stopping experience collection (7200 times) +[2024-06-10 21:11:55,078][46990] InferenceWorker_p0-w0: resuming experience collection (7200 times) +[2024-06-10 21:11:57,983][46990] Updated weights for policy 0, policy_version 29880 (0.0027) +[2024-06-10 21:11:58,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 489553920. Throughput: 0: 43642.7. Samples: 489716660. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:11:58,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:12:01,898][46990] Updated weights for policy 0, policy_version 29890 (0.0037) +[2024-06-10 21:12:03,239][46753] Fps is (10 sec: 47514.3, 60 sec: 43963.7, 300 sec: 43709.2). Total num frames: 489799680. Throughput: 0: 43652.0. Samples: 489847720. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:12:03,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:12:05,858][46990] Updated weights for policy 0, policy_version 29900 (0.0029) +[2024-06-10 21:12:08,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 489979904. Throughput: 0: 43399.1. Samples: 490103900. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 21:12:08,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:12:09,624][46990] Updated weights for policy 0, policy_version 29910 (0.0039) +[2024-06-10 21:12:13,239][46753] Fps is (10 sec: 37683.7, 60 sec: 43144.7, 300 sec: 43487.0). Total num frames: 490176512. Throughput: 0: 43506.8. Samples: 490367240. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 21:12:13,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:12:13,469][46990] Updated weights for policy 0, policy_version 29920 (0.0037) +[2024-06-10 21:12:16,948][46990] Updated weights for policy 0, policy_version 29930 (0.0028) +[2024-06-10 21:12:18,240][46753] Fps is (10 sec: 45874.3, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 490438656. Throughput: 0: 43346.6. Samples: 490494500. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 21:12:18,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:12:20,924][46990] Updated weights for policy 0, policy_version 29940 (0.0030) +[2024-06-10 21:12:23,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 490635264. Throughput: 0: 43404.1. Samples: 490757560. Policy #0 lag: (min: 0.0, avg: 9.7, max: 20.0) +[2024-06-10 21:12:23,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:12:23,253][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000029946_490635264.pth... +[2024-06-10 21:12:23,309][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000029308_480182272.pth +[2024-06-10 21:12:24,548][46990] Updated weights for policy 0, policy_version 29950 (0.0048) +[2024-06-10 21:12:28,239][46753] Fps is (10 sec: 39322.3, 60 sec: 43144.6, 300 sec: 43542.6). Total num frames: 490831872. Throughput: 0: 43692.0. Samples: 491025960. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:12:28,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:12:28,813][46990] Updated weights for policy 0, policy_version 29960 (0.0027) +[2024-06-10 21:12:32,025][46990] Updated weights for policy 0, policy_version 29970 (0.0035) +[2024-06-10 21:12:33,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.8, 300 sec: 43709.2). Total num frames: 491110400. Throughput: 0: 43492.0. Samples: 491152720. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:12:33,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:12:36,111][46990] Updated weights for policy 0, policy_version 29980 (0.0031) +[2024-06-10 21:12:38,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43417.7, 300 sec: 43487.0). Total num frames: 491274240. Throughput: 0: 43465.2. Samples: 491409760. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:12:38,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:12:39,660][46990] Updated weights for policy 0, policy_version 29990 (0.0040) +[2024-06-10 21:12:43,239][46753] Fps is (10 sec: 36044.6, 60 sec: 42874.7, 300 sec: 43487.0). Total num frames: 491470848. Throughput: 0: 43555.6. Samples: 491676660. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:12:43,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:12:43,720][46990] Updated weights for policy 0, policy_version 30000 (0.0040) +[2024-06-10 21:12:46,925][46990] Updated weights for policy 0, policy_version 30010 (0.0040) +[2024-06-10 21:12:48,240][46753] Fps is (10 sec: 49151.1, 60 sec: 43963.6, 300 sec: 43709.2). Total num frames: 491765760. Throughput: 0: 43464.9. 
Samples: 491803640. Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 21:12:48,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:12:51,211][46990] Updated weights for policy 0, policy_version 30020 (0.0040) +[2024-06-10 21:12:53,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.8, 300 sec: 43542.6). Total num frames: 491945984. Throughput: 0: 43734.7. Samples: 492071960. Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 21:12:53,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 21:12:54,534][46990] Updated weights for policy 0, policy_version 30030 (0.0043) +[2024-06-10 21:12:58,240][46753] Fps is (10 sec: 37683.3, 60 sec: 43144.5, 300 sec: 43487.0). Total num frames: 492142592. Throughput: 0: 43748.8. Samples: 492335940. Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 21:12:58,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:12:58,893][46990] Updated weights for policy 0, policy_version 30040 (0.0045) +[2024-06-10 21:13:01,985][46990] Updated weights for policy 0, policy_version 30050 (0.0041) +[2024-06-10 21:13:03,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 492421120. Throughput: 0: 43658.8. Samples: 492459140. Policy #0 lag: (min: 1.0, avg: 11.0, max: 22.0) +[2024-06-10 21:13:03,242][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:13:06,404][46990] Updated weights for policy 0, policy_version 30060 (0.0036) +[2024-06-10 21:13:08,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 492584960. Throughput: 0: 43669.8. Samples: 492722700. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:13:08,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:13:09,671][46990] Updated weights for policy 0, policy_version 30070 (0.0044) +[2024-06-10 21:13:10,643][46970] Signal inference workers to stop experience collection... (7250 times) +[2024-06-10 21:13:10,644][46970] Signal inference workers to resume experience collection... (7250 times) +[2024-06-10 21:13:10,691][46990] InferenceWorker_p0-w0: stopping experience collection (7250 times) +[2024-06-10 21:13:10,691][46990] InferenceWorker_p0-w0: resuming experience collection (7250 times) +[2024-06-10 21:13:13,244][46753] Fps is (10 sec: 37666.4, 60 sec: 43687.3, 300 sec: 43541.9). Total num frames: 492797952. Throughput: 0: 43455.2. Samples: 492981640. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:13:13,245][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:13:14,073][46990] Updated weights for policy 0, policy_version 30080 (0.0035) +[2024-06-10 21:13:16,865][46990] Updated weights for policy 0, policy_version 30090 (0.0032) +[2024-06-10 21:13:18,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 493060096. Throughput: 0: 43484.5. Samples: 493109520. Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:13:18,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:13:21,573][46990] Updated weights for policy 0, policy_version 30100 (0.0041) +[2024-06-10 21:13:23,239][46753] Fps is (10 sec: 44256.5, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 493240320. Throughput: 0: 43671.9. Samples: 493375000. 
Policy #0 lag: (min: 0.0, avg: 10.7, max: 23.0) +[2024-06-10 21:13:23,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:13:24,517][46990] Updated weights for policy 0, policy_version 30110 (0.0024) +[2024-06-10 21:13:28,239][46753] Fps is (10 sec: 39321.3, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 493453312. Throughput: 0: 43651.1. Samples: 493640960. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 21:13:28,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:13:29,064][46990] Updated weights for policy 0, policy_version 30120 (0.0048) +[2024-06-10 21:13:32,103][46990] Updated weights for policy 0, policy_version 30130 (0.0046) +[2024-06-10 21:13:33,239][46753] Fps is (10 sec: 49152.3, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 493731840. Throughput: 0: 43578.8. Samples: 493764680. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 21:13:33,240][46753] Avg episode reward: [(0, '0.271')] +[2024-06-10 21:13:36,365][46990] Updated weights for policy 0, policy_version 30140 (0.0044) +[2024-06-10 21:13:38,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.5, 300 sec: 43376.0). Total num frames: 493879296. Throughput: 0: 43424.4. Samples: 494026060. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 21:13:38,240][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 21:13:39,701][46990] Updated weights for policy 0, policy_version 30150 (0.0037) +[2024-06-10 21:13:43,239][46753] Fps is (10 sec: 36045.0, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 494092288. Throughput: 0: 43280.1. Samples: 494283540. Policy #0 lag: (min: 0.0, avg: 11.7, max: 21.0) +[2024-06-10 21:13:43,244][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:13:44,263][46990] Updated weights for policy 0, policy_version 30160 (0.0042) +[2024-06-10 21:13:47,052][46990] Updated weights for policy 0, policy_version 30170 (0.0028) +[2024-06-10 21:13:48,239][46753] Fps is (10 sec: 50790.5, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 494387200. Throughput: 0: 43504.1. Samples: 494416820. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 21:13:48,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:13:51,784][46990] Updated weights for policy 0, policy_version 30180 (0.0038) +[2024-06-10 21:13:53,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43417.6, 300 sec: 43376.0). Total num frames: 494551040. Throughput: 0: 43556.8. Samples: 494682760. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 21:13:53,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:13:54,752][46990] Updated weights for policy 0, policy_version 30190 (0.0041) +[2024-06-10 21:13:58,239][46753] Fps is (10 sec: 37683.1, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 494764032. Throughput: 0: 43543.9. Samples: 494940920. Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 21:13:58,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:13:59,187][46990] Updated weights for policy 0, policy_version 30200 (0.0034) +[2024-06-10 21:14:02,184][46990] Updated weights for policy 0, policy_version 30210 (0.0031) +[2024-06-10 21:14:03,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43417.7, 300 sec: 43653.6). Total num frames: 495026176. Throughput: 0: 43506.2. Samples: 495067300. 
Policy #0 lag: (min: 0.0, avg: 9.5, max: 20.0) +[2024-06-10 21:14:03,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:14:06,549][46990] Updated weights for policy 0, policy_version 30220 (0.0033) +[2024-06-10 21:14:08,240][46753] Fps is (10 sec: 40959.7, 60 sec: 43144.4, 300 sec: 43320.4). Total num frames: 495173632. Throughput: 0: 43551.5. Samples: 495334820. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:14:08,242][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:14:09,687][46990] Updated weights for policy 0, policy_version 30230 (0.0042) +[2024-06-10 21:14:13,240][46753] Fps is (10 sec: 37682.7, 60 sec: 43420.8, 300 sec: 43431.5). Total num frames: 495403008. Throughput: 0: 43303.4. Samples: 495589620. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:14:13,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:14:14,378][46990] Updated weights for policy 0, policy_version 30240 (0.0023) +[2024-06-10 21:14:17,229][46990] Updated weights for policy 0, policy_version 30250 (0.0032) +[2024-06-10 21:14:18,239][46753] Fps is (10 sec: 50790.5, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 495681536. Throughput: 0: 43506.1. Samples: 495722460. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:14:18,242][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:14:21,844][46990] Updated weights for policy 0, policy_version 30260 (0.0038) +[2024-06-10 21:14:23,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43144.6, 300 sec: 43320.4). Total num frames: 495828992. Throughput: 0: 43464.5. Samples: 495981960. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:14:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:14:23,316][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000030264_495845376.pth... +[2024-06-10 21:14:23,366][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000029629_485441536.pth +[2024-06-10 21:14:23,824][46970] Signal inference workers to stop experience collection... (7300 times) +[2024-06-10 21:14:23,824][46970] Signal inference workers to resume experience collection... (7300 times) +[2024-06-10 21:14:23,837][46990] InferenceWorker_p0-w0: stopping experience collection (7300 times) +[2024-06-10 21:14:23,837][46990] InferenceWorker_p0-w0: resuming experience collection (7300 times) +[2024-06-10 21:14:24,676][46990] Updated weights for policy 0, policy_version 30270 (0.0028) +[2024-06-10 21:14:28,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43690.6, 300 sec: 43487.7). Total num frames: 496074752. Throughput: 0: 43572.4. Samples: 496244300. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:14:28,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 21:14:29,426][46990] Updated weights for policy 0, policy_version 30280 (0.0033) +[2024-06-10 21:14:32,234][46990] Updated weights for policy 0, policy_version 30290 (0.0038) +[2024-06-10 21:14:33,239][46753] Fps is (10 sec: 49151.5, 60 sec: 43144.5, 300 sec: 43598.1). Total num frames: 496320512. Throughput: 0: 43544.4. Samples: 496376320. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:14:33,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:14:36,702][46990] Updated weights for policy 0, policy_version 30300 (0.0035) +[2024-06-10 21:14:38,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43417.5, 300 sec: 43321.2). Total num frames: 496484352. Throughput: 0: 43268.4. Samples: 496629840. 
Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:14:38,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:14:40,035][46990] Updated weights for policy 0, policy_version 30310 (0.0038) +[2024-06-10 21:14:43,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 496713728. Throughput: 0: 43285.3. Samples: 496888760. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:14:43,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:14:44,530][46990] Updated weights for policy 0, policy_version 30320 (0.0037) +[2024-06-10 21:14:47,631][46990] Updated weights for policy 0, policy_version 30330 (0.0032) +[2024-06-10 21:14:48,239][46753] Fps is (10 sec: 49152.4, 60 sec: 43144.5, 300 sec: 43542.6). Total num frames: 496975872. Throughput: 0: 43485.8. Samples: 497024160. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:14:48,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:14:51,852][46990] Updated weights for policy 0, policy_version 30340 (0.0046) +[2024-06-10 21:14:53,244][46753] Fps is (10 sec: 40941.6, 60 sec: 42868.3, 300 sec: 43264.2). Total num frames: 497123328. Throughput: 0: 43141.1. Samples: 497276360. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:14:53,244][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:14:55,038][46990] Updated weights for policy 0, policy_version 30350 (0.0035) +[2024-06-10 21:14:58,240][46753] Fps is (10 sec: 40959.3, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 497385472. Throughput: 0: 43279.5. Samples: 497537200. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:14:58,240][46753] Avg episode reward: [(0, '0.281')] +[2024-06-10 21:14:59,407][46990] Updated weights for policy 0, policy_version 30360 (0.0030) +[2024-06-10 21:15:02,593][46990] Updated weights for policy 0, policy_version 30370 (0.0041) +[2024-06-10 21:15:03,239][46753] Fps is (10 sec: 47534.9, 60 sec: 42871.4, 300 sec: 43487.0). Total num frames: 497598464. Throughput: 0: 43452.9. Samples: 497677840. Policy #0 lag: (min: 0.0, avg: 10.9, max: 22.0) +[2024-06-10 21:15:03,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:15:06,970][46990] Updated weights for policy 0, policy_version 30380 (0.0036) +[2024-06-10 21:15:08,240][46753] Fps is (10 sec: 39321.8, 60 sec: 43417.6, 300 sec: 43320.4). Total num frames: 497778688. Throughput: 0: 43334.9. Samples: 497932040. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 21:15:08,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:15:10,318][46990] Updated weights for policy 0, policy_version 30390 (0.0033) +[2024-06-10 21:15:13,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43963.7, 300 sec: 43542.5). Total num frames: 498040832. Throughput: 0: 43099.9. Samples: 498183800. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 21:15:13,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:15:14,141][46990] Updated weights for policy 0, policy_version 30400 (0.0039) +[2024-06-10 21:15:17,734][46990] Updated weights for policy 0, policy_version 30410 (0.0024) +[2024-06-10 21:15:18,239][46753] Fps is (10 sec: 49152.9, 60 sec: 43144.6, 300 sec: 43487.7). Total num frames: 498270208. Throughput: 0: 43465.0. Samples: 498332240. 
Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 21:15:18,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:15:21,272][46990] Updated weights for policy 0, policy_version 30420 (0.0039) +[2024-06-10 21:15:23,240][46753] Fps is (10 sec: 39321.5, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 498434048. Throughput: 0: 43419.5. Samples: 498583720. Policy #0 lag: (min: 0.0, avg: 8.7, max: 21.0) +[2024-06-10 21:15:23,242][46753] Avg episode reward: [(0, '0.277')] +[2024-06-10 21:15:25,144][46990] Updated weights for policy 0, policy_version 30430 (0.0031) +[2024-06-10 21:15:28,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.7, 300 sec: 43487.7). Total num frames: 498696192. Throughput: 0: 43365.8. Samples: 498840220. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 21:15:28,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:15:28,839][46990] Updated weights for policy 0, policy_version 30440 (0.0044) +[2024-06-10 21:15:32,609][46990] Updated weights for policy 0, policy_version 30450 (0.0037) +[2024-06-10 21:15:33,239][46753] Fps is (10 sec: 49152.3, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 498925568. Throughput: 0: 43534.6. Samples: 498983220. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 21:15:33,243][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:15:34,585][46970] Signal inference workers to stop experience collection... (7350 times) +[2024-06-10 21:15:34,614][46990] InferenceWorker_p0-w0: stopping experience collection (7350 times) +[2024-06-10 21:15:34,642][46970] Signal inference workers to resume experience collection... (7350 times) +[2024-06-10 21:15:34,643][46990] InferenceWorker_p0-w0: resuming experience collection (7350 times) +[2024-06-10 21:15:36,461][46990] Updated weights for policy 0, policy_version 30460 (0.0036) +[2024-06-10 21:15:38,240][46753] Fps is (10 sec: 37682.7, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 499073024. Throughput: 0: 43563.8. Samples: 499236540. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 21:15:38,249][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:15:40,486][46990] Updated weights for policy 0, policy_version 30470 (0.0034) +[2024-06-10 21:15:43,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43963.7, 300 sec: 43542.6). Total num frames: 499351552. Throughput: 0: 43294.8. Samples: 499485460. Policy #0 lag: (min: 0.0, avg: 12.2, max: 23.0) +[2024-06-10 21:15:43,252][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:15:44,328][46990] Updated weights for policy 0, policy_version 30480 (0.0046) +[2024-06-10 21:15:48,075][46990] Updated weights for policy 0, policy_version 30490 (0.0040) +[2024-06-10 21:15:48,239][46753] Fps is (10 sec: 49152.3, 60 sec: 43144.5, 300 sec: 43542.6). Total num frames: 499564544. Throughput: 0: 43541.3. Samples: 499637200. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:15:48,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:15:51,687][46990] Updated weights for policy 0, policy_version 30500 (0.0035) +[2024-06-10 21:15:53,240][46753] Fps is (10 sec: 39321.3, 60 sec: 43693.9, 300 sec: 43375.9). Total num frames: 499744768. Throughput: 0: 43512.5. Samples: 499890100. 
Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:15:53,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:15:55,403][46990] Updated weights for policy 0, policy_version 30510 (0.0048) +[2024-06-10 21:15:58,240][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.7, 300 sec: 43542.5). Total num frames: 500006912. Throughput: 0: 43661.8. Samples: 500148580. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:15:58,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:15:58,933][46990] Updated weights for policy 0, policy_version 30520 (0.0036) +[2024-06-10 21:16:02,838][46990] Updated weights for policy 0, policy_version 30530 (0.0029) +[2024-06-10 21:16:03,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 500219904. Throughput: 0: 43476.4. Samples: 500288680. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:16:03,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:16:06,667][46990] Updated weights for policy 0, policy_version 30540 (0.0049) +[2024-06-10 21:16:08,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43963.8, 300 sec: 43487.0). Total num frames: 500416512. Throughput: 0: 43522.3. Samples: 500542220. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:16:08,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:16:10,558][46990] Updated weights for policy 0, policy_version 30550 (0.0043) +[2024-06-10 21:16:13,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 500662272. Throughput: 0: 43517.7. Samples: 500798520. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:16:13,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:16:14,011][46990] Updated weights for policy 0, policy_version 30560 (0.0024) +[2024-06-10 21:16:17,995][46990] Updated weights for policy 0, policy_version 30570 (0.0044) +[2024-06-10 21:16:18,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 500875264. Throughput: 0: 43577.0. Samples: 500944180. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:16:18,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:16:21,764][46990] Updated weights for policy 0, policy_version 30580 (0.0032) +[2024-06-10 21:16:23,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43963.9, 300 sec: 43487.0). Total num frames: 501071872. Throughput: 0: 43717.5. Samples: 501203820. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:16:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:16:23,297][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000030584_501088256.pth... +[2024-06-10 21:16:23,366][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000029946_490635264.pth +[2024-06-10 21:16:25,285][46990] Updated weights for policy 0, policy_version 30590 (0.0042) +[2024-06-10 21:16:28,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 501317632. Throughput: 0: 43895.2. Samples: 501460740. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 21:16:28,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:16:29,302][46990] Updated weights for policy 0, policy_version 30600 (0.0035) +[2024-06-10 21:16:32,835][46990] Updated weights for policy 0, policy_version 30610 (0.0028) +[2024-06-10 21:16:33,240][46753] Fps is (10 sec: 45874.4, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 501530624. 
Throughput: 0: 43531.0. Samples: 501596100. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 21:16:33,249][46753] Avg episode reward: [(0, '0.273')] +[2024-06-10 21:16:36,652][46990] Updated weights for policy 0, policy_version 30620 (0.0038) +[2024-06-10 21:16:38,239][46753] Fps is (10 sec: 40960.2, 60 sec: 44236.9, 300 sec: 43487.7). Total num frames: 501727232. Throughput: 0: 43552.6. Samples: 501849960. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 21:16:38,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:16:40,791][46990] Updated weights for policy 0, policy_version 30630 (0.0046) +[2024-06-10 21:16:43,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 501972992. Throughput: 0: 43585.5. Samples: 502109920. Policy #0 lag: (min: 0.0, avg: 11.6, max: 21.0) +[2024-06-10 21:16:43,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:16:43,823][46990] Updated weights for policy 0, policy_version 30640 (0.0032) +[2024-06-10 21:16:47,852][46970] Signal inference workers to stop experience collection... (7400 times) +[2024-06-10 21:16:47,904][46990] InferenceWorker_p0-w0: stopping experience collection (7400 times) +[2024-06-10 21:16:47,973][46970] Signal inference workers to resume experience collection... (7400 times) +[2024-06-10 21:16:47,973][46990] InferenceWorker_p0-w0: resuming experience collection (7400 times) +[2024-06-10 21:16:48,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43144.5, 300 sec: 43487.0). Total num frames: 502153216. Throughput: 0: 43593.7. Samples: 502250400. Policy #0 lag: (min: 1.0, avg: 7.3, max: 21.0) +[2024-06-10 21:16:48,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:16:48,307][46990] Updated weights for policy 0, policy_version 30650 (0.0036) +[2024-06-10 21:16:51,529][46990] Updated weights for policy 0, policy_version 30660 (0.0022) +[2024-06-10 21:16:53,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43963.8, 300 sec: 43487.0). Total num frames: 502382592. Throughput: 0: 43823.6. Samples: 502514280. Policy #0 lag: (min: 1.0, avg: 7.3, max: 21.0) +[2024-06-10 21:16:53,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:16:55,558][46990] Updated weights for policy 0, policy_version 30670 (0.0023) +[2024-06-10 21:16:58,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 502628352. Throughput: 0: 43841.7. Samples: 502771400. Policy #0 lag: (min: 1.0, avg: 7.3, max: 21.0) +[2024-06-10 21:16:58,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:16:59,175][46990] Updated weights for policy 0, policy_version 30680 (0.0043) +[2024-06-10 21:17:03,161][46990] Updated weights for policy 0, policy_version 30690 (0.0033) +[2024-06-10 21:17:03,240][46753] Fps is (10 sec: 44235.7, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 502824960. Throughput: 0: 43729.9. Samples: 502912040. Policy #0 lag: (min: 1.0, avg: 7.3, max: 21.0) +[2024-06-10 21:17:03,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:17:06,549][46990] Updated weights for policy 0, policy_version 30700 (0.0037) +[2024-06-10 21:17:08,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 503037952. Throughput: 0: 43614.1. Samples: 503166460. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:17:08,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:17:10,604][46990] Updated weights for policy 0, policy_version 30710 (0.0040) +[2024-06-10 21:17:13,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 503283712. Throughput: 0: 43771.9. Samples: 503430480. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:17:13,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:17:14,024][46990] Updated weights for policy 0, policy_version 30720 (0.0038) +[2024-06-10 21:17:18,076][46990] Updated weights for policy 0, policy_version 30730 (0.0048) +[2024-06-10 21:17:18,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43417.5, 300 sec: 43542.6). Total num frames: 503480320. Throughput: 0: 43752.1. Samples: 503564940. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:17:18,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:17:21,742][46990] Updated weights for policy 0, policy_version 30740 (0.0026) +[2024-06-10 21:17:23,244][46753] Fps is (10 sec: 44217.4, 60 sec: 44233.5, 300 sec: 43708.5). Total num frames: 503726080. Throughput: 0: 43980.5. Samples: 503829280. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:17:23,244][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:17:25,661][46990] Updated weights for policy 0, policy_version 30750 (0.0040) +[2024-06-10 21:17:28,240][46753] Fps is (10 sec: 45874.8, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 503939072. Throughput: 0: 43844.8. Samples: 504082940. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 21:17:28,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:17:29,101][46990] Updated weights for policy 0, policy_version 30760 (0.0039) +[2024-06-10 21:17:33,155][46990] Updated weights for policy 0, policy_version 30770 (0.0037) +[2024-06-10 21:17:33,244][46753] Fps is (10 sec: 40959.7, 60 sec: 43414.4, 300 sec: 43597.4). Total num frames: 504135680. Throughput: 0: 43853.0. Samples: 504223980. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 21:17:33,245][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:17:36,354][46990] Updated weights for policy 0, policy_version 30780 (0.0047) +[2024-06-10 21:17:38,240][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.5, 300 sec: 43653.6). Total num frames: 504348672. Throughput: 0: 43657.2. Samples: 504478860. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 21:17:38,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:17:40,706][46990] Updated weights for policy 0, policy_version 30790 (0.0034) +[2024-06-10 21:17:43,240][46753] Fps is (10 sec: 45895.6, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 504594432. Throughput: 0: 43741.3. Samples: 504739760. Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 21:17:43,241][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:17:43,530][46990] Updated weights for policy 0, policy_version 30800 (0.0037) +[2024-06-10 21:17:48,183][46990] Updated weights for policy 0, policy_version 30810 (0.0041) +[2024-06-10 21:17:48,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.8, 300 sec: 43542.6). Total num frames: 504791040. Throughput: 0: 43742.4. Samples: 504880440. 
Policy #0 lag: (min: 0.0, avg: 12.2, max: 21.0) +[2024-06-10 21:17:48,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:17:51,387][46990] Updated weights for policy 0, policy_version 30820 (0.0028) +[2024-06-10 21:17:53,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 505004032. Throughput: 0: 43800.4. Samples: 505137480. Policy #0 lag: (min: 1.0, avg: 9.3, max: 20.0) +[2024-06-10 21:17:53,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:17:55,439][46990] Updated weights for policy 0, policy_version 30830 (0.0045) +[2024-06-10 21:17:56,221][46970] Signal inference workers to stop experience collection... (7450 times) +[2024-06-10 21:17:56,221][46970] Signal inference workers to resume experience collection... (7450 times) +[2024-06-10 21:17:56,252][46990] InferenceWorker_p0-w0: stopping experience collection (7450 times) +[2024-06-10 21:17:56,252][46990] InferenceWorker_p0-w0: resuming experience collection (7450 times) +[2024-06-10 21:17:58,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 505233408. Throughput: 0: 43768.0. Samples: 505400040. Policy #0 lag: (min: 1.0, avg: 9.3, max: 20.0) +[2024-06-10 21:17:58,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:17:58,760][46990] Updated weights for policy 0, policy_version 30840 (0.0047) +[2024-06-10 21:18:02,860][46990] Updated weights for policy 0, policy_version 30850 (0.0032) +[2024-06-10 21:18:03,244][46753] Fps is (10 sec: 44216.4, 60 sec: 43687.4, 300 sec: 43597.4). Total num frames: 505446400. Throughput: 0: 43966.1. Samples: 505543620. Policy #0 lag: (min: 1.0, avg: 9.3, max: 20.0) +[2024-06-10 21:18:03,245][46753] Avg episode reward: [(0, '0.274')] +[2024-06-10 21:18:06,297][46990] Updated weights for policy 0, policy_version 30860 (0.0034) +[2024-06-10 21:18:08,240][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43598.8). Total num frames: 505659392. Throughput: 0: 43620.3. Samples: 505792000. Policy #0 lag: (min: 1.0, avg: 9.3, max: 20.0) +[2024-06-10 21:18:08,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:18:10,557][46990] Updated weights for policy 0, policy_version 30870 (0.0036) +[2024-06-10 21:18:13,239][46753] Fps is (10 sec: 45896.6, 60 sec: 43690.7, 300 sec: 43542.5). Total num frames: 505905152. Throughput: 0: 43856.5. Samples: 506056480. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:18:13,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:18:13,823][46990] Updated weights for policy 0, policy_version 30880 (0.0023) +[2024-06-10 21:18:17,971][46990] Updated weights for policy 0, policy_version 30890 (0.0027) +[2024-06-10 21:18:18,244][46753] Fps is (10 sec: 45854.8, 60 sec: 43960.4, 300 sec: 43653.0). Total num frames: 506118144. Throughput: 0: 43732.0. Samples: 506191920. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:18:18,245][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:18:21,547][46990] Updated weights for policy 0, policy_version 30900 (0.0033) +[2024-06-10 21:18:23,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43147.6, 300 sec: 43598.1). Total num frames: 506314752. Throughput: 0: 43736.8. Samples: 506447020. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:18:23,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:18:23,247][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000030903_506314752.pth... 
+[2024-06-10 21:18:23,309][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000030264_495845376.pth +[2024-06-10 21:18:25,603][46990] Updated weights for policy 0, policy_version 30910 (0.0029) +[2024-06-10 21:18:28,240][46753] Fps is (10 sec: 40978.2, 60 sec: 43144.6, 300 sec: 43375.9). Total num frames: 506527744. Throughput: 0: 43628.9. Samples: 506703060. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:18:28,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:18:28,999][46990] Updated weights for policy 0, policy_version 30920 (0.0045) +[2024-06-10 21:18:33,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43420.9, 300 sec: 43598.1). Total num frames: 506740736. Throughput: 0: 43494.7. Samples: 506837700. Policy #0 lag: (min: 1.0, avg: 11.3, max: 22.0) +[2024-06-10 21:18:33,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:18:33,296][46990] Updated weights for policy 0, policy_version 30930 (0.0040) +[2024-06-10 21:18:36,558][46990] Updated weights for policy 0, policy_version 30940 (0.0028) +[2024-06-10 21:18:38,240][46753] Fps is (10 sec: 44235.9, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 506970112. Throughput: 0: 43292.3. Samples: 507085640. Policy #0 lag: (min: 1.0, avg: 11.3, max: 22.0) +[2024-06-10 21:18:38,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:18:40,811][46990] Updated weights for policy 0, policy_version 30950 (0.0031) +[2024-06-10 21:18:43,241][46753] Fps is (10 sec: 44229.4, 60 sec: 43143.4, 300 sec: 43375.7). Total num frames: 507183104. Throughput: 0: 43363.3. Samples: 507351460. Policy #0 lag: (min: 1.0, avg: 11.3, max: 22.0) +[2024-06-10 21:18:43,242][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:18:44,138][46990] Updated weights for policy 0, policy_version 30960 (0.0030) +[2024-06-10 21:18:48,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 507396096. Throughput: 0: 43196.9. Samples: 507487280. Policy #0 lag: (min: 1.0, avg: 11.3, max: 22.0) +[2024-06-10 21:18:48,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:18:48,458][46990] Updated weights for policy 0, policy_version 30970 (0.0028) +[2024-06-10 21:18:51,546][46990] Updated weights for policy 0, policy_version 30980 (0.0031) +[2024-06-10 21:18:53,240][46753] Fps is (10 sec: 44243.5, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 507625472. Throughput: 0: 43370.5. Samples: 507743680. Policy #0 lag: (min: 1.0, avg: 11.1, max: 22.0) +[2024-06-10 21:18:53,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:18:55,937][46990] Updated weights for policy 0, policy_version 30990 (0.0039) +[2024-06-10 21:18:58,240][46753] Fps is (10 sec: 44236.5, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 507838464. Throughput: 0: 43382.1. Samples: 508008680. Policy #0 lag: (min: 1.0, avg: 11.1, max: 22.0) +[2024-06-10 21:18:58,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:18:58,835][46990] Updated weights for policy 0, policy_version 31000 (0.0035) +[2024-06-10 21:19:03,239][46753] Fps is (10 sec: 42599.4, 60 sec: 43421.0, 300 sec: 43653.7). Total num frames: 508051456. Throughput: 0: 43358.6. Samples: 508142860. 
Policy #0 lag: (min: 1.0, avg: 11.1, max: 22.0) +[2024-06-10 21:19:03,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:19:03,263][46990] Updated weights for policy 0, policy_version 31010 (0.0034) +[2024-06-10 21:19:06,512][46990] Updated weights for policy 0, policy_version 31020 (0.0032) +[2024-06-10 21:19:08,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 508264448. Throughput: 0: 43175.7. Samples: 508389920. Policy #0 lag: (min: 1.0, avg: 11.1, max: 22.0) +[2024-06-10 21:19:08,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:19:11,041][46990] Updated weights for policy 0, policy_version 31030 (0.0034) +[2024-06-10 21:19:13,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 508510208. Throughput: 0: 43394.2. Samples: 508655800. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 21:19:13,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:19:13,818][46990] Updated weights for policy 0, policy_version 31040 (0.0033) +[2024-06-10 21:19:18,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43147.8, 300 sec: 43653.6). Total num frames: 508706816. Throughput: 0: 43435.2. Samples: 508792280. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 21:19:18,244][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:19:18,556][46990] Updated weights for policy 0, policy_version 31050 (0.0031) +[2024-06-10 21:19:21,320][46990] Updated weights for policy 0, policy_version 31060 (0.0032) +[2024-06-10 21:19:22,937][46970] Signal inference workers to stop experience collection... (7500 times) +[2024-06-10 21:19:22,937][46970] Signal inference workers to resume experience collection... (7500 times) +[2024-06-10 21:19:22,988][46990] InferenceWorker_p0-w0: stopping experience collection (7500 times) +[2024-06-10 21:19:22,988][46990] InferenceWorker_p0-w0: resuming experience collection (7500 times) +[2024-06-10 21:19:23,240][46753] Fps is (10 sec: 42598.2, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 508936192. Throughput: 0: 43650.8. Samples: 509049920. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 21:19:23,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:19:26,184][46990] Updated weights for policy 0, policy_version 31070 (0.0039) +[2024-06-10 21:19:28,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 509149184. Throughput: 0: 43548.7. Samples: 509311080. Policy #0 lag: (min: 0.0, avg: 10.1, max: 21.0) +[2024-06-10 21:19:28,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:19:28,719][46990] Updated weights for policy 0, policy_version 31080 (0.0024) +[2024-06-10 21:19:33,239][46753] Fps is (10 sec: 40960.7, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 509345792. Throughput: 0: 43442.8. Samples: 509442200. Policy #0 lag: (min: 0.0, avg: 12.5, max: 21.0) +[2024-06-10 21:19:33,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:19:33,546][46990] Updated weights for policy 0, policy_version 31090 (0.0028) +[2024-06-10 21:19:36,439][46990] Updated weights for policy 0, policy_version 31100 (0.0033) +[2024-06-10 21:19:38,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.8, 300 sec: 43653.6). Total num frames: 509591552. Throughput: 0: 43336.6. Samples: 509693820. 
Policy #0 lag: (min: 0.0, avg: 12.5, max: 21.0) +[2024-06-10 21:19:38,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:19:41,353][46990] Updated weights for policy 0, policy_version 31110 (0.0031) +[2024-06-10 21:19:43,244][46753] Fps is (10 sec: 49129.4, 60 sec: 44234.7, 300 sec: 43597.4). Total num frames: 509837312. Throughput: 0: 43426.4. Samples: 509963060. Policy #0 lag: (min: 0.0, avg: 12.5, max: 21.0) +[2024-06-10 21:19:43,245][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:19:43,785][46990] Updated weights for policy 0, policy_version 31120 (0.0038) +[2024-06-10 21:19:48,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43417.6, 300 sec: 43654.3). Total num frames: 510001152. Throughput: 0: 43485.7. Samples: 510099720. Policy #0 lag: (min: 0.0, avg: 12.5, max: 21.0) +[2024-06-10 21:19:48,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:19:48,825][46990] Updated weights for policy 0, policy_version 31130 (0.0032) +[2024-06-10 21:19:51,268][46990] Updated weights for policy 0, policy_version 31140 (0.0022) +[2024-06-10 21:19:53,239][46753] Fps is (10 sec: 40978.8, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 510246912. Throughput: 0: 43695.2. Samples: 510356200. Policy #0 lag: (min: 0.0, avg: 12.4, max: 25.0) +[2024-06-10 21:19:53,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:19:56,144][46990] Updated weights for policy 0, policy_version 31150 (0.0040) +[2024-06-10 21:19:58,239][46753] Fps is (10 sec: 49151.7, 60 sec: 44236.9, 300 sec: 43709.2). Total num frames: 510492672. Throughput: 0: 43656.9. Samples: 510620360. Policy #0 lag: (min: 0.0, avg: 12.4, max: 25.0) +[2024-06-10 21:19:58,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:19:58,792][46990] Updated weights for policy 0, policy_version 31160 (0.0024) +[2024-06-10 21:20:03,239][46753] Fps is (10 sec: 40959.5, 60 sec: 43417.5, 300 sec: 43653.6). Total num frames: 510656512. Throughput: 0: 43545.7. Samples: 510751840. Policy #0 lag: (min: 0.0, avg: 12.4, max: 25.0) +[2024-06-10 21:20:03,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:20:03,661][46990] Updated weights for policy 0, policy_version 31170 (0.0047) +[2024-06-10 21:20:06,214][46990] Updated weights for policy 0, policy_version 31180 (0.0045) +[2024-06-10 21:20:08,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 510902272. Throughput: 0: 43484.5. Samples: 511006720. Policy #0 lag: (min: 0.0, avg: 12.4, max: 25.0) +[2024-06-10 21:20:08,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:20:11,592][46990] Updated weights for policy 0, policy_version 31190 (0.0046) +[2024-06-10 21:20:13,240][46753] Fps is (10 sec: 47513.4, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 511131648. Throughput: 0: 43604.3. Samples: 511273280. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:20:13,243][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:20:14,091][46990] Updated weights for policy 0, policy_version 31200 (0.0034) +[2024-06-10 21:20:18,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43417.5, 300 sec: 43653.7). Total num frames: 511311872. Throughput: 0: 43545.2. Samples: 511401740. 
Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:20:18,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:20:19,154][46990] Updated weights for policy 0, policy_version 31210 (0.0029) +[2024-06-10 21:20:21,498][46990] Updated weights for policy 0, policy_version 31220 (0.0034) +[2024-06-10 21:20:22,284][46970] Signal inference workers to stop experience collection... (7550 times) +[2024-06-10 21:20:22,285][46970] Signal inference workers to resume experience collection... (7550 times) +[2024-06-10 21:20:22,317][46990] InferenceWorker_p0-w0: stopping experience collection (7550 times) +[2024-06-10 21:20:22,318][46990] InferenceWorker_p0-w0: resuming experience collection (7550 times) +[2024-06-10 21:20:23,244][46753] Fps is (10 sec: 42579.7, 60 sec: 43687.5, 300 sec: 43597.4). Total num frames: 511557632. Throughput: 0: 43699.7. Samples: 511660500. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:20:23,245][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:20:23,260][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000031223_511557632.pth... +[2024-06-10 21:20:23,320][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000030584_501088256.pth +[2024-06-10 21:20:26,371][46990] Updated weights for policy 0, policy_version 31230 (0.0033) +[2024-06-10 21:20:28,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 511770624. Throughput: 0: 43524.8. Samples: 511921480. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:20:28,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:20:29,234][46990] Updated weights for policy 0, policy_version 31240 (0.0034) +[2024-06-10 21:20:33,239][46753] Fps is (10 sec: 39339.0, 60 sec: 43417.5, 300 sec: 43653.6). Total num frames: 511950848. Throughput: 0: 43254.6. Samples: 512046180. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 21:20:33,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:20:34,224][46990] Updated weights for policy 0, policy_version 31250 (0.0030) +[2024-06-10 21:20:36,564][46990] Updated weights for policy 0, policy_version 31260 (0.0024) +[2024-06-10 21:20:38,245][46753] Fps is (10 sec: 44213.9, 60 sec: 43686.9, 300 sec: 43597.3). Total num frames: 512212992. Throughput: 0: 43301.2. Samples: 512304980. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 21:20:38,245][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:20:41,922][46990] Updated weights for policy 0, policy_version 31270 (0.0036) +[2024-06-10 21:20:43,240][46753] Fps is (10 sec: 47513.5, 60 sec: 43147.7, 300 sec: 43598.1). Total num frames: 512425984. Throughput: 0: 43427.5. Samples: 512574600. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 21:20:43,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:20:44,227][46990] Updated weights for policy 0, policy_version 31280 (0.0036) +[2024-06-10 21:20:48,239][46753] Fps is (10 sec: 40981.3, 60 sec: 43690.7, 300 sec: 43653.7). Total num frames: 512622592. Throughput: 0: 43404.0. Samples: 512705020. Policy #0 lag: (min: 0.0, avg: 10.5, max: 21.0) +[2024-06-10 21:20:48,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:20:49,408][46990] Updated weights for policy 0, policy_version 31290 (0.0039) +[2024-06-10 21:20:51,406][46990] Updated weights for policy 0, policy_version 31300 (0.0032) +[2024-06-10 21:20:53,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43690.7, 300 sec: 43598.1). 
Total num frames: 512868352. Throughput: 0: 43671.7. Samples: 512971940. Policy #0 lag: (min: 0.0, avg: 12.6, max: 22.0) +[2024-06-10 21:20:53,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:20:56,788][46990] Updated weights for policy 0, policy_version 31310 (0.0028) +[2024-06-10 21:20:58,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43144.6, 300 sec: 43598.1). Total num frames: 513081344. Throughput: 0: 43549.0. Samples: 513232980. Policy #0 lag: (min: 0.0, avg: 12.6, max: 22.0) +[2024-06-10 21:20:58,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:20:59,406][46990] Updated weights for policy 0, policy_version 31320 (0.0036) +[2024-06-10 21:21:03,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 513277952. Throughput: 0: 43562.3. Samples: 513362040. Policy #0 lag: (min: 0.0, avg: 12.6, max: 22.0) +[2024-06-10 21:21:03,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:21:03,948][46990] Updated weights for policy 0, policy_version 31330 (0.0035) +[2024-06-10 21:21:06,639][46990] Updated weights for policy 0, policy_version 31340 (0.0046) +[2024-06-10 21:21:08,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 513523712. Throughput: 0: 43600.4. Samples: 513622320. Policy #0 lag: (min: 0.0, avg: 12.6, max: 22.0) +[2024-06-10 21:21:08,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:21:11,740][46990] Updated weights for policy 0, policy_version 31350 (0.0025) +[2024-06-10 21:21:13,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 513736704. Throughput: 0: 43651.5. Samples: 513885800. Policy #0 lag: (min: 1.0, avg: 8.9, max: 21.0) +[2024-06-10 21:21:13,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:21:14,393][46990] Updated weights for policy 0, policy_version 31360 (0.0036) +[2024-06-10 21:21:18,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43417.7, 300 sec: 43542.6). Total num frames: 513916928. Throughput: 0: 43680.6. Samples: 514011800. Policy #0 lag: (min: 1.0, avg: 8.9, max: 21.0) +[2024-06-10 21:21:18,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:21:19,354][46990] Updated weights for policy 0, policy_version 31370 (0.0030) +[2024-06-10 21:21:21,779][46990] Updated weights for policy 0, policy_version 31380 (0.0036) +[2024-06-10 21:21:23,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43694.0, 300 sec: 43598.1). Total num frames: 514179072. Throughput: 0: 43736.2. Samples: 514272880. Policy #0 lag: (min: 1.0, avg: 8.9, max: 21.0) +[2024-06-10 21:21:23,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:21:26,752][46990] Updated weights for policy 0, policy_version 31390 (0.0041) +[2024-06-10 21:21:28,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 514375680. Throughput: 0: 43728.6. Samples: 514542380. Policy #0 lag: (min: 1.0, avg: 8.9, max: 21.0) +[2024-06-10 21:21:28,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:21:28,645][46970] Signal inference workers to stop experience collection... (7600 times) +[2024-06-10 21:21:28,646][46970] Signal inference workers to resume experience collection... 
(7600 times) +[2024-06-10 21:21:28,661][46990] InferenceWorker_p0-w0: stopping experience collection (7600 times) +[2024-06-10 21:21:28,661][46990] InferenceWorker_p0-w0: resuming experience collection (7600 times) +[2024-06-10 21:21:29,478][46990] Updated weights for policy 0, policy_version 31400 (0.0032) +[2024-06-10 21:21:33,240][46753] Fps is (10 sec: 39320.6, 60 sec: 43690.6, 300 sec: 43542.5). Total num frames: 514572288. Throughput: 0: 43630.9. Samples: 514668420. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 21:21:33,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:21:34,071][46990] Updated weights for policy 0, policy_version 31410 (0.0039) +[2024-06-10 21:21:36,834][46990] Updated weights for policy 0, policy_version 31420 (0.0027) +[2024-06-10 21:21:38,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43694.4, 300 sec: 43598.1). Total num frames: 514834432. Throughput: 0: 43424.8. Samples: 514926060. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 21:21:38,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:21:41,733][46990] Updated weights for policy 0, policy_version 31430 (0.0038) +[2024-06-10 21:21:43,239][46753] Fps is (10 sec: 47514.4, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 515047424. Throughput: 0: 43722.6. Samples: 515200500. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 21:21:43,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:21:44,406][46990] Updated weights for policy 0, policy_version 31440 (0.0035) +[2024-06-10 21:21:48,239][46753] Fps is (10 sec: 39321.4, 60 sec: 43417.6, 300 sec: 43542.5). Total num frames: 515227648. Throughput: 0: 43616.8. Samples: 515324800. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 21:21:48,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:21:49,077][46990] Updated weights for policy 0, policy_version 31450 (0.0045) +[2024-06-10 21:21:51,759][46990] Updated weights for policy 0, policy_version 31460 (0.0037) +[2024-06-10 21:21:53,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43690.5, 300 sec: 43598.1). Total num frames: 515489792. Throughput: 0: 43668.8. Samples: 515587420. Policy #0 lag: (min: 0.0, avg: 11.1, max: 21.0) +[2024-06-10 21:21:53,248][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:21:56,595][46990] Updated weights for policy 0, policy_version 31470 (0.0040) +[2024-06-10 21:21:58,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 515686400. Throughput: 0: 43774.8. Samples: 515855660. Policy #0 lag: (min: 0.0, avg: 11.7, max: 20.0) +[2024-06-10 21:21:58,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:21:59,671][46990] Updated weights for policy 0, policy_version 31480 (0.0038) +[2024-06-10 21:22:03,239][46753] Fps is (10 sec: 37683.2, 60 sec: 43144.5, 300 sec: 43487.0). Total num frames: 515866624. Throughput: 0: 43543.0. Samples: 515971240. Policy #0 lag: (min: 0.0, avg: 11.7, max: 20.0) +[2024-06-10 21:22:03,244][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:22:04,102][46990] Updated weights for policy 0, policy_version 31490 (0.0027) +[2024-06-10 21:22:06,945][46990] Updated weights for policy 0, policy_version 31500 (0.0037) +[2024-06-10 21:22:08,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 516145152. Throughput: 0: 43534.7. Samples: 516231940. 
Policy #0 lag: (min: 0.0, avg: 11.7, max: 20.0) +[2024-06-10 21:22:08,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:22:11,772][46990] Updated weights for policy 0, policy_version 31510 (0.0035) +[2024-06-10 21:22:13,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43144.6, 300 sec: 43542.6). Total num frames: 516325376. Throughput: 0: 43621.3. Samples: 516505340. Policy #0 lag: (min: 0.0, avg: 11.7, max: 20.0) +[2024-06-10 21:22:13,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:22:14,746][46990] Updated weights for policy 0, policy_version 31520 (0.0041) +[2024-06-10 21:22:18,239][46753] Fps is (10 sec: 37682.7, 60 sec: 43417.5, 300 sec: 43376.6). Total num frames: 516521984. Throughput: 0: 43518.4. Samples: 516626740. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:22:18,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:22:19,041][46990] Updated weights for policy 0, policy_version 31530 (0.0036) +[2024-06-10 21:22:21,987][46990] Updated weights for policy 0, policy_version 31540 (0.0037) +[2024-06-10 21:22:23,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 516800512. Throughput: 0: 43683.6. Samples: 516891820. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:22:23,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:22:23,257][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000031543_516800512.pth... +[2024-06-10 21:22:23,310][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000030903_506314752.pth +[2024-06-10 21:22:26,558][46990] Updated weights for policy 0, policy_version 31550 (0.0032) +[2024-06-10 21:22:28,240][46753] Fps is (10 sec: 45875.2, 60 sec: 43417.5, 300 sec: 43543.2). Total num frames: 516980736. Throughput: 0: 43562.6. Samples: 517160820. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:22:28,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:22:29,710][46990] Updated weights for policy 0, policy_version 31560 (0.0038) +[2024-06-10 21:22:30,912][46970] Signal inference workers to stop experience collection... (7650 times) +[2024-06-10 21:22:30,913][46970] Signal inference workers to resume experience collection... (7650 times) +[2024-06-10 21:22:30,926][46990] InferenceWorker_p0-w0: stopping experience collection (7650 times) +[2024-06-10 21:22:30,945][46990] InferenceWorker_p0-w0: resuming experience collection (7650 times) +[2024-06-10 21:22:33,240][46753] Fps is (10 sec: 37682.5, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 517177344. Throughput: 0: 43438.6. Samples: 517279540. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:22:33,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:22:34,099][46990] Updated weights for policy 0, policy_version 31570 (0.0041) +[2024-06-10 21:22:37,192][46990] Updated weights for policy 0, policy_version 31580 (0.0033) +[2024-06-10 21:22:38,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 517455872. Throughput: 0: 43505.9. Samples: 517545180. Policy #0 lag: (min: 0.0, avg: 9.0, max: 19.0) +[2024-06-10 21:22:38,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:22:41,821][46990] Updated weights for policy 0, policy_version 31590 (0.0028) +[2024-06-10 21:22:43,240][46753] Fps is (10 sec: 47513.3, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 517652480. Throughput: 0: 43484.2. Samples: 517812460. 
Policy #0 lag: (min: 0.0, avg: 9.0, max: 19.0) +[2024-06-10 21:22:43,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:22:44,461][46990] Updated weights for policy 0, policy_version 31600 (0.0041) +[2024-06-10 21:22:48,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43690.8, 300 sec: 43542.6). Total num frames: 517849088. Throughput: 0: 43680.6. Samples: 517936860. Policy #0 lag: (min: 0.0, avg: 9.0, max: 19.0) +[2024-06-10 21:22:48,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:22:49,102][46990] Updated weights for policy 0, policy_version 31610 (0.0042) +[2024-06-10 21:22:52,323][46990] Updated weights for policy 0, policy_version 31620 (0.0034) +[2024-06-10 21:22:53,239][46753] Fps is (10 sec: 45875.9, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 518111232. Throughput: 0: 43758.5. Samples: 518201080. Policy #0 lag: (min: 0.0, avg: 9.0, max: 19.0) +[2024-06-10 21:22:53,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:22:56,366][46990] Updated weights for policy 0, policy_version 31630 (0.0031) +[2024-06-10 21:22:58,240][46753] Fps is (10 sec: 44234.4, 60 sec: 43417.2, 300 sec: 43543.2). Total num frames: 518291456. Throughput: 0: 43566.2. Samples: 518465840. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:22:58,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:22:59,766][46990] Updated weights for policy 0, policy_version 31640 (0.0031) +[2024-06-10 21:23:03,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43963.8, 300 sec: 43542.6). Total num frames: 518504448. Throughput: 0: 43537.9. Samples: 518585940. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:23:03,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:23:04,163][46990] Updated weights for policy 0, policy_version 31650 (0.0025) +[2024-06-10 21:23:07,309][46990] Updated weights for policy 0, policy_version 31660 (0.0040) +[2024-06-10 21:23:08,239][46753] Fps is (10 sec: 47516.1, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 518766592. Throughput: 0: 43516.9. Samples: 518850080. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:23:08,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:23:12,045][46990] Updated weights for policy 0, policy_version 31670 (0.0044) +[2024-06-10 21:23:13,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43417.6, 300 sec: 43432.2). Total num frames: 518930432. Throughput: 0: 43530.8. Samples: 519119700. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:23:13,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:23:14,723][46990] Updated weights for policy 0, policy_version 31680 (0.0043) +[2024-06-10 21:23:18,240][46753] Fps is (10 sec: 39321.0, 60 sec: 43963.7, 300 sec: 43542.6). Total num frames: 519159808. Throughput: 0: 43560.5. Samples: 519239760. Policy #0 lag: (min: 0.0, avg: 12.8, max: 22.0) +[2024-06-10 21:23:18,244][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:23:19,404][46990] Updated weights for policy 0, policy_version 31690 (0.0027) +[2024-06-10 21:23:22,193][46990] Updated weights for policy 0, policy_version 31700 (0.0041) +[2024-06-10 21:23:23,240][46753] Fps is (10 sec: 49151.2, 60 sec: 43690.6, 300 sec: 43709.2). Total num frames: 519421952. Throughput: 0: 43587.4. Samples: 519506620. 
Policy #0 lag: (min: 0.0, avg: 12.8, max: 22.0) +[2024-06-10 21:23:23,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:23:26,585][46990] Updated weights for policy 0, policy_version 31710 (0.0023) +[2024-06-10 21:23:28,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43417.7, 300 sec: 43542.6). Total num frames: 519585792. Throughput: 0: 43673.6. Samples: 519777760. Policy #0 lag: (min: 0.0, avg: 12.8, max: 22.0) +[2024-06-10 21:23:28,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:23:29,874][46990] Updated weights for policy 0, policy_version 31720 (0.0034) +[2024-06-10 21:23:33,239][46753] Fps is (10 sec: 37683.6, 60 sec: 43690.8, 300 sec: 43487.1). Total num frames: 519798784. Throughput: 0: 43446.2. Samples: 519891940. Policy #0 lag: (min: 0.0, avg: 12.8, max: 22.0) +[2024-06-10 21:23:33,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:23:34,504][46990] Updated weights for policy 0, policy_version 31730 (0.0033) +[2024-06-10 21:23:37,284][46990] Updated weights for policy 0, policy_version 31740 (0.0028) +[2024-06-10 21:23:38,239][46753] Fps is (10 sec: 49151.6, 60 sec: 43690.7, 300 sec: 43709.4). Total num frames: 520077312. Throughput: 0: 43504.5. Samples: 520158780. Policy #0 lag: (min: 0.0, avg: 12.3, max: 24.0) +[2024-06-10 21:23:38,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:23:42,092][46970] Signal inference workers to stop experience collection... (7700 times) +[2024-06-10 21:23:42,093][46970] Signal inference workers to resume experience collection... (7700 times) +[2024-06-10 21:23:42,144][46990] InferenceWorker_p0-w0: stopping experience collection (7700 times) +[2024-06-10 21:23:42,144][46990] InferenceWorker_p0-w0: resuming experience collection (7700 times) +[2024-06-10 21:23:42,232][46990] Updated weights for policy 0, policy_version 31750 (0.0030) +[2024-06-10 21:23:43,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43144.8, 300 sec: 43542.6). Total num frames: 520241152. Throughput: 0: 43693.4. Samples: 520432020. Policy #0 lag: (min: 0.0, avg: 12.3, max: 24.0) +[2024-06-10 21:23:43,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:23:44,472][46990] Updated weights for policy 0, policy_version 31760 (0.0023) +[2024-06-10 21:23:48,239][46753] Fps is (10 sec: 39321.9, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 520470528. Throughput: 0: 43689.8. Samples: 520551980. Policy #0 lag: (min: 0.0, avg: 12.3, max: 24.0) +[2024-06-10 21:23:48,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:23:49,659][46990] Updated weights for policy 0, policy_version 31770 (0.0036) +[2024-06-10 21:23:52,307][46990] Updated weights for policy 0, policy_version 31780 (0.0046) +[2024-06-10 21:23:53,239][46753] Fps is (10 sec: 49151.2, 60 sec: 43690.7, 300 sec: 43709.2). Total num frames: 520732672. Throughput: 0: 43650.1. Samples: 520814340. Policy #0 lag: (min: 0.0, avg: 12.3, max: 24.0) +[2024-06-10 21:23:53,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:23:56,801][46990] Updated weights for policy 0, policy_version 31790 (0.0040) +[2024-06-10 21:23:58,240][46753] Fps is (10 sec: 40959.1, 60 sec: 43144.8, 300 sec: 43487.0). Total num frames: 520880128. Throughput: 0: 43765.1. Samples: 521089140. 
Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:23:58,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:23:59,853][46990] Updated weights for policy 0, policy_version 31800 (0.0024) +[2024-06-10 21:24:03,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 521125888. Throughput: 0: 43637.5. Samples: 521203440. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:24:03,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:24:04,734][46990] Updated weights for policy 0, policy_version 31810 (0.0033) +[2024-06-10 21:24:07,371][46990] Updated weights for policy 0, policy_version 31820 (0.0041) +[2024-06-10 21:24:08,239][46753] Fps is (10 sec: 50791.0, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 521388032. Throughput: 0: 43565.8. Samples: 521467080. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:24:08,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:24:12,681][46990] Updated weights for policy 0, policy_version 31830 (0.0032) +[2024-06-10 21:24:13,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 521519104. Throughput: 0: 43560.9. Samples: 521738000. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:24:13,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:24:14,782][46990] Updated weights for policy 0, policy_version 31840 (0.0039) +[2024-06-10 21:24:18,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43690.8, 300 sec: 43542.6). Total num frames: 521781248. Throughput: 0: 43574.3. Samples: 521852780. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:24:18,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:24:19,931][46990] Updated weights for policy 0, policy_version 31850 (0.0028) +[2024-06-10 21:24:22,485][46990] Updated weights for policy 0, policy_version 31860 (0.0022) +[2024-06-10 21:24:23,244][46753] Fps is (10 sec: 52404.8, 60 sec: 43687.5, 300 sec: 43708.5). Total num frames: 522043392. Throughput: 0: 43573.4. Samples: 522119780. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 21:24:23,245][46753] Avg episode reward: [(0, '0.307')] +[2024-06-10 21:24:23,260][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000031863_522043392.pth... +[2024-06-10 21:24:23,309][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000031223_511557632.pth +[2024-06-10 21:24:23,319][46970] Saving new best policy, reward=0.307! +[2024-06-10 21:24:27,253][46990] Updated weights for policy 0, policy_version 31870 (0.0034) +[2024-06-10 21:24:28,239][46753] Fps is (10 sec: 39321.4, 60 sec: 43144.5, 300 sec: 43487.0). Total num frames: 522174464. Throughput: 0: 43636.3. Samples: 522395660. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 21:24:28,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:24:30,071][46990] Updated weights for policy 0, policy_version 31880 (0.0032) +[2024-06-10 21:24:33,239][46753] Fps is (10 sec: 37700.2, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 522420224. Throughput: 0: 43503.9. Samples: 522509660. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 21:24:33,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:24:35,109][46990] Updated weights for policy 0, policy_version 31890 (0.0031) +[2024-06-10 21:24:35,947][46970] Signal inference workers to stop experience collection... 
(7750 times) +[2024-06-10 21:24:35,947][46970] Signal inference workers to resume experience collection... (7750 times) +[2024-06-10 21:24:35,977][46990] InferenceWorker_p0-w0: stopping experience collection (7750 times) +[2024-06-10 21:24:35,978][46990] InferenceWorker_p0-w0: resuming experience collection (7750 times) +[2024-06-10 21:24:37,317][46990] Updated weights for policy 0, policy_version 31900 (0.0042) +[2024-06-10 21:24:38,240][46753] Fps is (10 sec: 52428.2, 60 sec: 43690.6, 300 sec: 43598.8). Total num frames: 522698752. Throughput: 0: 43593.7. Samples: 522776060. Policy #0 lag: (min: 0.0, avg: 9.5, max: 22.0) +[2024-06-10 21:24:38,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:24:42,781][46990] Updated weights for policy 0, policy_version 31910 (0.0024) +[2024-06-10 21:24:43,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.5, 300 sec: 43542.6). Total num frames: 522846208. Throughput: 0: 43506.4. Samples: 523046920. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:24:43,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:24:44,893][46990] Updated weights for policy 0, policy_version 31920 (0.0040) +[2024-06-10 21:24:48,239][46753] Fps is (10 sec: 39322.2, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 523091968. Throughput: 0: 43455.6. Samples: 523158940. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:24:48,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:24:50,333][46990] Updated weights for policy 0, policy_version 31930 (0.0049) +[2024-06-10 21:24:52,387][46990] Updated weights for policy 0, policy_version 31940 (0.0044) +[2024-06-10 21:24:53,239][46753] Fps is (10 sec: 50790.3, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 523354112. Throughput: 0: 43572.9. Samples: 523427860. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:24:53,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:24:57,712][46990] Updated weights for policy 0, policy_version 31950 (0.0052) +[2024-06-10 21:24:58,239][46753] Fps is (10 sec: 37683.4, 60 sec: 43144.7, 300 sec: 43431.5). Total num frames: 523468800. Throughput: 0: 43530.3. Samples: 523696860. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:24:58,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:25:00,233][46990] Updated weights for policy 0, policy_version 31960 (0.0033) +[2024-06-10 21:25:03,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 523747328. Throughput: 0: 43520.9. Samples: 523811220. Policy #0 lag: (min: 0.0, avg: 13.3, max: 23.0) +[2024-06-10 21:25:03,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:25:05,403][46990] Updated weights for policy 0, policy_version 31970 (0.0036) +[2024-06-10 21:25:07,770][46990] Updated weights for policy 0, policy_version 31980 (0.0042) +[2024-06-10 21:25:08,239][46753] Fps is (10 sec: 52428.9, 60 sec: 43417.7, 300 sec: 43598.1). Total num frames: 523993088. Throughput: 0: 43565.4. Samples: 524080020. Policy #0 lag: (min: 0.0, avg: 13.3, max: 23.0) +[2024-06-10 21:25:08,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:25:12,572][46990] Updated weights for policy 0, policy_version 31990 (0.0031) +[2024-06-10 21:25:13,239][46753] Fps is (10 sec: 39321.1, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 524140544. Throughput: 0: 43385.3. Samples: 524348000. 
Policy #0 lag: (min: 0.0, avg: 13.3, max: 23.0) +[2024-06-10 21:25:13,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:25:15,114][46990] Updated weights for policy 0, policy_version 32000 (0.0030) +[2024-06-10 21:25:18,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.7, 300 sec: 43598.8). Total num frames: 524419072. Throughput: 0: 43429.8. Samples: 524464000. Policy #0 lag: (min: 0.0, avg: 13.3, max: 23.0) +[2024-06-10 21:25:18,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:25:20,419][46990] Updated weights for policy 0, policy_version 32010 (0.0036) +[2024-06-10 21:25:22,882][46990] Updated weights for policy 0, policy_version 32020 (0.0038) +[2024-06-10 21:25:23,240][46753] Fps is (10 sec: 50788.6, 60 sec: 43420.6, 300 sec: 43653.6). Total num frames: 524648448. Throughput: 0: 43418.8. Samples: 524729920. Policy #0 lag: (min: 0.0, avg: 10.1, max: 24.0) +[2024-06-10 21:25:23,244][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:25:25,294][46970] Signal inference workers to stop experience collection... (7800 times) +[2024-06-10 21:25:25,342][46990] InferenceWorker_p0-w0: stopping experience collection (7800 times) +[2024-06-10 21:25:25,349][46970] Signal inference workers to resume experience collection... (7800 times) +[2024-06-10 21:25:25,363][46990] InferenceWorker_p0-w0: resuming experience collection (7800 times) +[2024-06-10 21:25:28,157][46990] Updated weights for policy 0, policy_version 32030 (0.0046) +[2024-06-10 21:25:28,239][46753] Fps is (10 sec: 36044.9, 60 sec: 43417.7, 300 sec: 43487.0). Total num frames: 524779520. Throughput: 0: 43396.1. Samples: 524999740. Policy #0 lag: (min: 0.0, avg: 10.1, max: 24.0) +[2024-06-10 21:25:28,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:25:30,600][46990] Updated weights for policy 0, policy_version 32040 (0.0034) +[2024-06-10 21:25:33,239][46753] Fps is (10 sec: 42600.0, 60 sec: 44236.8, 300 sec: 43598.9). Total num frames: 525074432. Throughput: 0: 43537.7. Samples: 525118140. Policy #0 lag: (min: 0.0, avg: 10.1, max: 24.0) +[2024-06-10 21:25:33,242][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:25:35,381][46990] Updated weights for policy 0, policy_version 32050 (0.0043) +[2024-06-10 21:25:38,020][46990] Updated weights for policy 0, policy_version 32060 (0.0044) +[2024-06-10 21:25:38,240][46753] Fps is (10 sec: 50789.5, 60 sec: 43144.5, 300 sec: 43598.1). Total num frames: 525287424. Throughput: 0: 43526.1. Samples: 525386540. Policy #0 lag: (min: 0.0, avg: 10.1, max: 24.0) +[2024-06-10 21:25:38,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:25:42,586][46990] Updated weights for policy 0, policy_version 32070 (0.0024) +[2024-06-10 21:25:43,241][46753] Fps is (10 sec: 36038.7, 60 sec: 43143.3, 300 sec: 43431.2). Total num frames: 525434880. Throughput: 0: 43492.5. Samples: 525654100. Policy #0 lag: (min: 0.0, avg: 10.1, max: 24.0) +[2024-06-10 21:25:43,242][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:25:45,389][46990] Updated weights for policy 0, policy_version 32080 (0.0046) +[2024-06-10 21:25:48,240][46753] Fps is (10 sec: 44236.9, 60 sec: 43963.6, 300 sec: 43598.1). Total num frames: 525729792. Throughput: 0: 43604.7. Samples: 525773440. 
Policy #0 lag: (min: 0.0, avg: 7.1, max: 21.0) +[2024-06-10 21:25:48,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:25:50,128][46990] Updated weights for policy 0, policy_version 32090 (0.0035) +[2024-06-10 21:25:53,043][46990] Updated weights for policy 0, policy_version 32100 (0.0042) +[2024-06-10 21:25:53,239][46753] Fps is (10 sec: 49160.3, 60 sec: 42871.5, 300 sec: 43542.6). Total num frames: 525926400. Throughput: 0: 43429.2. Samples: 526034340. Policy #0 lag: (min: 0.0, avg: 7.1, max: 21.0) +[2024-06-10 21:25:53,240][46753] Avg episode reward: [(0, '0.307')] +[2024-06-10 21:25:57,749][46990] Updated weights for policy 0, policy_version 32110 (0.0036) +[2024-06-10 21:25:58,239][46753] Fps is (10 sec: 36045.3, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 526090240. Throughput: 0: 43247.2. Samples: 526294120. Policy #0 lag: (min: 0.0, avg: 7.1, max: 21.0) +[2024-06-10 21:25:58,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:26:01,173][46990] Updated weights for policy 0, policy_version 32120 (0.0035) +[2024-06-10 21:26:03,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 526368768. Throughput: 0: 43559.2. Samples: 526424160. Policy #0 lag: (min: 0.0, avg: 7.1, max: 21.0) +[2024-06-10 21:26:03,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:26:04,854][46990] Updated weights for policy 0, policy_version 32130 (0.0039) +[2024-06-10 21:26:08,240][46753] Fps is (10 sec: 47513.1, 60 sec: 42871.4, 300 sec: 43487.0). Total num frames: 526565376. Throughput: 0: 43491.0. Samples: 526687000. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:26:08,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:26:08,493][46990] Updated weights for policy 0, policy_version 32140 (0.0021) +[2024-06-10 21:26:12,390][46990] Updated weights for policy 0, policy_version 32150 (0.0036) +[2024-06-10 21:26:13,240][46753] Fps is (10 sec: 37682.3, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 526745600. Throughput: 0: 43358.0. Samples: 526950860. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:26:13,243][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:26:15,833][46990] Updated weights for policy 0, policy_version 32160 (0.0038) +[2024-06-10 21:26:18,240][46753] Fps is (10 sec: 45874.9, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 527024128. Throughput: 0: 43542.6. Samples: 527077560. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:26:18,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:26:19,759][46990] Updated weights for policy 0, policy_version 32170 (0.0034) +[2024-06-10 21:26:23,239][46753] Fps is (10 sec: 47514.4, 60 sec: 42871.8, 300 sec: 43542.6). Total num frames: 527220736. Throughput: 0: 43373.9. Samples: 527338360. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:26:23,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:26:23,253][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000032179_527220736.pth... +[2024-06-10 21:26:23,307][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000031543_516800512.pth +[2024-06-10 21:26:23,645][46990] Updated weights for policy 0, policy_version 32180 (0.0033) +[2024-06-10 21:26:27,487][46990] Updated weights for policy 0, policy_version 32190 (0.0035) +[2024-06-10 21:26:28,239][46753] Fps is (10 sec: 39322.0, 60 sec: 43963.6, 300 sec: 43542.6). Total num frames: 527417344. 
Throughput: 0: 43179.4. Samples: 527597100. Policy #0 lag: (min: 0.0, avg: 12.7, max: 21.0) +[2024-06-10 21:26:28,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:26:31,318][46990] Updated weights for policy 0, policy_version 32200 (0.0034) +[2024-06-10 21:26:31,841][46970] Signal inference workers to stop experience collection... (7850 times) +[2024-06-10 21:26:31,841][46970] Signal inference workers to resume experience collection... (7850 times) +[2024-06-10 21:26:31,870][46990] InferenceWorker_p0-w0: stopping experience collection (7850 times) +[2024-06-10 21:26:31,870][46990] InferenceWorker_p0-w0: resuming experience collection (7850 times) +[2024-06-10 21:26:33,240][46753] Fps is (10 sec: 44236.0, 60 sec: 43144.5, 300 sec: 43487.0). Total num frames: 527663104. Throughput: 0: 43436.4. Samples: 527728080. Policy #0 lag: (min: 0.0, avg: 12.7, max: 21.0) +[2024-06-10 21:26:33,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:26:34,982][46990] Updated weights for policy 0, policy_version 32210 (0.0033) +[2024-06-10 21:26:38,239][46753] Fps is (10 sec: 44236.8, 60 sec: 42871.5, 300 sec: 43431.5). Total num frames: 527859712. Throughput: 0: 43340.0. Samples: 527984640. Policy #0 lag: (min: 0.0, avg: 12.7, max: 21.0) +[2024-06-10 21:26:38,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:26:38,796][46990] Updated weights for policy 0, policy_version 32220 (0.0037) +[2024-06-10 21:26:42,847][46990] Updated weights for policy 0, policy_version 32230 (0.0028) +[2024-06-10 21:26:43,240][46753] Fps is (10 sec: 39321.6, 60 sec: 43691.8, 300 sec: 43487.0). Total num frames: 528056320. Throughput: 0: 43447.4. Samples: 528249260. Policy #0 lag: (min: 0.0, avg: 12.7, max: 21.0) +[2024-06-10 21:26:43,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:26:46,210][46990] Updated weights for policy 0, policy_version 32240 (0.0031) +[2024-06-10 21:26:48,240][46753] Fps is (10 sec: 47513.3, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 528334848. Throughput: 0: 43505.1. Samples: 528381900. Policy #0 lag: (min: 1.0, avg: 10.2, max: 23.0) +[2024-06-10 21:26:48,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:26:50,273][46990] Updated weights for policy 0, policy_version 32250 (0.0047) +[2024-06-10 21:26:53,244][46753] Fps is (10 sec: 44217.2, 60 sec: 42868.2, 300 sec: 43430.8). Total num frames: 528498688. Throughput: 0: 43188.6. Samples: 528630680. Policy #0 lag: (min: 1.0, avg: 10.2, max: 23.0) +[2024-06-10 21:26:53,245][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:26:53,941][46990] Updated weights for policy 0, policy_version 32260 (0.0030) +[2024-06-10 21:26:57,538][46990] Updated weights for policy 0, policy_version 32270 (0.0032) +[2024-06-10 21:26:58,239][46753] Fps is (10 sec: 37683.7, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 528711680. Throughput: 0: 42970.3. Samples: 528884520. Policy #0 lag: (min: 1.0, avg: 10.2, max: 23.0) +[2024-06-10 21:26:58,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:27:01,576][46990] Updated weights for policy 0, policy_version 32280 (0.0051) +[2024-06-10 21:27:03,239][46753] Fps is (10 sec: 45896.4, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 528957440. Throughput: 0: 43147.8. Samples: 529019200. 
Policy #0 lag: (min: 1.0, avg: 10.2, max: 23.0) +[2024-06-10 21:27:03,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:27:05,084][46990] Updated weights for policy 0, policy_version 32290 (0.0036) +[2024-06-10 21:27:08,239][46753] Fps is (10 sec: 42598.5, 60 sec: 42871.5, 300 sec: 43431.5). Total num frames: 529137664. Throughput: 0: 43136.9. Samples: 529279520. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:27:08,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:27:09,226][46990] Updated weights for policy 0, policy_version 32300 (0.0027) +[2024-06-10 21:27:12,657][46990] Updated weights for policy 0, policy_version 32310 (0.0033) +[2024-06-10 21:27:13,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 529383424. Throughput: 0: 43241.9. Samples: 529542980. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:27:13,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:27:16,684][46990] Updated weights for policy 0, policy_version 32320 (0.0030) +[2024-06-10 21:27:18,239][46753] Fps is (10 sec: 49152.1, 60 sec: 43417.7, 300 sec: 43487.0). Total num frames: 529629184. Throughput: 0: 43366.8. Samples: 529679580. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:27:18,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:27:19,997][46990] Updated weights for policy 0, policy_version 32330 (0.0042) +[2024-06-10 21:27:23,239][46753] Fps is (10 sec: 40959.7, 60 sec: 42871.4, 300 sec: 43431.5). Total num frames: 529793024. Throughput: 0: 43365.4. Samples: 529936080. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:27:23,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:27:24,155][46990] Updated weights for policy 0, policy_version 32340 (0.0028) +[2024-06-10 21:27:27,573][46990] Updated weights for policy 0, policy_version 32350 (0.0034) +[2024-06-10 21:27:28,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 530038784. Throughput: 0: 43116.1. Samples: 530189480. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:27:28,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:27:31,692][46990] Updated weights for policy 0, policy_version 32360 (0.0033) +[2024-06-10 21:27:33,240][46753] Fps is (10 sec: 45874.1, 60 sec: 43144.4, 300 sec: 43375.9). Total num frames: 530251776. Throughput: 0: 43206.5. Samples: 530326200. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 21:27:33,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:27:35,293][46990] Updated weights for policy 0, policy_version 32370 (0.0039) +[2024-06-10 21:27:38,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43144.6, 300 sec: 43376.0). Total num frames: 530448384. Throughput: 0: 43522.6. Samples: 530589000. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 21:27:38,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:27:39,304][46990] Updated weights for policy 0, policy_version 32380 (0.0035) +[2024-06-10 21:27:42,608][46990] Updated weights for policy 0, policy_version 32390 (0.0034) +[2024-06-10 21:27:43,244][46753] Fps is (10 sec: 44218.0, 60 sec: 43960.5, 300 sec: 43541.9). Total num frames: 530694144. Throughput: 0: 43548.1. Samples: 530844380. 
Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 21:27:43,244][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:27:46,805][46990] Updated weights for policy 0, policy_version 32400 (0.0041) +[2024-06-10 21:27:48,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43144.7, 300 sec: 43431.5). Total num frames: 530923520. Throughput: 0: 43611.5. Samples: 530981720. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0) +[2024-06-10 21:27:48,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:27:50,244][46990] Updated weights for policy 0, policy_version 32410 (0.0041) +[2024-06-10 21:27:53,239][46753] Fps is (10 sec: 39339.4, 60 sec: 43147.8, 300 sec: 43376.0). Total num frames: 531087360. Throughput: 0: 43469.3. Samples: 531235640. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 21:27:53,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:27:54,234][46990] Updated weights for policy 0, policy_version 32420 (0.0034) +[2024-06-10 21:27:57,846][46990] Updated weights for policy 0, policy_version 32430 (0.0038) +[2024-06-10 21:27:58,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43963.8, 300 sec: 43542.6). Total num frames: 531349504. Throughput: 0: 43142.7. Samples: 531484400. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 21:27:58,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:28:02,138][46990] Updated weights for policy 0, policy_version 32440 (0.0027) +[2024-06-10 21:28:03,240][46753] Fps is (10 sec: 45874.6, 60 sec: 43144.4, 300 sec: 43320.4). Total num frames: 531546112. Throughput: 0: 43252.3. Samples: 531625940. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 21:28:03,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:28:04,936][46970] Signal inference workers to stop experience collection... (7900 times) +[2024-06-10 21:28:04,981][46990] InferenceWorker_p0-w0: stopping experience collection (7900 times) +[2024-06-10 21:28:05,051][46970] Signal inference workers to resume experience collection... (7900 times) +[2024-06-10 21:28:05,052][46990] InferenceWorker_p0-w0: resuming experience collection (7900 times) +[2024-06-10 21:28:05,181][46990] Updated weights for policy 0, policy_version 32450 (0.0021) +[2024-06-10 21:28:08,240][46753] Fps is (10 sec: 40959.5, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 531759104. Throughput: 0: 43323.9. Samples: 531885660. Policy #0 lag: (min: 0.0, avg: 12.1, max: 22.0) +[2024-06-10 21:28:08,249][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:28:09,623][46990] Updated weights for policy 0, policy_version 32460 (0.0038) +[2024-06-10 21:28:12,726][46990] Updated weights for policy 0, policy_version 32470 (0.0034) +[2024-06-10 21:28:13,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 532004864. Throughput: 0: 43469.2. Samples: 532145600. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:28:13,252][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:28:17,123][46990] Updated weights for policy 0, policy_version 32480 (0.0035) +[2024-06-10 21:28:18,239][46753] Fps is (10 sec: 44237.0, 60 sec: 42871.4, 300 sec: 43320.4). Total num frames: 532201472. Throughput: 0: 43521.5. Samples: 532284660. 
Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:28:18,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:28:20,163][46990] Updated weights for policy 0, policy_version 32490 (0.0048) +[2024-06-10 21:28:23,240][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 532414464. Throughput: 0: 43392.4. Samples: 532541660. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:28:23,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:28:23,253][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000032496_532414464.pth... +[2024-06-10 21:28:23,317][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000031863_522043392.pth +[2024-06-10 21:28:24,421][46990] Updated weights for policy 0, policy_version 32500 (0.0042) +[2024-06-10 21:28:27,827][46990] Updated weights for policy 0, policy_version 32510 (0.0033) +[2024-06-10 21:28:28,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 532660224. Throughput: 0: 43295.9. Samples: 532792500. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:28:28,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:28:32,081][46990] Updated weights for policy 0, policy_version 32520 (0.0032) +[2024-06-10 21:28:33,239][46753] Fps is (10 sec: 40960.2, 60 sec: 42871.6, 300 sec: 43209.3). Total num frames: 532824064. Throughput: 0: 43311.9. Samples: 532930760. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 21:28:33,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:28:35,091][46990] Updated weights for policy 0, policy_version 32530 (0.0029) +[2024-06-10 21:28:38,240][46753] Fps is (10 sec: 39321.3, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 533053440. Throughput: 0: 43357.6. Samples: 533186740. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 21:28:38,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:28:40,013][46990] Updated weights for policy 0, policy_version 32540 (0.0034) +[2024-06-10 21:28:42,577][46990] Updated weights for policy 0, policy_version 32550 (0.0025) +[2024-06-10 21:28:43,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43420.8, 300 sec: 43487.0). Total num frames: 533299200. Throughput: 0: 43599.0. Samples: 533446360. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 21:28:43,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:28:47,362][46990] Updated weights for policy 0, policy_version 32560 (0.0030) +[2024-06-10 21:28:48,239][46753] Fps is (10 sec: 44237.5, 60 sec: 42871.5, 300 sec: 43264.9). Total num frames: 533495808. Throughput: 0: 43597.9. Samples: 533587840. Policy #0 lag: (min: 0.0, avg: 10.0, max: 23.0) +[2024-06-10 21:28:48,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:28:50,233][46990] Updated weights for policy 0, policy_version 32570 (0.0042) +[2024-06-10 21:28:53,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43690.5, 300 sec: 43487.0). Total num frames: 533708800. Throughput: 0: 43472.0. Samples: 533841900. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:28:53,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:28:55,149][46990] Updated weights for policy 0, policy_version 32580 (0.0036) +[2024-06-10 21:28:57,807][46990] Updated weights for policy 0, policy_version 32590 (0.0056) +[2024-06-10 21:28:58,239][46753] Fps is (10 sec: 47513.5, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 533970944. 
Throughput: 0: 43314.8. Samples: 534094760. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:28:58,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:29:02,978][46990] Updated weights for policy 0, policy_version 32600 (0.0032) +[2024-06-10 21:29:03,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43144.6, 300 sec: 43209.3). Total num frames: 534134784. Throughput: 0: 43349.4. Samples: 534235380. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:29:03,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:29:05,169][46990] Updated weights for policy 0, policy_version 32610 (0.0023) +[2024-06-10 21:29:08,239][46753] Fps is (10 sec: 37683.5, 60 sec: 43144.7, 300 sec: 43487.0). Total num frames: 534347776. Throughput: 0: 43375.3. Samples: 534493540. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:29:08,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:29:10,332][46990] Updated weights for policy 0, policy_version 32620 (0.0032) +[2024-06-10 21:29:12,599][46990] Updated weights for policy 0, policy_version 32630 (0.0043) +[2024-06-10 21:29:13,240][46753] Fps is (10 sec: 47511.4, 60 sec: 43417.3, 300 sec: 43486.9). Total num frames: 534609920. Throughput: 0: 43486.7. Samples: 534749420. Policy #0 lag: (min: 0.0, avg: 8.6, max: 21.0) +[2024-06-10 21:29:13,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:29:17,689][46990] Updated weights for policy 0, policy_version 32640 (0.0035) +[2024-06-10 21:29:18,239][46753] Fps is (10 sec: 45874.5, 60 sec: 43417.6, 300 sec: 43265.5). Total num frames: 534806528. Throughput: 0: 43568.0. Samples: 534891320. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:29:18,243][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:29:20,287][46990] Updated weights for policy 0, policy_version 32650 (0.0034) +[2024-06-10 21:29:23,240][46753] Fps is (10 sec: 40961.6, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 535019520. Throughput: 0: 43670.2. Samples: 535151900. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:29:23,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:29:25,479][46990] Updated weights for policy 0, policy_version 32660 (0.0051) +[2024-06-10 21:29:27,866][46990] Updated weights for policy 0, policy_version 32670 (0.0027) +[2024-06-10 21:29:28,240][46753] Fps is (10 sec: 47513.3, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 535281664. Throughput: 0: 43339.0. Samples: 535396620. Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:29:28,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:29:30,455][46970] Signal inference workers to stop experience collection... (7950 times) +[2024-06-10 21:29:30,507][46970] Signal inference workers to resume experience collection... (7950 times) +[2024-06-10 21:29:30,508][46990] InferenceWorker_p0-w0: stopping experience collection (7950 times) +[2024-06-10 21:29:30,555][46990] InferenceWorker_p0-w0: resuming experience collection (7950 times) +[2024-06-10 21:29:32,870][46990] Updated weights for policy 0, policy_version 32680 (0.0038) +[2024-06-10 21:29:33,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43417.7, 300 sec: 43153.8). Total num frames: 535429120. Throughput: 0: 43319.6. Samples: 535537220. 
Policy #0 lag: (min: 0.0, avg: 9.9, max: 21.0) +[2024-06-10 21:29:33,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:29:35,289][46990] Updated weights for policy 0, policy_version 32690 (0.0030) +[2024-06-10 21:29:38,239][46753] Fps is (10 sec: 37683.4, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 535658496. Throughput: 0: 43533.0. Samples: 535800880. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:29:38,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:29:40,366][46990] Updated weights for policy 0, policy_version 32700 (0.0033) +[2024-06-10 21:29:42,750][46990] Updated weights for policy 0, policy_version 32710 (0.0027) +[2024-06-10 21:29:43,239][46753] Fps is (10 sec: 49152.2, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 535920640. Throughput: 0: 43475.2. Samples: 536051140. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:29:43,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:29:47,770][46990] Updated weights for policy 0, policy_version 32720 (0.0045) +[2024-06-10 21:29:48,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43690.7, 300 sec: 43264.9). Total num frames: 536117248. Throughput: 0: 43627.6. Samples: 536198620. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:29:48,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:29:50,465][46990] Updated weights for policy 0, policy_version 32730 (0.0031) +[2024-06-10 21:29:53,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.8, 300 sec: 43598.1). Total num frames: 536330240. Throughput: 0: 43652.8. Samples: 536457920. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:29:53,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:29:55,256][46990] Updated weights for policy 0, policy_version 32740 (0.0033) +[2024-06-10 21:29:57,797][46990] Updated weights for policy 0, policy_version 32750 (0.0036) +[2024-06-10 21:29:58,244][46753] Fps is (10 sec: 47492.0, 60 sec: 43687.4, 300 sec: 43541.9). Total num frames: 536592384. Throughput: 0: 43489.9. Samples: 536706640. Policy #0 lag: (min: 0.0, avg: 12.7, max: 23.0) +[2024-06-10 21:29:58,244][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:30:02,833][46990] Updated weights for policy 0, policy_version 32760 (0.0037) +[2024-06-10 21:30:03,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.7, 300 sec: 43264.9). Total num frames: 536756224. Throughput: 0: 43423.6. Samples: 536845380. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 21:30:03,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:30:05,405][46990] Updated weights for policy 0, policy_version 32770 (0.0026) +[2024-06-10 21:30:08,239][46753] Fps is (10 sec: 37700.4, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 536969216. Throughput: 0: 43477.5. Samples: 537108380. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 21:30:08,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:30:10,231][46990] Updated weights for policy 0, policy_version 32780 (0.0036) +[2024-06-10 21:30:12,824][46990] Updated weights for policy 0, policy_version 32790 (0.0036) +[2024-06-10 21:30:13,239][46753] Fps is (10 sec: 49152.1, 60 sec: 43964.1, 300 sec: 43487.0). Total num frames: 537247744. Throughput: 0: 43638.3. Samples: 537360340. 
Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 21:30:13,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:30:17,826][46990] Updated weights for policy 0, policy_version 32800 (0.0024) +[2024-06-10 21:30:18,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43690.7, 300 sec: 43320.5). Total num frames: 537427968. Throughput: 0: 43804.0. Samples: 537508400. Policy #0 lag: (min: 0.0, avg: 11.8, max: 23.0) +[2024-06-10 21:30:18,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:30:20,127][46990] Updated weights for policy 0, policy_version 32810 (0.0032) +[2024-06-10 21:30:23,239][46753] Fps is (10 sec: 36044.9, 60 sec: 43144.6, 300 sec: 43487.0). Total num frames: 537608192. Throughput: 0: 43557.4. Samples: 537760960. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 21:30:23,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:30:23,265][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000032814_537624576.pth... +[2024-06-10 21:30:23,336][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000032179_527220736.pth +[2024-06-10 21:30:25,394][46990] Updated weights for policy 0, policy_version 32820 (0.0036) +[2024-06-10 21:30:27,593][46970] Signal inference workers to stop experience collection... (8000 times) +[2024-06-10 21:30:27,594][46970] Signal inference workers to resume experience collection... (8000 times) +[2024-06-10 21:30:27,611][46990] InferenceWorker_p0-w0: stopping experience collection (8000 times) +[2024-06-10 21:30:27,612][46990] InferenceWorker_p0-w0: resuming experience collection (8000 times) +[2024-06-10 21:30:28,046][46990] Updated weights for policy 0, policy_version 32830 (0.0034) +[2024-06-10 21:30:28,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43417.7, 300 sec: 43431.5). Total num frames: 537886720. Throughput: 0: 43507.1. Samples: 538008960. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 21:30:28,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:30:32,983][46990] Updated weights for policy 0, policy_version 32840 (0.0042) +[2024-06-10 21:30:33,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43690.6, 300 sec: 43264.9). Total num frames: 538050560. Throughput: 0: 43214.0. Samples: 538143260. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 21:30:33,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:30:35,519][46990] Updated weights for policy 0, policy_version 32850 (0.0037) +[2024-06-10 21:30:38,239][46753] Fps is (10 sec: 39321.3, 60 sec: 43690.7, 300 sec: 43542.8). Total num frames: 538279936. Throughput: 0: 43344.8. Samples: 538408440. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 21:30:38,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:30:40,412][46990] Updated weights for policy 0, policy_version 32860 (0.0043) +[2024-06-10 21:30:42,841][46990] Updated weights for policy 0, policy_version 32870 (0.0032) +[2024-06-10 21:30:43,240][46753] Fps is (10 sec: 50790.0, 60 sec: 43963.5, 300 sec: 43487.0). Total num frames: 538558464. Throughput: 0: 43402.8. Samples: 538659580. Policy #0 lag: (min: 0.0, avg: 11.2, max: 23.0) +[2024-06-10 21:30:43,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:30:48,145][46990] Updated weights for policy 0, policy_version 32880 (0.0027) +[2024-06-10 21:30:48,244][46753] Fps is (10 sec: 42579.2, 60 sec: 43141.2, 300 sec: 43319.7). Total num frames: 538705920. Throughput: 0: 43565.8. Samples: 538806040. 
Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 21:30:48,245][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:30:50,399][46990] Updated weights for policy 0, policy_version 32890 (0.0032) +[2024-06-10 21:30:53,239][46753] Fps is (10 sec: 37683.8, 60 sec: 43417.6, 300 sec: 43542.6). Total num frames: 538935296. Throughput: 0: 43424.3. Samples: 539062480. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 21:30:53,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:30:55,393][46990] Updated weights for policy 0, policy_version 32900 (0.0040) +[2024-06-10 21:30:58,239][46753] Fps is (10 sec: 47535.5, 60 sec: 43147.8, 300 sec: 43431.5). Total num frames: 539181056. Throughput: 0: 43445.8. Samples: 539315400. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 21:30:58,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:30:58,274][46990] Updated weights for policy 0, policy_version 32910 (0.0041) +[2024-06-10 21:31:03,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 539344896. Throughput: 0: 43069.2. Samples: 539446520. Policy #0 lag: (min: 0.0, avg: 11.2, max: 21.0) +[2024-06-10 21:31:03,240][46753] Avg episode reward: [(0, '0.305')] +[2024-06-10 21:31:03,290][46990] Updated weights for policy 0, policy_version 32920 (0.0034) +[2024-06-10 21:31:05,827][46990] Updated weights for policy 0, policy_version 32930 (0.0036) +[2024-06-10 21:31:08,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 539590656. Throughput: 0: 43276.0. Samples: 539708380. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:31:08,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:31:10,761][46990] Updated weights for policy 0, policy_version 32940 (0.0049) +[2024-06-10 21:31:13,194][46990] Updated weights for policy 0, policy_version 32950 (0.0036) +[2024-06-10 21:31:13,239][46753] Fps is (10 sec: 50790.7, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 539852800. Throughput: 0: 43470.2. Samples: 539965120. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:31:13,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:31:18,239][46753] Fps is (10 sec: 40960.4, 60 sec: 42871.5, 300 sec: 43320.4). Total num frames: 540000256. Throughput: 0: 43519.3. Samples: 540101620. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:31:18,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:31:18,367][46990] Updated weights for policy 0, policy_version 32960 (0.0042) +[2024-06-10 21:31:20,997][46990] Updated weights for policy 0, policy_version 32970 (0.0029) +[2024-06-10 21:31:23,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43963.7, 300 sec: 43487.0). Total num frames: 540246016. Throughput: 0: 43262.7. Samples: 540355260. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:31:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:31:25,745][46990] Updated weights for policy 0, policy_version 32980 (0.0041) +[2024-06-10 21:31:28,239][46753] Fps is (10 sec: 45875.0, 60 sec: 42871.5, 300 sec: 43376.0). Total num frames: 540459008. Throughput: 0: 43572.2. Samples: 540620320. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:31:28,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:31:28,865][46990] Updated weights for policy 0, policy_version 32990 (0.0038) +[2024-06-10 21:31:33,239][46753] Fps is (10 sec: 39321.5, 60 sec: 43144.6, 300 sec: 43320.4). Total num frames: 540639232. 
Throughput: 0: 42977.2. Samples: 540739820. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:31:33,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:31:33,419][46990] Updated weights for policy 0, policy_version 33000 (0.0032) +[2024-06-10 21:31:33,454][46970] Signal inference workers to stop experience collection... (8050 times) +[2024-06-10 21:31:33,455][46970] Signal inference workers to resume experience collection... (8050 times) +[2024-06-10 21:31:33,499][46990] InferenceWorker_p0-w0: stopping experience collection (8050 times) +[2024-06-10 21:31:33,499][46990] InferenceWorker_p0-w0: resuming experience collection (8050 times) +[2024-06-10 21:31:36,148][46990] Updated weights for policy 0, policy_version 33010 (0.0042) +[2024-06-10 21:31:38,239][46753] Fps is (10 sec: 45874.7, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 540917760. Throughput: 0: 43212.0. Samples: 541007020. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:31:38,241][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:31:41,226][46990] Updated weights for policy 0, policy_version 33020 (0.0028) +[2024-06-10 21:31:43,239][46753] Fps is (10 sec: 49152.0, 60 sec: 42871.6, 300 sec: 43376.0). Total num frames: 541130752. Throughput: 0: 43327.9. Samples: 541265160. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:31:43,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:31:43,542][46990] Updated weights for policy 0, policy_version 33030 (0.0047) +[2024-06-10 21:31:48,240][46753] Fps is (10 sec: 37683.1, 60 sec: 43147.7, 300 sec: 43376.6). Total num frames: 541294592. Throughput: 0: 43338.6. Samples: 541396760. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:31:48,249][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:31:48,569][46990] Updated weights for policy 0, policy_version 33040 (0.0040) +[2024-06-10 21:31:51,270][46990] Updated weights for policy 0, policy_version 33050 (0.0039) +[2024-06-10 21:31:53,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 541556736. Throughput: 0: 43353.3. Samples: 541659280. Policy #0 lag: (min: 1.0, avg: 9.5, max: 24.0) +[2024-06-10 21:31:53,249][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:31:55,770][46990] Updated weights for policy 0, policy_version 33060 (0.0037) +[2024-06-10 21:31:58,239][46753] Fps is (10 sec: 47514.6, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 541769728. Throughput: 0: 43626.8. Samples: 541928320. Policy #0 lag: (min: 1.0, avg: 9.5, max: 24.0) +[2024-06-10 21:31:58,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:31:59,035][46990] Updated weights for policy 0, policy_version 33070 (0.0027) +[2024-06-10 21:32:03,239][46753] Fps is (10 sec: 39321.5, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 541949952. Throughput: 0: 43270.1. Samples: 542048780. Policy #0 lag: (min: 1.0, avg: 9.5, max: 24.0) +[2024-06-10 21:32:03,251][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:32:03,433][46990] Updated weights for policy 0, policy_version 33080 (0.0033) +[2024-06-10 21:32:06,413][46990] Updated weights for policy 0, policy_version 33090 (0.0039) +[2024-06-10 21:32:08,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.8, 300 sec: 43542.6). Total num frames: 542228480. Throughput: 0: 43559.6. Samples: 542315440. 
Policy #0 lag: (min: 1.0, avg: 9.5, max: 24.0) +[2024-06-10 21:32:08,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:32:11,162][46990] Updated weights for policy 0, policy_version 33100 (0.0039) +[2024-06-10 21:32:13,240][46753] Fps is (10 sec: 47513.2, 60 sec: 42871.3, 300 sec: 43375.9). Total num frames: 542425088. Throughput: 0: 43548.3. Samples: 542580000. Policy #0 lag: (min: 0.0, avg: 8.2, max: 22.0) +[2024-06-10 21:32:13,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:32:13,861][46990] Updated weights for policy 0, policy_version 33110 (0.0035) +[2024-06-10 21:32:18,239][46753] Fps is (10 sec: 37683.4, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 542605312. Throughput: 0: 43693.4. Samples: 542706020. Policy #0 lag: (min: 0.0, avg: 8.2, max: 22.0) +[2024-06-10 21:32:18,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:32:18,577][46990] Updated weights for policy 0, policy_version 33120 (0.0045) +[2024-06-10 21:32:21,412][46990] Updated weights for policy 0, policy_version 33130 (0.0033) +[2024-06-10 21:32:23,240][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 542867456. Throughput: 0: 43566.7. Samples: 542967520. Policy #0 lag: (min: 0.0, avg: 8.2, max: 22.0) +[2024-06-10 21:32:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:32:23,246][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000033134_542867456.pth... +[2024-06-10 21:32:23,312][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000032496_532414464.pth +[2024-06-10 21:32:25,867][46990] Updated weights for policy 0, policy_version 33140 (0.0041) +[2024-06-10 21:32:28,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 543064064. Throughput: 0: 43647.6. Samples: 543229300. Policy #0 lag: (min: 0.0, avg: 8.2, max: 22.0) +[2024-06-10 21:32:28,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:32:29,469][46990] Updated weights for policy 0, policy_version 33150 (0.0049) +[2024-06-10 21:32:33,239][46753] Fps is (10 sec: 39322.1, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 543260672. Throughput: 0: 43377.9. Samples: 543348760. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:32:33,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:32:33,437][46990] Updated weights for policy 0, policy_version 33160 (0.0028) +[2024-06-10 21:32:37,032][46990] Updated weights for policy 0, policy_version 33170 (0.0042) +[2024-06-10 21:32:38,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43417.6, 300 sec: 43487.7). Total num frames: 543522816. Throughput: 0: 43563.6. Samples: 543619640. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:32:38,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:32:41,094][46990] Updated weights for policy 0, policy_version 33180 (0.0035) +[2024-06-10 21:32:43,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43144.6, 300 sec: 43375.9). Total num frames: 543719424. Throughput: 0: 43422.2. Samples: 543882320. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:32:43,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:32:44,209][46990] Updated weights for policy 0, policy_version 33190 (0.0030) +[2024-06-10 21:32:48,148][46970] Signal inference workers to stop experience collection... 
(8100 times) +[2024-06-10 21:32:48,194][46990] InferenceWorker_p0-w0: stopping experience collection (8100 times) +[2024-06-10 21:32:48,200][46970] Signal inference workers to resume experience collection... (8100 times) +[2024-06-10 21:32:48,216][46990] InferenceWorker_p0-w0: resuming experience collection (8100 times) +[2024-06-10 21:32:48,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43963.9, 300 sec: 43542.6). Total num frames: 543932416. Throughput: 0: 43679.3. Samples: 544014340. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:32:48,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:32:48,339][46990] Updated weights for policy 0, policy_version 33200 (0.0032) +[2024-06-10 21:32:51,837][46990] Updated weights for policy 0, policy_version 33210 (0.0023) +[2024-06-10 21:32:53,240][46753] Fps is (10 sec: 47512.6, 60 sec: 43963.7, 300 sec: 43542.5). Total num frames: 544194560. Throughput: 0: 43522.9. Samples: 544273980. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:32:53,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:32:55,765][46990] Updated weights for policy 0, policy_version 33220 (0.0044) +[2024-06-10 21:32:58,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43144.4, 300 sec: 43431.5). Total num frames: 544358400. Throughput: 0: 43500.1. Samples: 544537500. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 21:32:58,242][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:32:59,689][46990] Updated weights for policy 0, policy_version 33230 (0.0032) +[2024-06-10 21:33:03,239][46753] Fps is (10 sec: 39322.4, 60 sec: 43963.8, 300 sec: 43487.0). Total num frames: 544587776. Throughput: 0: 43322.2. Samples: 544655520. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 21:33:03,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:33:03,428][46990] Updated weights for policy 0, policy_version 33240 (0.0027) +[2024-06-10 21:33:07,038][46990] Updated weights for policy 0, policy_version 33250 (0.0030) +[2024-06-10 21:33:08,239][46753] Fps is (10 sec: 47514.3, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 544833536. Throughput: 0: 43537.5. Samples: 544926700. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 21:33:08,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:33:11,280][46990] Updated weights for policy 0, policy_version 33260 (0.0033) +[2024-06-10 21:33:13,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 545013760. Throughput: 0: 43568.4. Samples: 545189880. Policy #0 lag: (min: 0.0, avg: 10.7, max: 20.0) +[2024-06-10 21:33:13,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:33:14,433][46990] Updated weights for policy 0, policy_version 33270 (0.0034) +[2024-06-10 21:33:18,239][46753] Fps is (10 sec: 39321.1, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 545226752. Throughput: 0: 43713.3. Samples: 545315860. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 21:33:18,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:33:18,554][46990] Updated weights for policy 0, policy_version 33280 (0.0047) +[2024-06-10 21:33:22,115][46990] Updated weights for policy 0, policy_version 33290 (0.0041) +[2024-06-10 21:33:23,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.8, 300 sec: 43487.0). Total num frames: 545488896. Throughput: 0: 43522.7. Samples: 545578160. 
Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 21:33:23,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:33:25,869][46990] Updated weights for policy 0, policy_version 33300 (0.0038) +[2024-06-10 21:33:28,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43144.4, 300 sec: 43487.0). Total num frames: 545652736. Throughput: 0: 43504.7. Samples: 545840040. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 21:33:28,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:33:29,789][46990] Updated weights for policy 0, policy_version 33310 (0.0034) +[2024-06-10 21:33:33,239][46753] Fps is (10 sec: 40959.7, 60 sec: 43963.7, 300 sec: 43542.6). Total num frames: 545898496. Throughput: 0: 43152.3. Samples: 545956200. Policy #0 lag: (min: 0.0, avg: 12.0, max: 23.0) +[2024-06-10 21:33:33,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:33:33,469][46990] Updated weights for policy 0, policy_version 33320 (0.0032) +[2024-06-10 21:33:37,143][46990] Updated weights for policy 0, policy_version 33330 (0.0026) +[2024-06-10 21:33:38,239][46753] Fps is (10 sec: 49152.3, 60 sec: 43690.6, 300 sec: 43542.6). Total num frames: 546144256. Throughput: 0: 43420.5. Samples: 546227900. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 21:33:38,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:33:41,330][46990] Updated weights for policy 0, policy_version 33340 (0.0040) +[2024-06-10 21:33:43,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43417.5, 300 sec: 43487.0). Total num frames: 546324480. Throughput: 0: 43413.7. Samples: 546491120. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 21:33:43,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:33:44,787][46990] Updated weights for policy 0, policy_version 33350 (0.0045) +[2024-06-10 21:33:48,240][46753] Fps is (10 sec: 39321.2, 60 sec: 43417.4, 300 sec: 43487.0). Total num frames: 546537472. Throughput: 0: 43572.3. Samples: 546616280. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 21:33:48,240][46753] Avg episode reward: [(0, '0.309')] +[2024-06-10 21:33:48,244][46970] Saving new best policy, reward=0.309! +[2024-06-10 21:33:48,643][46990] Updated weights for policy 0, policy_version 33360 (0.0042) +[2024-06-10 21:33:52,222][46990] Updated weights for policy 0, policy_version 33370 (0.0038) +[2024-06-10 21:33:53,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 546783232. Throughput: 0: 43516.7. Samples: 546884960. Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 21:33:53,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:33:55,742][46970] Signal inference workers to stop experience collection... (8150 times) +[2024-06-10 21:33:55,742][46970] Signal inference workers to resume experience collection... (8150 times) +[2024-06-10 21:33:55,791][46990] InferenceWorker_p0-w0: stopping experience collection (8150 times) +[2024-06-10 21:33:55,791][46990] InferenceWorker_p0-w0: resuming experience collection (8150 times) +[2024-06-10 21:33:55,875][46990] Updated weights for policy 0, policy_version 33380 (0.0035) +[2024-06-10 21:33:58,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 546947072. Throughput: 0: 43395.1. Samples: 547142660. 
Policy #0 lag: (min: 0.0, avg: 10.6, max: 20.0) +[2024-06-10 21:33:58,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:33:59,939][46990] Updated weights for policy 0, policy_version 33390 (0.0034) +[2024-06-10 21:34:03,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 547192832. Throughput: 0: 43332.0. Samples: 547265800. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:34:03,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:34:03,508][46990] Updated weights for policy 0, policy_version 33400 (0.0035) +[2024-06-10 21:34:07,132][46990] Updated weights for policy 0, policy_version 33410 (0.0026) +[2024-06-10 21:34:08,239][46753] Fps is (10 sec: 49152.1, 60 sec: 43417.5, 300 sec: 43487.1). Total num frames: 547438592. Throughput: 0: 43412.0. Samples: 547531700. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:34:08,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:34:11,296][46990] Updated weights for policy 0, policy_version 33420 (0.0031) +[2024-06-10 21:34:13,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 547618816. Throughput: 0: 43551.3. Samples: 547799840. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:34:13,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:34:14,840][46990] Updated weights for policy 0, policy_version 33430 (0.0033) +[2024-06-10 21:34:18,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 547831808. Throughput: 0: 43581.4. Samples: 547917360. Policy #0 lag: (min: 0.0, avg: 8.8, max: 21.0) +[2024-06-10 21:34:18,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:34:19,168][46990] Updated weights for policy 0, policy_version 33440 (0.0039) +[2024-06-10 21:34:22,518][46990] Updated weights for policy 0, policy_version 33450 (0.0029) +[2024-06-10 21:34:23,244][46753] Fps is (10 sec: 47491.7, 60 sec: 43414.3, 300 sec: 43430.8). Total num frames: 548093952. Throughput: 0: 43467.2. Samples: 548184120. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 21:34:23,245][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:34:23,255][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000033453_548093952.pth... +[2024-06-10 21:34:23,309][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000032814_537624576.pth +[2024-06-10 21:34:26,471][46990] Updated weights for policy 0, policy_version 33460 (0.0038) +[2024-06-10 21:34:28,244][46753] Fps is (10 sec: 40941.6, 60 sec: 43141.4, 300 sec: 43430.8). Total num frames: 548241408. Throughput: 0: 43476.3. Samples: 548447740. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 21:34:28,244][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:34:29,872][46990] Updated weights for policy 0, policy_version 33470 (0.0038) +[2024-06-10 21:34:33,239][46753] Fps is (10 sec: 42617.6, 60 sec: 43690.6, 300 sec: 43598.1). Total num frames: 548519936. Throughput: 0: 43360.1. Samples: 548567480. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 21:34:33,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:34:33,934][46990] Updated weights for policy 0, policy_version 33480 (0.0032) +[2024-06-10 21:34:37,449][46990] Updated weights for policy 0, policy_version 33490 (0.0030) +[2024-06-10 21:34:38,239][46753] Fps is (10 sec: 49173.6, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 548732928. 
Throughput: 0: 43234.7. Samples: 548830520. Policy #0 lag: (min: 0.0, avg: 8.1, max: 20.0) +[2024-06-10 21:34:38,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:34:41,793][46990] Updated weights for policy 0, policy_version 33500 (0.0039) +[2024-06-10 21:34:43,239][46753] Fps is (10 sec: 39322.1, 60 sec: 43144.7, 300 sec: 43375.9). Total num frames: 548913152. Throughput: 0: 43585.0. Samples: 549103980. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 21:34:43,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:34:44,954][46990] Updated weights for policy 0, policy_version 33510 (0.0050) +[2024-06-10 21:34:48,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 549158912. Throughput: 0: 43439.1. Samples: 549220560. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 21:34:48,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:34:49,406][46990] Updated weights for policy 0, policy_version 33520 (0.0035) +[2024-06-10 21:34:52,657][46990] Updated weights for policy 0, policy_version 33530 (0.0046) +[2024-06-10 21:34:53,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43690.7, 300 sec: 43432.1). Total num frames: 549404672. Throughput: 0: 43507.1. Samples: 549489520. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 21:34:53,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:34:56,640][46990] Updated weights for policy 0, policy_version 33540 (0.0047) +[2024-06-10 21:34:58,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43417.6, 300 sec: 43375.9). Total num frames: 549552128. Throughput: 0: 43365.7. Samples: 549751300. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 21:34:58,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:34:59,894][46990] Updated weights for policy 0, policy_version 33550 (0.0021) +[2024-06-10 21:35:03,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 549814272. Throughput: 0: 43493.8. Samples: 549874580. Policy #0 lag: (min: 0.0, avg: 8.9, max: 20.0) +[2024-06-10 21:35:03,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:35:04,354][46990] Updated weights for policy 0, policy_version 33560 (0.0039) +[2024-06-10 21:35:05,982][46970] Signal inference workers to stop experience collection... (8200 times) +[2024-06-10 21:35:06,005][46990] InferenceWorker_p0-w0: stopping experience collection (8200 times) +[2024-06-10 21:35:06,040][46970] Signal inference workers to resume experience collection... (8200 times) +[2024-06-10 21:35:06,040][46990] InferenceWorker_p0-w0: resuming experience collection (8200 times) +[2024-06-10 21:35:07,437][46990] Updated weights for policy 0, policy_version 33570 (0.0032) +[2024-06-10 21:35:08,239][46753] Fps is (10 sec: 49152.2, 60 sec: 43417.6, 300 sec: 43376.0). Total num frames: 550043648. Throughput: 0: 43448.0. Samples: 550139080. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:35:08,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:35:12,274][46990] Updated weights for policy 0, policy_version 33580 (0.0030) +[2024-06-10 21:35:13,239][46753] Fps is (10 sec: 37683.0, 60 sec: 42871.4, 300 sec: 43264.9). Total num frames: 550191104. Throughput: 0: 43601.2. Samples: 550409600. 
Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:35:13,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:35:15,166][46990] Updated weights for policy 0, policy_version 33590 (0.0033) +[2024-06-10 21:35:18,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 550469632. Throughput: 0: 43520.9. Samples: 550525920. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:35:18,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:35:19,727][46990] Updated weights for policy 0, policy_version 33600 (0.0028) +[2024-06-10 21:35:22,806][46990] Updated weights for policy 0, policy_version 33610 (0.0040) +[2024-06-10 21:35:23,239][46753] Fps is (10 sec: 50790.7, 60 sec: 43420.9, 300 sec: 43431.5). Total num frames: 550699008. Throughput: 0: 43695.7. Samples: 550796820. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:35:23,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:35:26,910][46990] Updated weights for policy 0, policy_version 33620 (0.0048) +[2024-06-10 21:35:28,239][46753] Fps is (10 sec: 37683.1, 60 sec: 43420.8, 300 sec: 43376.0). Total num frames: 550846464. Throughput: 0: 43494.5. Samples: 551061240. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 21:35:28,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:35:30,038][46990] Updated weights for policy 0, policy_version 33630 (0.0026) +[2024-06-10 21:35:33,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43690.7, 300 sec: 43598.1). Total num frames: 551141376. Throughput: 0: 43675.6. Samples: 551185960. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 21:35:33,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:35:34,471][46990] Updated weights for policy 0, policy_version 33640 (0.0028) +[2024-06-10 21:35:37,401][46990] Updated weights for policy 0, policy_version 33650 (0.0028) +[2024-06-10 21:35:38,239][46753] Fps is (10 sec: 49152.1, 60 sec: 43417.6, 300 sec: 43320.4). Total num frames: 551337984. Throughput: 0: 43677.7. Samples: 551455020. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 21:35:38,242][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:35:42,547][46990] Updated weights for policy 0, policy_version 33660 (0.0026) +[2024-06-10 21:35:43,239][46753] Fps is (10 sec: 36044.8, 60 sec: 43144.5, 300 sec: 43376.6). Total num frames: 551501824. Throughput: 0: 43624.9. Samples: 551714420. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 21:35:43,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:35:45,362][46990] Updated weights for policy 0, policy_version 33670 (0.0033) +[2024-06-10 21:35:48,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43963.8, 300 sec: 43598.1). Total num frames: 551796736. Throughput: 0: 43590.7. Samples: 551836160. Policy #0 lag: (min: 0.0, avg: 10.6, max: 23.0) +[2024-06-10 21:35:48,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:35:49,737][46990] Updated weights for policy 0, policy_version 33680 (0.0030) +[2024-06-10 21:35:52,940][46990] Updated weights for policy 0, policy_version 33690 (0.0037) +[2024-06-10 21:35:53,239][46753] Fps is (10 sec: 47513.9, 60 sec: 42871.5, 300 sec: 43376.0). Total num frames: 551976960. Throughput: 0: 43695.2. Samples: 552105360. 
Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:35:53,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:35:56,968][46990] Updated weights for policy 0, policy_version 33700 (0.0035) +[2024-06-10 21:35:58,239][46753] Fps is (10 sec: 36044.7, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 552157184. Throughput: 0: 43478.2. Samples: 552366120. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:35:58,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:36:00,441][46990] Updated weights for policy 0, policy_version 33710 (0.0033) +[2024-06-10 21:36:03,239][46753] Fps is (10 sec: 47512.9, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 552452096. Throughput: 0: 43616.4. Samples: 552488660. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:36:03,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:36:04,631][46990] Updated weights for policy 0, policy_version 33720 (0.0039) +[2024-06-10 21:36:07,598][46970] Signal inference workers to stop experience collection... (8250 times) +[2024-06-10 21:36:07,598][46970] Signal inference workers to resume experience collection... (8250 times) +[2024-06-10 21:36:07,646][46990] InferenceWorker_p0-w0: stopping experience collection (8250 times) +[2024-06-10 21:36:07,646][46990] InferenceWorker_p0-w0: resuming experience collection (8250 times) +[2024-06-10 21:36:07,729][46990] Updated weights for policy 0, policy_version 33730 (0.0034) +[2024-06-10 21:36:08,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43144.6, 300 sec: 43320.4). Total num frames: 552632320. Throughput: 0: 43615.6. Samples: 552759520. Policy #0 lag: (min: 0.0, avg: 9.7, max: 21.0) +[2024-06-10 21:36:08,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:36:12,588][46990] Updated weights for policy 0, policy_version 33740 (0.0043) +[2024-06-10 21:36:13,239][46753] Fps is (10 sec: 36044.9, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 552812544. Throughput: 0: 43458.7. Samples: 553016880. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 21:36:13,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:36:15,602][46990] Updated weights for policy 0, policy_version 33750 (0.0034) +[2024-06-10 21:36:18,239][46753] Fps is (10 sec: 47513.1, 60 sec: 43963.7, 300 sec: 43598.1). Total num frames: 553107456. Throughput: 0: 43503.0. Samples: 553143600. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 21:36:18,242][46753] Avg episode reward: [(0, '0.306')] +[2024-06-10 21:36:19,781][46990] Updated weights for policy 0, policy_version 33760 (0.0031) +[2024-06-10 21:36:23,239][46753] Fps is (10 sec: 45875.7, 60 sec: 42871.5, 300 sec: 43431.5). Total num frames: 553271296. Throughput: 0: 43387.7. Samples: 553407460. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 21:36:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:36:23,282][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000033770_553287680.pth... +[2024-06-10 21:36:23,290][46990] Updated weights for policy 0, policy_version 33770 (0.0026) +[2024-06-10 21:36:23,334][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000033134_542867456.pth +[2024-06-10 21:36:27,028][46990] Updated weights for policy 0, policy_version 33780 (0.0054) +[2024-06-10 21:36:28,240][46753] Fps is (10 sec: 36044.5, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 553467904. Throughput: 0: 43419.0. Samples: 553668280. 
Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 21:36:28,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:36:30,874][46990] Updated weights for policy 0, policy_version 33790 (0.0029) +[2024-06-10 21:36:33,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 553746432. Throughput: 0: 43536.8. Samples: 553795320. Policy #0 lag: (min: 0.0, avg: 10.2, max: 21.0) +[2024-06-10 21:36:33,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:36:34,676][46990] Updated weights for policy 0, policy_version 33800 (0.0041) +[2024-06-10 21:36:38,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43144.6, 300 sec: 43375.9). Total num frames: 553926656. Throughput: 0: 43506.1. Samples: 554063140. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:36:38,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:36:38,287][46990] Updated weights for policy 0, policy_version 33810 (0.0039) +[2024-06-10 21:36:42,401][46990] Updated weights for policy 0, policy_version 33820 (0.0035) +[2024-06-10 21:36:43,239][46753] Fps is (10 sec: 37683.2, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 554123264. Throughput: 0: 43412.0. Samples: 554319660. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:36:43,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:36:45,864][46990] Updated weights for policy 0, policy_version 33830 (0.0041) +[2024-06-10 21:36:48,240][46753] Fps is (10 sec: 47512.9, 60 sec: 43417.5, 300 sec: 43542.5). Total num frames: 554401792. Throughput: 0: 43462.6. Samples: 554444480. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:36:48,243][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:36:49,932][46990] Updated weights for policy 0, policy_version 33840 (0.0028) +[2024-06-10 21:36:53,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 554582016. Throughput: 0: 43315.0. Samples: 554708700. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:36:53,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:36:53,361][46990] Updated weights for policy 0, policy_version 33850 (0.0023) +[2024-06-10 21:36:57,190][46990] Updated weights for policy 0, policy_version 33860 (0.0034) +[2024-06-10 21:36:58,239][46753] Fps is (10 sec: 37683.8, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 554778624. Throughput: 0: 43293.8. Samples: 554965100. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:36:58,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:37:01,220][46990] Updated weights for policy 0, policy_version 33870 (0.0026) +[2024-06-10 21:37:03,239][46753] Fps is (10 sec: 47514.0, 60 sec: 43417.7, 300 sec: 43487.0). Total num frames: 555057152. Throughput: 0: 43426.3. Samples: 555097780. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:37:03,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:37:05,017][46990] Updated weights for policy 0, policy_version 33880 (0.0033) +[2024-06-10 21:37:08,239][46753] Fps is (10 sec: 42598.5, 60 sec: 42871.5, 300 sec: 43320.4). Total num frames: 555204608. Throughput: 0: 43339.1. Samples: 555357720. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:37:08,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:37:08,686][46990] Updated weights for policy 0, policy_version 33890 (0.0032) +[2024-06-10 21:37:08,783][46970] Signal inference workers to stop experience collection... 
(8300 times) +[2024-06-10 21:37:08,827][46990] InferenceWorker_p0-w0: stopping experience collection (8300 times) +[2024-06-10 21:37:08,832][46970] Signal inference workers to resume experience collection... (8300 times) +[2024-06-10 21:37:08,840][46990] InferenceWorker_p0-w0: resuming experience collection (8300 times) +[2024-06-10 21:37:12,169][46990] Updated weights for policy 0, policy_version 33900 (0.0027) +[2024-06-10 21:37:13,240][46753] Fps is (10 sec: 37681.4, 60 sec: 43690.4, 300 sec: 43486.9). Total num frames: 555433984. Throughput: 0: 43361.0. Samples: 555619540. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:37:13,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:37:15,938][46990] Updated weights for policy 0, policy_version 33910 (0.0030) +[2024-06-10 21:37:18,239][46753] Fps is (10 sec: 50790.4, 60 sec: 43417.7, 300 sec: 43542.6). Total num frames: 555712512. Throughput: 0: 43380.5. Samples: 555747440. Policy #0 lag: (min: 0.0, avg: 12.9, max: 22.0) +[2024-06-10 21:37:18,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:37:19,939][46990] Updated weights for policy 0, policy_version 33920 (0.0029) +[2024-06-10 21:37:23,239][46753] Fps is (10 sec: 44238.9, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 555876352. Throughput: 0: 43282.3. Samples: 556010840. Policy #0 lag: (min: 0.0, avg: 13.1, max: 23.0) +[2024-06-10 21:37:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:37:23,524][46990] Updated weights for policy 0, policy_version 33930 (0.0041) +[2024-06-10 21:37:27,332][46990] Updated weights for policy 0, policy_version 33940 (0.0027) +[2024-06-10 21:37:28,240][46753] Fps is (10 sec: 37682.6, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 556089344. Throughput: 0: 43295.0. Samples: 556267940. Policy #0 lag: (min: 0.0, avg: 13.1, max: 23.0) +[2024-06-10 21:37:28,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:37:31,258][46990] Updated weights for policy 0, policy_version 33950 (0.0026) +[2024-06-10 21:37:33,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 556351488. Throughput: 0: 43537.5. Samples: 556403660. Policy #0 lag: (min: 0.0, avg: 13.1, max: 23.0) +[2024-06-10 21:37:33,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:37:35,059][46990] Updated weights for policy 0, policy_version 33960 (0.0038) +[2024-06-10 21:37:38,239][46753] Fps is (10 sec: 42599.1, 60 sec: 43144.6, 300 sec: 43375.9). Total num frames: 556515328. Throughput: 0: 43345.4. Samples: 556659240. Policy #0 lag: (min: 0.0, avg: 13.1, max: 23.0) +[2024-06-10 21:37:38,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:37:38,792][46990] Updated weights for policy 0, policy_version 33970 (0.0042) +[2024-06-10 21:37:42,341][46990] Updated weights for policy 0, policy_version 33980 (0.0037) +[2024-06-10 21:37:43,240][46753] Fps is (10 sec: 39321.0, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 556744704. Throughput: 0: 43382.5. Samples: 556917320. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 21:37:43,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:37:46,208][46990] Updated weights for policy 0, policy_version 33990 (0.0028) +[2024-06-10 21:37:48,239][46753] Fps is (10 sec: 49151.5, 60 sec: 43417.7, 300 sec: 43431.5). Total num frames: 557006848. Throughput: 0: 43396.4. Samples: 557050620. 
Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 21:37:48,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:37:49,943][46990] Updated weights for policy 0, policy_version 34000 (0.0029) +[2024-06-10 21:37:53,239][46753] Fps is (10 sec: 44237.3, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 557187072. Throughput: 0: 43338.2. Samples: 557307940. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 21:37:53,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:37:53,795][46990] Updated weights for policy 0, policy_version 34010 (0.0037) +[2024-06-10 21:37:57,714][46990] Updated weights for policy 0, policy_version 34020 (0.0040) +[2024-06-10 21:37:58,239][46753] Fps is (10 sec: 39322.1, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 557400064. Throughput: 0: 43244.5. Samples: 557565520. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 21:37:58,240][46753] Avg episode reward: [(0, '0.283')] +[2024-06-10 21:38:01,405][46990] Updated weights for policy 0, policy_version 34030 (0.0043) +[2024-06-10 21:38:03,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 557645824. Throughput: 0: 43385.3. Samples: 557699780. Policy #0 lag: (min: 0.0, avg: 12.7, max: 22.0) +[2024-06-10 21:38:03,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:38:05,239][46990] Updated weights for policy 0, policy_version 34040 (0.0026) +[2024-06-10 21:38:08,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43417.6, 300 sec: 43376.0). Total num frames: 557809664. Throughput: 0: 43234.7. Samples: 557956400. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 21:38:08,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:38:09,066][46990] Updated weights for policy 0, policy_version 34050 (0.0033) +[2024-06-10 21:38:12,471][46990] Updated weights for policy 0, policy_version 34060 (0.0027) +[2024-06-10 21:38:13,240][46753] Fps is (10 sec: 40959.7, 60 sec: 43690.9, 300 sec: 43487.0). Total num frames: 558055424. Throughput: 0: 43360.5. Samples: 558219160. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 21:38:13,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:38:16,308][46990] Updated weights for policy 0, policy_version 34070 (0.0028) +[2024-06-10 21:38:18,239][46753] Fps is (10 sec: 49151.4, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 558301184. Throughput: 0: 43361.7. Samples: 558354940. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 21:38:18,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:38:20,015][46990] Updated weights for policy 0, policy_version 34080 (0.0030) +[2024-06-10 21:38:23,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 558481408. Throughput: 0: 43342.2. Samples: 558609640. Policy #0 lag: (min: 0.0, avg: 11.3, max: 22.0) +[2024-06-10 21:38:23,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:38:23,309][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000034088_558497792.pth... +[2024-06-10 21:38:23,355][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000033453_548093952.pth +[2024-06-10 21:38:23,747][46990] Updated weights for policy 0, policy_version 34090 (0.0040) +[2024-06-10 21:38:27,907][46990] Updated weights for policy 0, policy_version 34100 (0.0037) +[2024-06-10 21:38:28,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 558710784. 
Throughput: 0: 43343.6. Samples: 558867780. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:38:28,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:38:31,463][46970] Signal inference workers to stop experience collection... (8350 times) +[2024-06-10 21:38:31,464][46970] Signal inference workers to resume experience collection... (8350 times) +[2024-06-10 21:38:31,511][46990] InferenceWorker_p0-w0: stopping experience collection (8350 times) +[2024-06-10 21:38:31,511][46990] InferenceWorker_p0-w0: resuming experience collection (8350 times) +[2024-06-10 21:38:31,595][46990] Updated weights for policy 0, policy_version 34110 (0.0043) +[2024-06-10 21:38:33,240][46753] Fps is (10 sec: 45874.5, 60 sec: 43144.4, 300 sec: 43375.9). Total num frames: 558940160. Throughput: 0: 43314.6. Samples: 558999780. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:38:33,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:38:35,466][46990] Updated weights for policy 0, policy_version 34120 (0.0037) +[2024-06-10 21:38:38,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43417.6, 300 sec: 43376.0). Total num frames: 559120384. Throughput: 0: 43314.3. Samples: 559257080. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:38:38,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:38:39,300][46990] Updated weights for policy 0, policy_version 34130 (0.0032) +[2024-06-10 21:38:42,943][46990] Updated weights for policy 0, policy_version 34140 (0.0046) +[2024-06-10 21:38:43,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43417.7, 300 sec: 43431.5). Total num frames: 559349760. Throughput: 0: 43432.4. Samples: 559519980. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:38:43,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:38:46,569][46990] Updated weights for policy 0, policy_version 34150 (0.0028) +[2024-06-10 21:38:48,239][46753] Fps is (10 sec: 47513.1, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 559595520. Throughput: 0: 43412.4. Samples: 559653340. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:38:48,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:38:50,135][46990] Updated weights for policy 0, policy_version 34160 (0.0030) +[2024-06-10 21:38:53,241][46753] Fps is (10 sec: 42592.9, 60 sec: 43143.6, 300 sec: 43486.8). Total num frames: 559775744. Throughput: 0: 43320.0. Samples: 559905860. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 21:38:53,250][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:38:53,821][46990] Updated weights for policy 0, policy_version 34170 (0.0030) +[2024-06-10 21:38:57,697][46990] Updated weights for policy 0, policy_version 34180 (0.0039) +[2024-06-10 21:38:58,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 560005120. Throughput: 0: 43280.9. Samples: 560166800. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 21:38:58,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:39:01,693][46990] Updated weights for policy 0, policy_version 34190 (0.0045) +[2024-06-10 21:39:03,239][46753] Fps is (10 sec: 45881.3, 60 sec: 43144.6, 300 sec: 43376.0). Total num frames: 560234496. Throughput: 0: 43387.6. Samples: 560307380. 
Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 21:39:03,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:39:05,208][46990] Updated weights for policy 0, policy_version 34200 (0.0041) +[2024-06-10 21:39:08,240][46753] Fps is (10 sec: 42598.1, 60 sec: 43690.5, 300 sec: 43431.5). Total num frames: 560431104. Throughput: 0: 43434.9. Samples: 560564220. Policy #0 lag: (min: 0.0, avg: 10.8, max: 21.0) +[2024-06-10 21:39:08,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:39:09,244][46990] Updated weights for policy 0, policy_version 34210 (0.0041) +[2024-06-10 21:39:12,455][46990] Updated weights for policy 0, policy_version 34220 (0.0026) +[2024-06-10 21:39:13,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43690.7, 300 sec: 43542.5). Total num frames: 560676864. Throughput: 0: 43597.3. Samples: 560829660. Policy #0 lag: (min: 1.0, avg: 10.4, max: 21.0) +[2024-06-10 21:39:13,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:39:16,766][46990] Updated weights for policy 0, policy_version 34230 (0.0036) +[2024-06-10 21:39:18,240][46753] Fps is (10 sec: 47513.6, 60 sec: 43417.5, 300 sec: 43432.1). Total num frames: 560906240. Throughput: 0: 43566.2. Samples: 560960260. Policy #0 lag: (min: 1.0, avg: 10.4, max: 21.0) +[2024-06-10 21:39:18,240][46753] Avg episode reward: [(0, '0.280')] +[2024-06-10 21:39:20,225][46990] Updated weights for policy 0, policy_version 34240 (0.0030) +[2024-06-10 21:39:23,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43690.7, 300 sec: 43598.8). Total num frames: 561102848. Throughput: 0: 43560.9. Samples: 561217320. Policy #0 lag: (min: 1.0, avg: 10.4, max: 21.0) +[2024-06-10 21:39:23,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:39:24,095][46990] Updated weights for policy 0, policy_version 34250 (0.0031) +[2024-06-10 21:39:27,798][46990] Updated weights for policy 0, policy_version 34260 (0.0037) +[2024-06-10 21:39:28,239][46753] Fps is (10 sec: 40960.9, 60 sec: 43417.7, 300 sec: 43376.0). Total num frames: 561315840. Throughput: 0: 43507.6. Samples: 561477820. Policy #0 lag: (min: 1.0, avg: 10.4, max: 21.0) +[2024-06-10 21:39:28,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:39:31,602][46990] Updated weights for policy 0, policy_version 34270 (0.0029) +[2024-06-10 21:39:33,244][46753] Fps is (10 sec: 44216.5, 60 sec: 43414.4, 300 sec: 43430.8). Total num frames: 561545216. Throughput: 0: 43552.1. Samples: 561613380. Policy #0 lag: (min: 1.0, avg: 10.4, max: 21.0) +[2024-06-10 21:39:33,245][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:39:35,296][46990] Updated weights for policy 0, policy_version 34280 (0.0047) +[2024-06-10 21:39:38,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43963.8, 300 sec: 43542.6). Total num frames: 561758208. Throughput: 0: 43691.2. Samples: 561871900. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 21:39:38,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:39:39,236][46990] Updated weights for policy 0, policy_version 34290 (0.0029) +[2024-06-10 21:39:42,660][46990] Updated weights for policy 0, policy_version 34300 (0.0042) +[2024-06-10 21:39:43,239][46753] Fps is (10 sec: 42617.8, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 561971200. Throughput: 0: 43757.9. Samples: 562135900. 
Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 21:39:43,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:39:46,734][46990] Updated weights for policy 0, policy_version 34310 (0.0028) +[2024-06-10 21:39:48,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43144.6, 300 sec: 43320.4). Total num frames: 562184192. Throughput: 0: 43566.6. Samples: 562267880. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 21:39:48,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:39:50,111][46990] Updated weights for policy 0, policy_version 34320 (0.0044) +[2024-06-10 21:39:53,239][46753] Fps is (10 sec: 45875.0, 60 sec: 44237.7, 300 sec: 43653.6). Total num frames: 562429952. Throughput: 0: 43620.1. Samples: 562527120. Policy #0 lag: (min: 0.0, avg: 11.0, max: 22.0) +[2024-06-10 21:39:53,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:39:54,386][46990] Updated weights for policy 0, policy_version 34330 (0.0034) +[2024-06-10 21:39:57,697][46990] Updated weights for policy 0, policy_version 34340 (0.0034) +[2024-06-10 21:39:58,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43963.7, 300 sec: 43487.0). Total num frames: 562642944. Throughput: 0: 43488.0. Samples: 562786620. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:39:58,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:40:01,610][46970] Signal inference workers to stop experience collection... (8400 times) +[2024-06-10 21:40:01,610][46970] Signal inference workers to resume experience collection... (8400 times) +[2024-06-10 21:40:01,624][46990] InferenceWorker_p0-w0: stopping experience collection (8400 times) +[2024-06-10 21:40:01,624][46990] InferenceWorker_p0-w0: resuming experience collection (8400 times) +[2024-06-10 21:40:01,763][46990] Updated weights for policy 0, policy_version 34350 (0.0038) +[2024-06-10 21:40:03,240][46753] Fps is (10 sec: 40959.7, 60 sec: 43417.5, 300 sec: 43375.9). Total num frames: 562839552. Throughput: 0: 43522.7. Samples: 562918780. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:40:03,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:40:05,135][46990] Updated weights for policy 0, policy_version 34360 (0.0028) +[2024-06-10 21:40:08,244][46753] Fps is (10 sec: 44217.1, 60 sec: 44233.6, 300 sec: 43708.5). Total num frames: 563085312. Throughput: 0: 43663.6. Samples: 563182380. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:40:08,244][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:40:09,359][46990] Updated weights for policy 0, policy_version 34370 (0.0033) +[2024-06-10 21:40:12,721][46990] Updated weights for policy 0, policy_version 34380 (0.0037) +[2024-06-10 21:40:13,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43417.7, 300 sec: 43431.5). Total num frames: 563281920. Throughput: 0: 43681.7. Samples: 563443500. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:40:13,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:40:16,673][46990] Updated weights for policy 0, policy_version 34390 (0.0050) +[2024-06-10 21:40:18,239][46753] Fps is (10 sec: 39339.3, 60 sec: 42871.6, 300 sec: 43320.4). Total num frames: 563478528. Throughput: 0: 43554.2. Samples: 563573120. 
Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 21:40:18,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:40:20,264][46990] Updated weights for policy 0, policy_version 34400 (0.0037) +[2024-06-10 21:40:23,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43690.6, 300 sec: 43653.6). Total num frames: 563724288. Throughput: 0: 43567.8. Samples: 563832460. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 21:40:23,240][46753] Avg episode reward: [(0, '0.305')] +[2024-06-10 21:40:23,357][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000034408_563740672.pth... +[2024-06-10 21:40:23,401][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000033770_553287680.pth +[2024-06-10 21:40:24,572][46990] Updated weights for policy 0, policy_version 34410 (0.0041) +[2024-06-10 21:40:27,967][46990] Updated weights for policy 0, policy_version 34420 (0.0039) +[2024-06-10 21:40:28,244][46753] Fps is (10 sec: 45854.7, 60 sec: 43687.3, 300 sec: 43375.3). Total num frames: 563937280. Throughput: 0: 43493.4. Samples: 564093300. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 21:40:28,244][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:40:32,317][46990] Updated weights for policy 0, policy_version 34430 (0.0040) +[2024-06-10 21:40:33,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43147.8, 300 sec: 43376.0). Total num frames: 564133888. Throughput: 0: 43584.4. Samples: 564229180. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 21:40:33,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:40:35,527][46990] Updated weights for policy 0, policy_version 34440 (0.0044) +[2024-06-10 21:40:38,239][46753] Fps is (10 sec: 42617.8, 60 sec: 43417.5, 300 sec: 43598.1). Total num frames: 564363264. Throughput: 0: 43543.2. Samples: 564486560. Policy #0 lag: (min: 0.0, avg: 9.3, max: 21.0) +[2024-06-10 21:40:38,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:40:39,536][46990] Updated weights for policy 0, policy_version 34450 (0.0032) +[2024-06-10 21:40:43,004][46990] Updated weights for policy 0, policy_version 34460 (0.0030) +[2024-06-10 21:40:43,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43690.6, 300 sec: 43375.9). Total num frames: 564592640. Throughput: 0: 43609.8. Samples: 564749060. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 21:40:43,249][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:40:46,767][46990] Updated weights for policy 0, policy_version 34470 (0.0035) +[2024-06-10 21:40:48,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43144.5, 300 sec: 43375.9). Total num frames: 564772864. Throughput: 0: 43595.6. Samples: 564880580. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 21:40:48,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:40:50,630][46990] Updated weights for policy 0, policy_version 34480 (0.0036) +[2024-06-10 21:40:53,240][46753] Fps is (10 sec: 44236.2, 60 sec: 43417.5, 300 sec: 43653.6). Total num frames: 565035008. Throughput: 0: 43366.0. Samples: 565133660. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 21:40:53,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:40:54,510][46990] Updated weights for policy 0, policy_version 34490 (0.0037) +[2024-06-10 21:40:58,091][46990] Updated weights for policy 0, policy_version 34500 (0.0038) +[2024-06-10 21:40:58,240][46753] Fps is (10 sec: 47513.4, 60 sec: 43417.5, 300 sec: 43375.9). Total num frames: 565248000. Throughput: 0: 43513.2. 
Samples: 565401600. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 21:40:58,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:41:01,965][46990] Updated weights for policy 0, policy_version 34510 (0.0042) +[2024-06-10 21:41:03,239][46753] Fps is (10 sec: 37683.8, 60 sec: 42871.5, 300 sec: 43320.4). Total num frames: 565411840. Throughput: 0: 43532.4. Samples: 565532080. Policy #0 lag: (min: 0.0, avg: 8.3, max: 20.0) +[2024-06-10 21:41:03,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:41:04,949][46970] Signal inference workers to stop experience collection... (8450 times) +[2024-06-10 21:41:04,949][46970] Signal inference workers to resume experience collection... (8450 times) +[2024-06-10 21:41:04,985][46990] InferenceWorker_p0-w0: stopping experience collection (8450 times) +[2024-06-10 21:41:04,985][46990] InferenceWorker_p0-w0: resuming experience collection (8450 times) +[2024-06-10 21:41:05,702][46990] Updated weights for policy 0, policy_version 34520 (0.0028) +[2024-06-10 21:41:08,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43420.9, 300 sec: 43653.6). Total num frames: 565690368. Throughput: 0: 43473.8. Samples: 565788780. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 21:41:08,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:41:09,207][46990] Updated weights for policy 0, policy_version 34530 (0.0027) +[2024-06-10 21:41:13,140][46990] Updated weights for policy 0, policy_version 34540 (0.0055) +[2024-06-10 21:41:13,240][46753] Fps is (10 sec: 49151.1, 60 sec: 43690.5, 300 sec: 43375.9). Total num frames: 565903360. Throughput: 0: 43569.5. Samples: 566053740. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 21:41:13,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:41:16,546][46990] Updated weights for policy 0, policy_version 34550 (0.0040) +[2024-06-10 21:41:18,239][46753] Fps is (10 sec: 39321.3, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 566083584. Throughput: 0: 43432.8. Samples: 566183660. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 21:41:18,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:41:20,731][46990] Updated weights for policy 0, policy_version 34560 (0.0028) +[2024-06-10 21:41:23,239][46753] Fps is (10 sec: 44238.1, 60 sec: 43690.8, 300 sec: 43653.7). Total num frames: 566345728. Throughput: 0: 43412.9. Samples: 566440140. Policy #0 lag: (min: 0.0, avg: 8.4, max: 20.0) +[2024-06-10 21:41:23,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:41:24,282][46990] Updated weights for policy 0, policy_version 34570 (0.0042) +[2024-06-10 21:41:28,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43420.9, 300 sec: 43376.0). Total num frames: 566542336. Throughput: 0: 43474.3. Samples: 566705400. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:41:28,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:41:28,267][46990] Updated weights for policy 0, policy_version 34580 (0.0031) +[2024-06-10 21:41:32,017][46990] Updated weights for policy 0, policy_version 34590 (0.0037) +[2024-06-10 21:41:33,240][46753] Fps is (10 sec: 37682.5, 60 sec: 43144.5, 300 sec: 43375.9). Total num frames: 566722560. Throughput: 0: 43434.7. Samples: 566835140. 
Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:41:33,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:41:35,564][46990] Updated weights for policy 0, policy_version 34600 (0.0036) +[2024-06-10 21:41:38,239][46753] Fps is (10 sec: 45874.6, 60 sec: 43963.6, 300 sec: 43653.6). Total num frames: 567001088. Throughput: 0: 43632.5. Samples: 567097120. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:41:38,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:41:39,227][46990] Updated weights for policy 0, policy_version 34610 (0.0042) +[2024-06-10 21:41:43,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43417.6, 300 sec: 43376.0). Total num frames: 567197696. Throughput: 0: 43446.3. Samples: 567356680. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:41:43,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:41:43,475][46990] Updated weights for policy 0, policy_version 34620 (0.0028) +[2024-06-10 21:41:46,569][46990] Updated weights for policy 0, policy_version 34630 (0.0039) +[2024-06-10 21:41:48,239][46753] Fps is (10 sec: 37683.3, 60 sec: 43417.6, 300 sec: 43375.9). Total num frames: 567377920. Throughput: 0: 43465.3. Samples: 567488020. Policy #0 lag: (min: 0.0, avg: 8.2, max: 20.0) +[2024-06-10 21:41:48,242][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:41:51,093][46990] Updated weights for policy 0, policy_version 34640 (0.0037) +[2024-06-10 21:41:53,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43690.7, 300 sec: 43653.6). Total num frames: 567656448. Throughput: 0: 43394.6. Samples: 567741540. Policy #0 lag: (min: 0.0, avg: 8.0, max: 20.0) +[2024-06-10 21:41:53,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:41:54,479][46990] Updated weights for policy 0, policy_version 34650 (0.0027) +[2024-06-10 21:41:58,240][46753] Fps is (10 sec: 45875.0, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 567836672. Throughput: 0: 43426.3. Samples: 568007920. Policy #0 lag: (min: 0.0, avg: 8.0, max: 20.0) +[2024-06-10 21:41:58,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:41:58,816][46990] Updated weights for policy 0, policy_version 34660 (0.0030) +[2024-06-10 21:41:59,899][46970] Signal inference workers to stop experience collection... (8500 times) +[2024-06-10 21:41:59,899][46970] Signal inference workers to resume experience collection... (8500 times) +[2024-06-10 21:41:59,946][46990] InferenceWorker_p0-w0: stopping experience collection (8500 times) +[2024-06-10 21:41:59,952][46990] InferenceWorker_p0-w0: resuming experience collection (8500 times) +[2024-06-10 21:42:02,544][46990] Updated weights for policy 0, policy_version 34670 (0.0041) +[2024-06-10 21:42:03,239][46753] Fps is (10 sec: 37683.6, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 568033280. Throughput: 0: 43182.3. Samples: 568126860. Policy #0 lag: (min: 0.0, avg: 8.0, max: 20.0) +[2024-06-10 21:42:03,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:42:06,095][46990] Updated weights for policy 0, policy_version 34680 (0.0031) +[2024-06-10 21:42:08,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43690.6, 300 sec: 43653.7). Total num frames: 568311808. Throughput: 0: 43354.1. Samples: 568391080. Policy #0 lag: (min: 0.0, avg: 8.0, max: 20.0) +[2024-06-10 21:42:08,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:42:09,906][46990] Updated weights for policy 0, policy_version 34690 (0.0032) +[2024-06-10 21:42:13,239][46753] Fps is (10 sec: 44236.8, 60 sec: 42871.6, 300 sec: 43264.9). 
Total num frames: 568475648. Throughput: 0: 43332.9. Samples: 568655380. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:42:13,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:42:13,919][46990] Updated weights for policy 0, policy_version 34700 (0.0038) +[2024-06-10 21:42:17,463][46990] Updated weights for policy 0, policy_version 34710 (0.0038) +[2024-06-10 21:42:18,239][46753] Fps is (10 sec: 37683.2, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 568688640. Throughput: 0: 43141.4. Samples: 568776500. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:42:18,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:42:21,676][46990] Updated weights for policy 0, policy_version 34720 (0.0028) +[2024-06-10 21:42:23,240][46753] Fps is (10 sec: 47512.7, 60 sec: 43417.4, 300 sec: 43598.1). Total num frames: 568950784. Throughput: 0: 43247.9. Samples: 569043280. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:42:23,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:42:23,273][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000034727_568967168.pth... +[2024-06-10 21:42:23,330][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000034088_558497792.pth +[2024-06-10 21:42:25,562][46990] Updated weights for policy 0, policy_version 34730 (0.0043) +[2024-06-10 21:42:28,239][46753] Fps is (10 sec: 42598.1, 60 sec: 42871.4, 300 sec: 43264.8). Total num frames: 569114624. Throughput: 0: 43228.0. Samples: 569301940. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:42:28,240][46753] Avg episode reward: [(0, '0.284')] +[2024-06-10 21:42:29,314][46990] Updated weights for policy 0, policy_version 34740 (0.0028) +[2024-06-10 21:42:33,037][46990] Updated weights for policy 0, policy_version 34750 (0.0033) +[2024-06-10 21:42:33,244][46753] Fps is (10 sec: 39304.6, 60 sec: 43687.5, 300 sec: 43486.4). Total num frames: 569344000. Throughput: 0: 42990.5. Samples: 569422780. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:42:33,244][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:42:36,683][46990] Updated weights for policy 0, policy_version 34760 (0.0028) +[2024-06-10 21:42:38,239][46753] Fps is (10 sec: 49152.1, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 569606144. Throughput: 0: 43248.9. Samples: 569687740. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:42:38,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:42:40,223][46990] Updated weights for policy 0, policy_version 34770 (0.0032) +[2024-06-10 21:42:43,239][46753] Fps is (10 sec: 42617.6, 60 sec: 42871.5, 300 sec: 43264.9). Total num frames: 569769984. Throughput: 0: 43259.2. Samples: 569954580. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:42:43,240][46753] Avg episode reward: [(0, '0.279')] +[2024-06-10 21:42:44,270][46990] Updated weights for policy 0, policy_version 34780 (0.0030) +[2024-06-10 21:42:47,851][46990] Updated weights for policy 0, policy_version 34790 (0.0029) +[2024-06-10 21:42:48,240][46753] Fps is (10 sec: 39321.3, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 569999360. Throughput: 0: 43362.0. Samples: 570078160. 
Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:42:48,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:42:52,093][46990] Updated weights for policy 0, policy_version 34800 (0.0027) +[2024-06-10 21:42:53,239][46753] Fps is (10 sec: 49151.5, 60 sec: 43417.6, 300 sec: 43598.1). Total num frames: 570261504. Throughput: 0: 43433.3. Samples: 570345580. Policy #0 lag: (min: 0.0, avg: 8.4, max: 21.0) +[2024-06-10 21:42:53,241][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:42:55,484][46990] Updated weights for policy 0, policy_version 34810 (0.0027) +[2024-06-10 21:42:57,137][46970] Signal inference workers to stop experience collection... (8550 times) +[2024-06-10 21:42:57,164][46990] InferenceWorker_p0-w0: stopping experience collection (8550 times) +[2024-06-10 21:42:57,191][46970] Signal inference workers to resume experience collection... (8550 times) +[2024-06-10 21:42:57,192][46990] InferenceWorker_p0-w0: resuming experience collection (8550 times) +[2024-06-10 21:42:58,239][46753] Fps is (10 sec: 42599.2, 60 sec: 43144.6, 300 sec: 43320.4). Total num frames: 570425344. Throughput: 0: 43275.6. Samples: 570602780. Policy #0 lag: (min: 0.0, avg: 9.1, max: 20.0) +[2024-06-10 21:42:58,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:42:59,495][46990] Updated weights for policy 0, policy_version 34820 (0.0042) +[2024-06-10 21:43:02,748][46990] Updated weights for policy 0, policy_version 34830 (0.0035) +[2024-06-10 21:43:03,240][46753] Fps is (10 sec: 39321.5, 60 sec: 43690.6, 300 sec: 43542.5). Total num frames: 570654720. Throughput: 0: 43316.8. Samples: 570725760. Policy #0 lag: (min: 0.0, avg: 9.1, max: 20.0) +[2024-06-10 21:43:03,243][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:43:06,886][46990] Updated weights for policy 0, policy_version 34840 (0.0036) +[2024-06-10 21:43:08,239][46753] Fps is (10 sec: 47513.5, 60 sec: 43144.6, 300 sec: 43542.6). Total num frames: 570900480. Throughput: 0: 43314.4. Samples: 570992420. Policy #0 lag: (min: 0.0, avg: 9.1, max: 20.0) +[2024-06-10 21:43:08,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:43:10,076][46990] Updated weights for policy 0, policy_version 34850 (0.0034) +[2024-06-10 21:43:13,240][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 571080704. Throughput: 0: 43414.7. Samples: 571255600. Policy #0 lag: (min: 0.0, avg: 9.1, max: 20.0) +[2024-06-10 21:43:13,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:43:14,425][46990] Updated weights for policy 0, policy_version 34860 (0.0036) +[2024-06-10 21:43:17,711][46990] Updated weights for policy 0, policy_version 34870 (0.0045) +[2024-06-10 21:43:18,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 571310080. Throughput: 0: 43434.1. Samples: 571377120. Policy #0 lag: (min: 0.0, avg: 9.1, max: 20.0) +[2024-06-10 21:43:18,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:43:22,054][46990] Updated weights for policy 0, policy_version 34880 (0.0050) +[2024-06-10 21:43:23,239][46753] Fps is (10 sec: 45875.9, 60 sec: 43144.7, 300 sec: 43487.0). Total num frames: 571539456. Throughput: 0: 43430.4. Samples: 571642100. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:43:23,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:43:25,369][46990] Updated weights for policy 0, policy_version 34890 (0.0025) +[2024-06-10 21:43:28,244][46753] Fps is (10 sec: 42579.2, 60 sec: 43687.5, 300 sec: 43375.3). 
Total num frames: 571736064. Throughput: 0: 43260.1. Samples: 571901480. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:43:28,244][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:43:29,390][46990] Updated weights for policy 0, policy_version 34900 (0.0037) +[2024-06-10 21:43:32,697][46990] Updated weights for policy 0, policy_version 34910 (0.0044) +[2024-06-10 21:43:33,239][46753] Fps is (10 sec: 42597.9, 60 sec: 43693.9, 300 sec: 43542.5). Total num frames: 571965440. Throughput: 0: 43353.9. Samples: 572029080. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:43:33,243][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:43:37,015][46990] Updated weights for policy 0, policy_version 34920 (0.0027) +[2024-06-10 21:43:38,239][46753] Fps is (10 sec: 45895.9, 60 sec: 43144.6, 300 sec: 43542.6). Total num frames: 572194816. Throughput: 0: 43308.5. Samples: 572294460. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:43:38,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:43:39,904][46990] Updated weights for policy 0, policy_version 34930 (0.0042) +[2024-06-10 21:43:43,240][46753] Fps is (10 sec: 40959.9, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 572375040. Throughput: 0: 43291.0. Samples: 572550880. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:43:43,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:43:44,651][46990] Updated weights for policy 0, policy_version 34940 (0.0029) +[2024-06-10 21:43:47,815][46990] Updated weights for policy 0, policy_version 34950 (0.0037) +[2024-06-10 21:43:48,240][46753] Fps is (10 sec: 42598.0, 60 sec: 43690.7, 300 sec: 43542.7). Total num frames: 572620800. Throughput: 0: 43413.8. Samples: 572679380. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:43:48,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:43:52,158][46990] Updated weights for policy 0, policy_version 34960 (0.0036) +[2024-06-10 21:43:53,244][46753] Fps is (10 sec: 45854.9, 60 sec: 42868.3, 300 sec: 43486.4). Total num frames: 572833792. Throughput: 0: 43288.5. Samples: 572940600. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:43:53,245][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:43:55,691][46990] Updated weights for policy 0, policy_version 34970 (0.0038) +[2024-06-10 21:43:58,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43417.6, 300 sec: 43375.9). Total num frames: 573030400. Throughput: 0: 43235.2. Samples: 573201180. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:43:58,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:43:59,461][46990] Updated weights for policy 0, policy_version 34980 (0.0037) +[2024-06-10 21:44:02,987][46990] Updated weights for policy 0, policy_version 34990 (0.0027) +[2024-06-10 21:44:03,239][46753] Fps is (10 sec: 44256.7, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 573276160. Throughput: 0: 43422.2. Samples: 573331120. Policy #0 lag: (min: 0.0, avg: 8.3, max: 21.0) +[2024-06-10 21:44:03,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:44:04,563][46970] Signal inference workers to stop experience collection... (8600 times) +[2024-06-10 21:44:04,575][46970] Signal inference workers to resume experience collection... 
(8600 times) +[2024-06-10 21:44:04,580][46990] InferenceWorker_p0-w0: stopping experience collection (8600 times) +[2024-06-10 21:44:04,604][46990] InferenceWorker_p0-w0: resuming experience collection (8600 times) +[2024-06-10 21:44:07,149][46990] Updated weights for policy 0, policy_version 35000 (0.0031) +[2024-06-10 21:44:08,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 573489152. Throughput: 0: 43548.8. Samples: 573601800. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:44:08,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:44:10,264][46990] Updated weights for policy 0, policy_version 35010 (0.0034) +[2024-06-10 21:44:13,239][46753] Fps is (10 sec: 40960.4, 60 sec: 43417.7, 300 sec: 43320.4). Total num frames: 573685760. Throughput: 0: 43551.5. Samples: 573861100. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:44:13,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:44:14,666][46990] Updated weights for policy 0, policy_version 35020 (0.0036) +[2024-06-10 21:44:18,115][46990] Updated weights for policy 0, policy_version 35030 (0.0029) +[2024-06-10 21:44:18,239][46753] Fps is (10 sec: 44236.4, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 573931520. Throughput: 0: 43556.4. Samples: 573989120. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:44:18,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:44:22,000][46990] Updated weights for policy 0, policy_version 35040 (0.0033) +[2024-06-10 21:44:23,239][46753] Fps is (10 sec: 45875.1, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 574144512. Throughput: 0: 43492.9. Samples: 574251640. Policy #0 lag: (min: 0.0, avg: 8.9, max: 21.0) +[2024-06-10 21:44:23,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:44:23,469][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035045_574177280.pth... +[2024-06-10 21:44:23,527][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000034408_563740672.pth +[2024-06-10 21:44:25,819][46990] Updated weights for policy 0, policy_version 35050 (0.0026) +[2024-06-10 21:44:28,239][46753] Fps is (10 sec: 40960.6, 60 sec: 43420.9, 300 sec: 43376.6). Total num frames: 574341120. Throughput: 0: 43601.0. Samples: 574512920. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:44:28,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:44:29,718][46990] Updated weights for policy 0, policy_version 35060 (0.0047) +[2024-06-10 21:44:33,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 574586880. Throughput: 0: 43656.5. Samples: 574643920. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:44:33,241][46990] Updated weights for policy 0, policy_version 35070 (0.0041) +[2024-06-10 21:44:33,248][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:44:37,011][46990] Updated weights for policy 0, policy_version 35080 (0.0037) +[2024-06-10 21:44:38,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 574783488. Throughput: 0: 43661.3. Samples: 574905160. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:44:38,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:44:40,828][46990] Updated weights for policy 0, policy_version 35090 (0.0034) +[2024-06-10 21:44:43,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 574996480. 
Throughput: 0: 43623.5. Samples: 575164240. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:44:43,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:44:44,993][46990] Updated weights for policy 0, policy_version 35100 (0.0031) +[2024-06-10 21:44:48,239][46753] Fps is (10 sec: 44236.3, 60 sec: 43417.6, 300 sec: 43375.9). Total num frames: 575225856. Throughput: 0: 43689.3. Samples: 575297140. Policy #0 lag: (min: 0.0, avg: 10.3, max: 21.0) +[2024-06-10 21:44:48,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:44:48,455][46990] Updated weights for policy 0, policy_version 35110 (0.0028) +[2024-06-10 21:44:52,267][46990] Updated weights for policy 0, policy_version 35120 (0.0042) +[2024-06-10 21:44:53,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43147.8, 300 sec: 43320.4). Total num frames: 575422464. Throughput: 0: 43368.4. Samples: 575553380. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:44:53,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:44:55,979][46990] Updated weights for policy 0, policy_version 35130 (0.0027) +[2024-06-10 21:44:58,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 575651840. Throughput: 0: 43304.3. Samples: 575809800. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:44:58,240][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:45:00,156][46990] Updated weights for policy 0, policy_version 35140 (0.0029) +[2024-06-10 21:45:03,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43144.5, 300 sec: 43321.1). Total num frames: 575864832. Throughput: 0: 43368.1. Samples: 575940680. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:45:03,240][46753] Avg episode reward: [(0, '0.306')] +[2024-06-10 21:45:03,512][46990] Updated weights for policy 0, policy_version 35150 (0.0047) +[2024-06-10 21:45:07,526][46990] Updated weights for policy 0, policy_version 35160 (0.0033) +[2024-06-10 21:45:08,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43144.6, 300 sec: 43376.0). Total num frames: 576077824. Throughput: 0: 43347.6. Samples: 576202280. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:45:08,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:45:11,157][46990] Updated weights for policy 0, policy_version 35170 (0.0030) +[2024-06-10 21:45:13,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 576290816. Throughput: 0: 43376.8. Samples: 576464880. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 21:45:13,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:45:15,357][46990] Updated weights for policy 0, policy_version 35180 (0.0028) +[2024-06-10 21:45:18,244][46753] Fps is (10 sec: 44216.7, 60 sec: 43141.4, 300 sec: 43375.3). Total num frames: 576520192. Throughput: 0: 43259.7. Samples: 576590800. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 21:45:18,244][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:45:18,802][46990] Updated weights for policy 0, policy_version 35190 (0.0033) +[2024-06-10 21:45:22,813][46990] Updated weights for policy 0, policy_version 35200 (0.0041) +[2024-06-10 21:45:23,239][46753] Fps is (10 sec: 42598.7, 60 sec: 42871.5, 300 sec: 43321.1). Total num frames: 576716800. Throughput: 0: 43192.8. Samples: 576848840. 
Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 21:45:23,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:45:26,271][46990] Updated weights for policy 0, policy_version 35210 (0.0030) +[2024-06-10 21:45:28,239][46753] Fps is (10 sec: 42617.5, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 576946176. Throughput: 0: 43245.4. Samples: 577110280. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 21:45:28,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:45:30,249][46990] Updated weights for policy 0, policy_version 35220 (0.0035) +[2024-06-10 21:45:33,240][46753] Fps is (10 sec: 44236.4, 60 sec: 42871.4, 300 sec: 43375.9). Total num frames: 577159168. Throughput: 0: 43250.6. Samples: 577243420. Policy #0 lag: (min: 0.0, avg: 9.8, max: 21.0) +[2024-06-10 21:45:33,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:45:33,841][46990] Updated weights for policy 0, policy_version 35230 (0.0045) +[2024-06-10 21:45:38,215][46990] Updated weights for policy 0, policy_version 35240 (0.0034) +[2024-06-10 21:45:38,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43144.4, 300 sec: 43320.4). Total num frames: 577372160. Throughput: 0: 43320.0. Samples: 577502780. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 21:45:38,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:45:40,801][46970] Signal inference workers to stop experience collection... (8650 times) +[2024-06-10 21:45:40,801][46970] Signal inference workers to resume experience collection... (8650 times) +[2024-06-10 21:45:40,818][46990] InferenceWorker_p0-w0: stopping experience collection (8650 times) +[2024-06-10 21:45:40,818][46990] InferenceWorker_p0-w0: resuming experience collection (8650 times) +[2024-06-10 21:45:41,506][46990] Updated weights for policy 0, policy_version 35250 (0.0031) +[2024-06-10 21:45:43,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 577585152. Throughput: 0: 43248.9. Samples: 577756000. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 21:45:43,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:45:45,858][46990] Updated weights for policy 0, policy_version 35260 (0.0039) +[2024-06-10 21:45:48,239][46753] Fps is (10 sec: 45875.6, 60 sec: 43417.7, 300 sec: 43376.0). Total num frames: 577830912. Throughput: 0: 43286.3. Samples: 577888560. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 21:45:48,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:45:48,887][46990] Updated weights for policy 0, policy_version 35270 (0.0042) +[2024-06-10 21:45:53,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43144.5, 300 sec: 43264.9). Total num frames: 578011136. Throughput: 0: 43271.5. Samples: 578149500. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 21:45:53,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:45:53,483][46990] Updated weights for policy 0, policy_version 35280 (0.0034) +[2024-06-10 21:45:56,574][46990] Updated weights for policy 0, policy_version 35290 (0.0037) +[2024-06-10 21:45:58,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43144.5, 300 sec: 43487.0). Total num frames: 578240512. Throughput: 0: 43093.3. Samples: 578404080. Policy #0 lag: (min: 0.0, avg: 9.4, max: 22.0) +[2024-06-10 21:45:58,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:46:00,819][46990] Updated weights for policy 0, policy_version 35300 (0.0037) +[2024-06-10 21:46:03,239][46753] Fps is (10 sec: 44237.5, 60 sec: 43144.6, 300 sec: 43264.9). 
Total num frames: 578453504. Throughput: 0: 43336.0. Samples: 578540720. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:46:03,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:46:03,819][46990] Updated weights for policy 0, policy_version 35310 (0.0037) +[2024-06-10 21:46:08,239][46753] Fps is (10 sec: 40960.4, 60 sec: 42871.4, 300 sec: 43209.4). Total num frames: 578650112. Throughput: 0: 43440.0. Samples: 578803640. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:46:08,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:46:08,641][46990] Updated weights for policy 0, policy_version 35320 (0.0057) +[2024-06-10 21:46:11,512][46990] Updated weights for policy 0, policy_version 35330 (0.0044) +[2024-06-10 21:46:13,239][46753] Fps is (10 sec: 44235.9, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 578895872. Throughput: 0: 43357.7. Samples: 579061380. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:46:13,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:46:15,993][46990] Updated weights for policy 0, policy_version 35340 (0.0036) +[2024-06-10 21:46:18,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43420.8, 300 sec: 43320.4). Total num frames: 579125248. Throughput: 0: 43276.5. Samples: 579190860. Policy #0 lag: (min: 0.0, avg: 10.0, max: 21.0) +[2024-06-10 21:46:18,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:46:18,963][46990] Updated weights for policy 0, policy_version 35350 (0.0029) +[2024-06-10 21:46:23,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43417.6, 300 sec: 43320.4). Total num frames: 579321856. Throughput: 0: 43389.4. Samples: 579455300. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:46:23,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:46:23,254][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035359_579321856.pth... +[2024-06-10 21:46:23,299][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000034727_568967168.pth +[2024-06-10 21:46:23,573][46990] Updated weights for policy 0, policy_version 35360 (0.0034) +[2024-06-10 21:46:26,376][46990] Updated weights for policy 0, policy_version 35370 (0.0039) +[2024-06-10 21:46:28,239][46753] Fps is (10 sec: 40960.3, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 579534848. Throughput: 0: 43419.3. Samples: 579709860. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:46:28,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:46:31,321][46990] Updated weights for policy 0, policy_version 35380 (0.0034) +[2024-06-10 21:46:33,239][46753] Fps is (10 sec: 44236.8, 60 sec: 43417.6, 300 sec: 43264.9). Total num frames: 579764224. Throughput: 0: 43567.1. Samples: 579849080. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:46:33,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:46:34,061][46990] Updated weights for policy 0, policy_version 35390 (0.0041) +[2024-06-10 21:46:38,239][46753] Fps is (10 sec: 42598.1, 60 sec: 43144.6, 300 sec: 43264.9). Total num frames: 579960832. Throughput: 0: 43523.6. Samples: 580108060. 
Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:46:38,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:46:38,738][46990] Updated weights for policy 0, policy_version 35400 (0.0028) +[2024-06-10 21:46:41,465][46990] Updated weights for policy 0, policy_version 35410 (0.0041) +[2024-06-10 21:46:43,244][46753] Fps is (10 sec: 44217.0, 60 sec: 43687.5, 300 sec: 43486.4). Total num frames: 580206592. Throughput: 0: 43596.6. Samples: 580366120. Policy #0 lag: (min: 0.0, avg: 11.0, max: 21.0) +[2024-06-10 21:46:43,244][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:46:46,083][46990] Updated weights for policy 0, policy_version 35420 (0.0033) +[2024-06-10 21:46:48,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43144.5, 300 sec: 43264.9). Total num frames: 580419584. Throughput: 0: 43462.1. Samples: 580496520. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 21:46:48,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:46:48,949][46990] Updated weights for policy 0, policy_version 35430 (0.0036) +[2024-06-10 21:46:53,240][46753] Fps is (10 sec: 40977.7, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 580616192. Throughput: 0: 43338.1. Samples: 580753860. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 21:46:53,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:46:53,717][46990] Updated weights for policy 0, policy_version 35440 (0.0035) +[2024-06-10 21:46:56,634][46990] Updated weights for policy 0, policy_version 35450 (0.0039) +[2024-06-10 21:46:58,239][46753] Fps is (10 sec: 44237.0, 60 sec: 43690.8, 300 sec: 43487.0). Total num frames: 580861952. Throughput: 0: 43323.2. Samples: 581010920. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 21:46:58,240][46753] Avg episode reward: [(0, '0.282')] +[2024-06-10 21:47:01,370][46990] Updated weights for policy 0, policy_version 35460 (0.0035) +[2024-06-10 21:47:02,897][46970] Signal inference workers to stop experience collection... (8700 times) +[2024-06-10 21:47:02,940][46990] InferenceWorker_p0-w0: stopping experience collection (8700 times) +[2024-06-10 21:47:02,943][46970] Signal inference workers to resume experience collection... (8700 times) +[2024-06-10 21:47:02,949][46990] InferenceWorker_p0-w0: resuming experience collection (8700 times) +[2024-06-10 21:47:03,239][46753] Fps is (10 sec: 45876.2, 60 sec: 43690.6, 300 sec: 43264.9). Total num frames: 581074944. Throughput: 0: 43595.1. Samples: 581152640. Policy #0 lag: (min: 0.0, avg: 10.2, max: 22.0) +[2024-06-10 21:47:03,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:47:04,143][46990] Updated weights for policy 0, policy_version 35470 (0.0033) +[2024-06-10 21:47:08,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43690.7, 300 sec: 43375.9). Total num frames: 581271552. Throughput: 0: 43444.5. Samples: 581410300. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 21:47:08,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:47:08,636][46990] Updated weights for policy 0, policy_version 35480 (0.0041) +[2024-06-10 21:47:11,535][46990] Updated weights for policy 0, policy_version 35490 (0.0028) +[2024-06-10 21:47:13,241][46753] Fps is (10 sec: 44231.0, 60 sec: 43689.8, 300 sec: 43486.8). Total num frames: 581517312. Throughput: 0: 43520.5. Samples: 581668340. 
Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 21:47:13,241][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:47:16,505][46990] Updated weights for policy 0, policy_version 35500 (0.0039) +[2024-06-10 21:47:18,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43144.6, 300 sec: 43264.9). Total num frames: 581713920. Throughput: 0: 43330.8. Samples: 581798960. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 21:47:18,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:47:19,338][46990] Updated weights for policy 0, policy_version 35510 (0.0042) +[2024-06-10 21:47:23,244][46753] Fps is (10 sec: 40946.7, 60 sec: 43414.3, 300 sec: 43430.8). Total num frames: 581926912. Throughput: 0: 43309.4. Samples: 582057180. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 21:47:23,245][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:47:23,729][46990] Updated weights for policy 0, policy_version 35520 (0.0035) +[2024-06-10 21:47:26,858][46990] Updated weights for policy 0, policy_version 35530 (0.0039) +[2024-06-10 21:47:28,240][46753] Fps is (10 sec: 45874.5, 60 sec: 43963.6, 300 sec: 43487.7). Total num frames: 582172672. Throughput: 0: 43169.1. Samples: 582308540. Policy #0 lag: (min: 0.0, avg: 8.7, max: 20.0) +[2024-06-10 21:47:28,241][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:47:31,357][46990] Updated weights for policy 0, policy_version 35540 (0.0027) +[2024-06-10 21:47:33,240][46753] Fps is (10 sec: 44255.9, 60 sec: 43417.5, 300 sec: 43264.9). Total num frames: 582369280. Throughput: 0: 43493.6. Samples: 582453740. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 21:47:33,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:47:34,559][46990] Updated weights for policy 0, policy_version 35550 (0.0049) +[2024-06-10 21:47:38,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 582582272. Throughput: 0: 43491.2. Samples: 582710960. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 21:47:38,240][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:47:39,115][46990] Updated weights for policy 0, policy_version 35560 (0.0039) +[2024-06-10 21:47:42,063][46990] Updated weights for policy 0, policy_version 35570 (0.0035) +[2024-06-10 21:47:43,239][46753] Fps is (10 sec: 45876.0, 60 sec: 43693.9, 300 sec: 43487.0). Total num frames: 582828032. Throughput: 0: 43347.1. Samples: 582961540. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 21:47:43,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:47:46,565][46990] Updated weights for policy 0, policy_version 35580 (0.0033) +[2024-06-10 21:47:48,239][46753] Fps is (10 sec: 40960.1, 60 sec: 42871.5, 300 sec: 43153.8). Total num frames: 582991872. Throughput: 0: 43162.2. Samples: 583094940. Policy #0 lag: (min: 0.0, avg: 9.5, max: 21.0) +[2024-06-10 21:47:48,240][46753] Avg episode reward: [(0, '0.304')] +[2024-06-10 21:47:49,799][46990] Updated weights for policy 0, policy_version 35590 (0.0034) +[2024-06-10 21:47:53,239][46753] Fps is (10 sec: 39321.6, 60 sec: 43417.7, 300 sec: 43375.9). Total num frames: 583221248. Throughput: 0: 43272.4. Samples: 583357560. 
Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:47:53,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:47:53,768][46990] Updated weights for policy 0, policy_version 35600 (0.0031) +[2024-06-10 21:47:57,144][46990] Updated weights for policy 0, policy_version 35610 (0.0031) +[2024-06-10 21:47:58,244][46753] Fps is (10 sec: 49130.1, 60 sec: 43687.4, 300 sec: 43486.4). Total num frames: 583483392. Throughput: 0: 43170.7. Samples: 583611160. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:47:58,244][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:48:01,680][46990] Updated weights for policy 0, policy_version 35620 (0.0051) +[2024-06-10 21:48:03,239][46753] Fps is (10 sec: 45874.9, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 583680000. Throughput: 0: 43423.4. Samples: 583753020. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:03,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:48:04,815][46990] Updated weights for policy 0, policy_version 35630 (0.0043) +[2024-06-10 21:48:08,240][46753] Fps is (10 sec: 40977.8, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 583892992. Throughput: 0: 43395.4. Samples: 584009780. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:08,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:48:09,253][46990] Updated weights for policy 0, policy_version 35640 (0.0041) +[2024-06-10 21:48:12,553][46990] Updated weights for policy 0, policy_version 35650 (0.0037) +[2024-06-10 21:48:12,567][46970] Signal inference workers to stop experience collection... (8750 times) +[2024-06-10 21:48:12,568][46970] Signal inference workers to resume experience collection... (8750 times) +[2024-06-10 21:48:12,599][46990] InferenceWorker_p0-w0: stopping experience collection (8750 times) +[2024-06-10 21:48:12,599][46990] InferenceWorker_p0-w0: resuming experience collection (8750 times) +[2024-06-10 21:48:13,239][46753] Fps is (10 sec: 45875.7, 60 sec: 43691.6, 300 sec: 43487.0). Total num frames: 584138752. Throughput: 0: 43474.8. Samples: 584264900. Policy #0 lag: (min: 0.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:13,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:48:16,686][46990] Updated weights for policy 0, policy_version 35660 (0.0032) +[2024-06-10 21:48:18,239][46753] Fps is (10 sec: 42599.0, 60 sec: 43417.6, 300 sec: 43320.4). Total num frames: 584318976. Throughput: 0: 43122.9. Samples: 584394260. Policy #0 lag: (min: 1.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:18,240][46753] Avg episode reward: [(0, '0.305')] +[2024-06-10 21:48:19,889][46990] Updated weights for policy 0, policy_version 35670 (0.0038) +[2024-06-10 21:48:23,239][46753] Fps is (10 sec: 40959.9, 60 sec: 43694.0, 300 sec: 43432.1). Total num frames: 584548352. Throughput: 0: 43257.4. Samples: 584657540. Policy #0 lag: (min: 1.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:23,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:48:23,255][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035678_584548352.pth... +[2024-06-10 21:48:23,314][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035045_574177280.pth +[2024-06-10 21:48:23,849][46990] Updated weights for policy 0, policy_version 35680 (0.0044) +[2024-06-10 21:48:27,499][46990] Updated weights for policy 0, policy_version 35690 (0.0034) +[2024-06-10 21:48:28,239][46753] Fps is (10 sec: 45875.3, 60 sec: 43417.7, 300 sec: 43431.5). 
Total num frames: 584777728. Throughput: 0: 43457.8. Samples: 584917140. Policy #0 lag: (min: 1.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:28,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:48:31,654][46990] Updated weights for policy 0, policy_version 35700 (0.0050) +[2024-06-10 21:48:33,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43144.7, 300 sec: 43264.9). Total num frames: 584957952. Throughput: 0: 43452.0. Samples: 585050280. Policy #0 lag: (min: 1.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:33,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:48:35,141][46990] Updated weights for policy 0, policy_version 35710 (0.0035) +[2024-06-10 21:48:38,244][46753] Fps is (10 sec: 40941.5, 60 sec: 43414.4, 300 sec: 43430.8). Total num frames: 585187328. Throughput: 0: 43317.1. Samples: 585307020. Policy #0 lag: (min: 1.0, avg: 9.2, max: 21.0) +[2024-06-10 21:48:38,244][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:48:39,455][46990] Updated weights for policy 0, policy_version 35720 (0.0043) +[2024-06-10 21:48:42,646][46990] Updated weights for policy 0, policy_version 35730 (0.0046) +[2024-06-10 21:48:43,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43144.5, 300 sec: 43376.0). Total num frames: 585416704. Throughput: 0: 43499.4. Samples: 585568440. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 21:48:43,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:48:46,706][46990] Updated weights for policy 0, policy_version 35740 (0.0027) +[2024-06-10 21:48:48,239][46753] Fps is (10 sec: 42617.7, 60 sec: 43690.7, 300 sec: 43321.1). Total num frames: 585613312. Throughput: 0: 43185.9. Samples: 585696380. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 21:48:48,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:48:50,185][46990] Updated weights for policy 0, policy_version 35750 (0.0037) +[2024-06-10 21:48:53,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 585842688. Throughput: 0: 43244.4. Samples: 585955780. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 21:48:53,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:48:53,918][46990] Updated weights for policy 0, policy_version 35760 (0.0040) +[2024-06-10 21:48:57,950][46990] Updated weights for policy 0, policy_version 35770 (0.0034) +[2024-06-10 21:48:58,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43147.7, 300 sec: 43375.9). Total num frames: 586072064. Throughput: 0: 43446.1. Samples: 586219980. Policy #0 lag: (min: 0.0, avg: 10.2, max: 23.0) +[2024-06-10 21:48:58,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:49:01,862][46990] Updated weights for policy 0, policy_version 35780 (0.0030) +[2024-06-10 21:49:03,239][46753] Fps is (10 sec: 42598.9, 60 sec: 43144.6, 300 sec: 43320.4). Total num frames: 586268672. Throughput: 0: 43393.7. Samples: 586346980. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:49:03,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:49:05,288][46990] Updated weights for policy 0, policy_version 35790 (0.0039) +[2024-06-10 21:49:08,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43417.7, 300 sec: 43431.5). Total num frames: 586498048. Throughput: 0: 43406.3. Samples: 586610820. 
Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:49:08,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:49:09,541][46990] Updated weights for policy 0, policy_version 35800 (0.0042) +[2024-06-10 21:49:12,743][46990] Updated weights for policy 0, policy_version 35810 (0.0029) +[2024-06-10 21:49:13,240][46753] Fps is (10 sec: 44236.3, 60 sec: 42871.3, 300 sec: 43320.4). Total num frames: 586711040. Throughput: 0: 43347.4. Samples: 586867780. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:49:13,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:49:16,894][46990] Updated weights for policy 0, policy_version 35820 (0.0030) +[2024-06-10 21:49:18,240][46753] Fps is (10 sec: 42597.7, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 586924032. Throughput: 0: 43290.5. Samples: 586998360. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:49:18,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:49:20,650][46990] Updated weights for policy 0, policy_version 35830 (0.0032) +[2024-06-10 21:49:23,239][46753] Fps is (10 sec: 44237.6, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 587153408. Throughput: 0: 43341.2. Samples: 587257180. Policy #0 lag: (min: 0.0, avg: 10.5, max: 23.0) +[2024-06-10 21:49:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:49:24,201][46990] Updated weights for policy 0, policy_version 35840 (0.0035) +[2024-06-10 21:49:27,968][46990] Updated weights for policy 0, policy_version 35850 (0.0040) +[2024-06-10 21:49:28,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 587366400. Throughput: 0: 43383.5. Samples: 587520700. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:49:28,242][46753] Avg episode reward: [(0, '0.288')] +[2024-06-10 21:49:31,965][46990] Updated weights for policy 0, policy_version 35860 (0.0038) +[2024-06-10 21:49:33,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43690.7, 300 sec: 43375.9). Total num frames: 587579392. Throughput: 0: 43520.0. Samples: 587654780. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:49:33,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:49:35,268][46990] Updated weights for policy 0, policy_version 35870 (0.0036) +[2024-06-10 21:49:38,240][46753] Fps is (10 sec: 42598.0, 60 sec: 43420.7, 300 sec: 43375.9). Total num frames: 587792384. Throughput: 0: 43470.7. Samples: 587911960. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:49:38,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:49:39,442][46990] Updated weights for policy 0, policy_version 35880 (0.0026) +[2024-06-10 21:49:42,977][46990] Updated weights for policy 0, policy_version 35890 (0.0044) +[2024-06-10 21:49:43,240][46753] Fps is (10 sec: 44236.3, 60 sec: 43417.5, 300 sec: 43375.9). Total num frames: 588021760. Throughput: 0: 43386.6. Samples: 588172380. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:49:43,243][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:49:47,132][46990] Updated weights for policy 0, policy_version 35900 (0.0039) +[2024-06-10 21:49:48,239][46753] Fps is (10 sec: 44237.4, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 588234752. Throughput: 0: 43504.0. Samples: 588304660. 
Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 21:49:48,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:49:50,513][46990] Updated weights for policy 0, policy_version 35910 (0.0042) +[2024-06-10 21:49:53,239][46753] Fps is (10 sec: 42598.6, 60 sec: 43417.7, 300 sec: 43376.0). Total num frames: 588447744. Throughput: 0: 43309.7. Samples: 588559760. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 21:49:53,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:49:54,705][46990] Updated weights for policy 0, policy_version 35920 (0.0032) +[2024-06-10 21:49:58,148][46990] Updated weights for policy 0, policy_version 35930 (0.0047) +[2024-06-10 21:49:58,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43417.7, 300 sec: 43431.5). Total num frames: 588677120. Throughput: 0: 43518.4. Samples: 588826100. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 21:49:58,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:50:01,932][46990] Updated weights for policy 0, policy_version 35940 (0.0036) +[2024-06-10 21:50:03,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.6, 300 sec: 43375.9). Total num frames: 588873728. Throughput: 0: 43589.8. Samples: 588959900. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 21:50:03,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:50:04,241][46970] Signal inference workers to stop experience collection... (8800 times) +[2024-06-10 21:50:04,242][46970] Signal inference workers to resume experience collection... (8800 times) +[2024-06-10 21:50:04,286][46990] InferenceWorker_p0-w0: stopping experience collection (8800 times) +[2024-06-10 21:50:04,286][46990] InferenceWorker_p0-w0: resuming experience collection (8800 times) +[2024-06-10 21:50:05,461][46990] Updated weights for policy 0, policy_version 35950 (0.0043) +[2024-06-10 21:50:08,239][46753] Fps is (10 sec: 42597.8, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 589103104. Throughput: 0: 43605.7. Samples: 589219440. Policy #0 lag: (min: 0.0, avg: 10.4, max: 22.0) +[2024-06-10 21:50:08,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:50:09,539][46990] Updated weights for policy 0, policy_version 35960 (0.0044) +[2024-06-10 21:50:13,240][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.6, 300 sec: 43376.6). Total num frames: 589316096. Throughput: 0: 43516.8. Samples: 589478960. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:50:13,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:50:13,250][46990] Updated weights for policy 0, policy_version 35970 (0.0043) +[2024-06-10 21:50:17,170][46990] Updated weights for policy 0, policy_version 35980 (0.0034) +[2024-06-10 21:50:18,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.8, 300 sec: 43487.0). Total num frames: 589545472. Throughput: 0: 43432.0. Samples: 589609220. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:50:18,240][46753] Avg episode reward: [(0, '0.302')] +[2024-06-10 21:50:20,893][46990] Updated weights for policy 0, policy_version 35990 (0.0033) +[2024-06-10 21:50:23,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43144.5, 300 sec: 43375.9). Total num frames: 589742080. Throughput: 0: 43428.1. Samples: 589866220. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:50:23,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:50:23,245][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035995_589742080.pth... 
+[2024-06-10 21:50:23,299][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035359_579321856.pth +[2024-06-10 21:50:24,643][46990] Updated weights for policy 0, policy_version 36000 (0.0047) +[2024-06-10 21:50:28,244][46753] Fps is (10 sec: 40941.3, 60 sec: 43141.3, 300 sec: 43375.3). Total num frames: 589955072. Throughput: 0: 43409.9. Samples: 590126020. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:50:28,245][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:50:28,706][46990] Updated weights for policy 0, policy_version 36010 (0.0042) +[2024-06-10 21:50:32,075][46990] Updated weights for policy 0, policy_version 36020 (0.0037) +[2024-06-10 21:50:33,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 590184448. Throughput: 0: 43375.2. Samples: 590256540. Policy #0 lag: (min: 0.0, avg: 9.9, max: 22.0) +[2024-06-10 21:50:33,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:50:35,909][46990] Updated weights for policy 0, policy_version 36030 (0.0043) +[2024-06-10 21:50:38,240][46753] Fps is (10 sec: 45895.7, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 590413824. Throughput: 0: 43663.0. Samples: 590524600. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:50:38,240][46753] Avg episode reward: [(0, '0.293')] +[2024-06-10 21:50:39,519][46990] Updated weights for policy 0, policy_version 36040 (0.0028) +[2024-06-10 21:50:43,240][46753] Fps is (10 sec: 42597.9, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 590610432. Throughput: 0: 43472.8. Samples: 590782380. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:50:43,240][46753] Avg episode reward: [(0, '0.291')] +[2024-06-10 21:50:43,623][46990] Updated weights for policy 0, policy_version 36050 (0.0032) +[2024-06-10 21:50:47,162][46990] Updated weights for policy 0, policy_version 36060 (0.0027) +[2024-06-10 21:50:48,239][46753] Fps is (10 sec: 42598.8, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 590839808. Throughput: 0: 43320.9. Samples: 590909340. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:50:48,240][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:50:51,345][46990] Updated weights for policy 0, policy_version 36070 (0.0032) +[2024-06-10 21:50:53,240][46753] Fps is (10 sec: 44236.8, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 591052800. Throughput: 0: 43432.5. Samples: 591173900. Policy #0 lag: (min: 0.0, avg: 10.0, max: 22.0) +[2024-06-10 21:50:53,240][46753] Avg episode reward: [(0, '0.287')] +[2024-06-10 21:50:54,710][46990] Updated weights for policy 0, policy_version 36080 (0.0041) +[2024-06-10 21:50:58,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43144.5, 300 sec: 43431.5). Total num frames: 591265792. Throughput: 0: 43448.6. Samples: 591434140. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:50:58,240][46753] Avg episode reward: [(0, '0.310')] +[2024-06-10 21:50:58,240][46970] Saving new best policy, reward=0.310! +[2024-06-10 21:50:58,932][46990] Updated weights for policy 0, policy_version 36090 (0.0037) +[2024-06-10 21:51:02,127][46990] Updated weights for policy 0, policy_version 36100 (0.0032) +[2024-06-10 21:51:03,239][46753] Fps is (10 sec: 44237.1, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 591495168. Throughput: 0: 43451.1. Samples: 591564520. 
Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:51:03,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:51:06,499][46990] Updated weights for policy 0, policy_version 36110 (0.0024) +[2024-06-10 21:51:08,239][46753] Fps is (10 sec: 44236.6, 60 sec: 43417.7, 300 sec: 43431.5). Total num frames: 591708160. Throughput: 0: 43620.9. Samples: 591829160. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:51:08,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:51:09,833][46990] Updated weights for policy 0, policy_version 36120 (0.0050) +[2024-06-10 21:51:13,239][46753] Fps is (10 sec: 39321.6, 60 sec: 42871.5, 300 sec: 43264.9). Total num frames: 591888384. Throughput: 0: 43640.9. Samples: 592089660. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:51:13,252][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:51:14,171][46990] Updated weights for policy 0, policy_version 36130 (0.0028) +[2024-06-10 21:51:17,429][46990] Updated weights for policy 0, policy_version 36140 (0.0030) +[2024-06-10 21:51:18,239][46753] Fps is (10 sec: 44236.7, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 592150528. Throughput: 0: 43476.8. Samples: 592213000. Policy #0 lag: (min: 0.0, avg: 9.4, max: 21.0) +[2024-06-10 21:51:18,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:51:21,725][46990] Updated weights for policy 0, policy_version 36150 (0.0033) +[2024-06-10 21:51:23,239][46753] Fps is (10 sec: 47513.8, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 592363520. Throughput: 0: 43381.5. Samples: 592476760. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:51:23,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:51:24,746][46990] Updated weights for policy 0, policy_version 36160 (0.0036) +[2024-06-10 21:51:28,239][46753] Fps is (10 sec: 39321.8, 60 sec: 43147.8, 300 sec: 43320.4). Total num frames: 592543744. Throughput: 0: 43490.8. Samples: 592739460. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:51:28,240][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:51:29,252][46990] Updated weights for policy 0, policy_version 36170 (0.0026) +[2024-06-10 21:51:32,142][46990] Updated weights for policy 0, policy_version 36180 (0.0031) +[2024-06-10 21:51:33,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43417.6, 300 sec: 43487.0). Total num frames: 592789504. Throughput: 0: 43541.8. Samples: 592868720. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:51:33,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:51:33,962][46970] Signal inference workers to stop experience collection... (8850 times) +[2024-06-10 21:51:33,962][46970] Signal inference workers to resume experience collection... (8850 times) +[2024-06-10 21:51:33,989][46990] InferenceWorker_p0-w0: stopping experience collection (8850 times) +[2024-06-10 21:51:33,989][46990] InferenceWorker_p0-w0: resuming experience collection (8850 times) +[2024-06-10 21:51:36,668][46990] Updated weights for policy 0, policy_version 36190 (0.0034) +[2024-06-10 21:51:38,239][46753] Fps is (10 sec: 45875.2, 60 sec: 43144.6, 300 sec: 43376.6). Total num frames: 593002496. Throughput: 0: 43538.8. Samples: 593133140. 
Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:51:38,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:51:39,842][46990] Updated weights for policy 0, policy_version 36200 (0.0039) +[2024-06-10 21:51:43,240][46753] Fps is (10 sec: 40959.4, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 593199104. Throughput: 0: 43531.8. Samples: 593393080. Policy #0 lag: (min: 0.0, avg: 10.6, max: 21.0) +[2024-06-10 21:51:43,243][46753] Avg episode reward: [(0, '0.295')] +[2024-06-10 21:51:44,542][46990] Updated weights for policy 0, policy_version 36210 (0.0039) +[2024-06-10 21:51:47,235][46990] Updated weights for policy 0, policy_version 36220 (0.0033) +[2024-06-10 21:51:48,240][46753] Fps is (10 sec: 44234.5, 60 sec: 43417.3, 300 sec: 43487.0). Total num frames: 593444864. Throughput: 0: 43408.4. Samples: 593517920. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:51:48,240][46753] Avg episode reward: [(0, '0.301')] +[2024-06-10 21:51:52,015][46990] Updated weights for policy 0, policy_version 36230 (0.0044) +[2024-06-10 21:51:53,242][46753] Fps is (10 sec: 44227.2, 60 sec: 43142.9, 300 sec: 43320.1). Total num frames: 593641472. Throughput: 0: 43460.9. Samples: 593785000. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:51:53,242][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:51:54,557][46990] Updated weights for policy 0, policy_version 36240 (0.0028) +[2024-06-10 21:51:58,239][46753] Fps is (10 sec: 40962.2, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 593854464. Throughput: 0: 43349.8. Samples: 594040400. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:51:58,240][46753] Avg episode reward: [(0, '0.278')] +[2024-06-10 21:51:59,691][46990] Updated weights for policy 0, policy_version 36250 (0.0036) +[2024-06-10 21:52:02,153][46990] Updated weights for policy 0, policy_version 36260 (0.0040) +[2024-06-10 21:52:03,239][46753] Fps is (10 sec: 47524.6, 60 sec: 43690.7, 300 sec: 43542.6). Total num frames: 594116608. Throughput: 0: 43584.0. Samples: 594174280. Policy #0 lag: (min: 0.0, avg: 7.8, max: 20.0) +[2024-06-10 21:52:03,240][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:52:07,220][46990] Updated weights for policy 0, policy_version 36270 (0.0027) +[2024-06-10 21:52:08,239][46753] Fps is (10 sec: 44236.2, 60 sec: 43144.5, 300 sec: 43320.6). Total num frames: 594296832. Throughput: 0: 43600.8. Samples: 594438800. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:52:08,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:52:09,992][46990] Updated weights for policy 0, policy_version 36280 (0.0042) +[2024-06-10 21:52:13,244][46753] Fps is (10 sec: 39303.8, 60 sec: 43687.4, 300 sec: 43375.3). Total num frames: 594509824. Throughput: 0: 43427.6. Samples: 594693900. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:52:13,245][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:52:15,086][46990] Updated weights for policy 0, policy_version 36290 (0.0040) +[2024-06-10 21:52:17,530][46990] Updated weights for policy 0, policy_version 36300 (0.0027) +[2024-06-10 21:52:18,244][46753] Fps is (10 sec: 45855.1, 60 sec: 43414.4, 300 sec: 43487.0). Total num frames: 594755584. Throughput: 0: 43444.5. Samples: 594823920. 
Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:52:18,245][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:52:22,506][46990] Updated weights for policy 0, policy_version 36310 (0.0039) +[2024-06-10 21:52:23,239][46753] Fps is (10 sec: 42617.4, 60 sec: 42871.4, 300 sec: 43264.9). Total num frames: 594935808. Throughput: 0: 43379.0. Samples: 595085200. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:52:23,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:52:23,247][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000036312_594935808.pth... +[2024-06-10 21:52:23,321][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035678_584548352.pth +[2024-06-10 21:52:24,941][46990] Updated weights for policy 0, policy_version 36320 (0.0042) +[2024-06-10 21:52:28,239][46753] Fps is (10 sec: 40978.4, 60 sec: 43690.7, 300 sec: 43376.0). Total num frames: 595165184. Throughput: 0: 43222.4. Samples: 595338080. Policy #0 lag: (min: 0.0, avg: 9.1, max: 22.0) +[2024-06-10 21:52:28,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:52:30,189][46990] Updated weights for policy 0, policy_version 36330 (0.0038) +[2024-06-10 21:52:31,686][46970] Signal inference workers to stop experience collection... (8900 times) +[2024-06-10 21:52:31,742][46990] InferenceWorker_p0-w0: stopping experience collection (8900 times) +[2024-06-10 21:52:31,745][46970] Signal inference workers to resume experience collection... (8900 times) +[2024-06-10 21:52:31,757][46990] InferenceWorker_p0-w0: resuming experience collection (8900 times) +[2024-06-10 21:52:32,612][46990] Updated weights for policy 0, policy_version 36340 (0.0026) +[2024-06-10 21:52:33,241][46753] Fps is (10 sec: 47507.6, 60 sec: 43689.7, 300 sec: 43486.8). Total num frames: 595410944. Throughput: 0: 43460.5. Samples: 595473680. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 21:52:33,241][46753] Avg episode reward: [(0, '0.298')] +[2024-06-10 21:52:37,627][46990] Updated weights for policy 0, policy_version 36350 (0.0041) +[2024-06-10 21:52:38,239][46753] Fps is (10 sec: 40960.2, 60 sec: 42871.5, 300 sec: 43209.3). Total num frames: 595574784. Throughput: 0: 43288.9. Samples: 595732900. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 21:52:38,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:52:40,221][46990] Updated weights for policy 0, policy_version 36360 (0.0027) +[2024-06-10 21:52:43,239][46753] Fps is (10 sec: 40965.4, 60 sec: 43690.7, 300 sec: 43487.0). Total num frames: 595820544. Throughput: 0: 43348.4. Samples: 595991080. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 21:52:43,240][46753] Avg episode reward: [(0, '0.300')] +[2024-06-10 21:52:45,558][46990] Updated weights for policy 0, policy_version 36370 (0.0039) +[2024-06-10 21:52:47,750][46990] Updated weights for policy 0, policy_version 36380 (0.0040) +[2024-06-10 21:52:48,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43418.0, 300 sec: 43487.0). Total num frames: 596049920. Throughput: 0: 43284.9. Samples: 596122100. Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 21:52:48,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:52:53,070][46990] Updated weights for policy 0, policy_version 36390 (0.0025) +[2024-06-10 21:52:53,239][46753] Fps is (10 sec: 39321.6, 60 sec: 42873.1, 300 sec: 43154.4). Total num frames: 596213760. Throughput: 0: 43127.2. Samples: 596379520. 
Policy #0 lag: (min: 0.0, avg: 9.5, max: 23.0) +[2024-06-10 21:52:53,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:52:55,354][46990] Updated weights for policy 0, policy_version 36400 (0.0034) +[2024-06-10 21:52:58,239][46753] Fps is (10 sec: 42598.4, 60 sec: 43690.6, 300 sec: 43376.0). Total num frames: 596475904. Throughput: 0: 43104.8. Samples: 596633420. Policy #0 lag: (min: 0.0, avg: 8.7, max: 18.0) +[2024-06-10 21:52:58,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:53:00,504][46990] Updated weights for policy 0, policy_version 36410 (0.0029) +[2024-06-10 21:53:02,822][46990] Updated weights for policy 0, policy_version 36420 (0.0027) +[2024-06-10 21:53:03,239][46753] Fps is (10 sec: 50790.0, 60 sec: 43417.5, 300 sec: 43487.0). Total num frames: 596721664. Throughput: 0: 43442.0. Samples: 596778620. Policy #0 lag: (min: 0.0, avg: 8.7, max: 18.0) +[2024-06-10 21:53:03,240][46753] Avg episode reward: [(0, '0.297')] +[2024-06-10 21:53:07,954][46990] Updated weights for policy 0, policy_version 36430 (0.0037) +[2024-06-10 21:53:08,240][46753] Fps is (10 sec: 39321.1, 60 sec: 42871.5, 300 sec: 43153.8). Total num frames: 596869120. Throughput: 0: 43243.1. Samples: 597031140. Policy #0 lag: (min: 0.0, avg: 8.7, max: 18.0) +[2024-06-10 21:53:08,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:53:10,499][46990] Updated weights for policy 0, policy_version 36440 (0.0033) +[2024-06-10 21:53:13,239][46753] Fps is (10 sec: 40960.5, 60 sec: 43694.0, 300 sec: 43431.5). Total num frames: 597131264. Throughput: 0: 43376.5. Samples: 597290020. Policy #0 lag: (min: 0.0, avg: 8.7, max: 18.0) +[2024-06-10 21:53:13,240][46753] Avg episode reward: [(0, '0.303')] +[2024-06-10 21:53:15,635][46990] Updated weights for policy 0, policy_version 36450 (0.0033) +[2024-06-10 21:53:18,054][46990] Updated weights for policy 0, policy_version 36460 (0.0041) +[2024-06-10 21:53:18,239][46753] Fps is (10 sec: 49152.9, 60 sec: 43420.9, 300 sec: 43431.5). Total num frames: 597360640. Throughput: 0: 43492.9. Samples: 597430800. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 21:53:18,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:53:22,904][46990] Updated weights for policy 0, policy_version 36470 (0.0033) +[2024-06-10 21:53:23,240][46753] Fps is (10 sec: 39321.1, 60 sec: 43144.5, 300 sec: 43209.3). Total num frames: 597524480. Throughput: 0: 43387.0. Samples: 597685320. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 21:53:23,241][46753] Avg episode reward: [(0, '0.289')] +[2024-06-10 21:53:25,627][46990] Updated weights for policy 0, policy_version 36480 (0.0041) +[2024-06-10 21:53:26,651][46970] Signal inference workers to stop experience collection... (8950 times) +[2024-06-10 21:53:26,651][46970] Signal inference workers to resume experience collection... (8950 times) +[2024-06-10 21:53:26,691][46990] InferenceWorker_p0-w0: stopping experience collection (8950 times) +[2024-06-10 21:53:26,691][46990] InferenceWorker_p0-w0: resuming experience collection (8950 times) +[2024-06-10 21:53:28,239][46753] Fps is (10 sec: 44236.5, 60 sec: 43963.7, 300 sec: 43542.6). Total num frames: 597803008. Throughput: 0: 43294.7. Samples: 597939340. 
Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 21:53:28,240][46753] Avg episode reward: [(0, '0.290')] +[2024-06-10 21:53:30,691][46990] Updated weights for policy 0, policy_version 36490 (0.0033) +[2024-06-10 21:53:33,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43145.5, 300 sec: 43432.1). Total num frames: 597999616. Throughput: 0: 43514.6. Samples: 598080260. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 21:53:33,240][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:53:33,315][46990] Updated weights for policy 0, policy_version 36500 (0.0033) +[2024-06-10 21:53:38,059][46990] Updated weights for policy 0, policy_version 36510 (0.0037) +[2024-06-10 21:53:38,239][46753] Fps is (10 sec: 37683.2, 60 sec: 43417.6, 300 sec: 43264.9). Total num frames: 598179840. Throughput: 0: 43331.6. Samples: 598329440. Policy #0 lag: (min: 0.0, avg: 11.2, max: 22.0) +[2024-06-10 21:53:38,240][46753] Avg episode reward: [(0, '0.299')] +[2024-06-10 21:53:40,722][46990] Updated weights for policy 0, policy_version 36520 (0.0040) +[2024-06-10 21:53:43,239][46753] Fps is (10 sec: 45875.0, 60 sec: 43963.7, 300 sec: 43542.5). Total num frames: 598458368. Throughput: 0: 43499.5. Samples: 598590900. Policy #0 lag: (min: 0.0, avg: 13.0, max: 26.0) +[2024-06-10 21:53:43,240][46753] Avg episode reward: [(0, '0.292')] +[2024-06-10 21:53:45,614][46990] Updated weights for policy 0, policy_version 36530 (0.0029) +[2024-06-10 21:53:48,239][46753] Fps is (10 sec: 47513.4, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 598654976. Throughput: 0: 43164.0. Samples: 598721000. Policy #0 lag: (min: 0.0, avg: 13.0, max: 26.0) +[2024-06-10 21:53:48,241][46753] Avg episode reward: [(0, '0.294')] +[2024-06-10 21:53:48,370][46990] Updated weights for policy 0, policy_version 36540 (0.0032) +[2024-06-10 21:53:53,100][46990] Updated weights for policy 0, policy_version 36550 (0.0046) +[2024-06-10 21:53:53,239][46753] Fps is (10 sec: 37683.3, 60 sec: 43690.6, 300 sec: 43264.9). Total num frames: 598835200. Throughput: 0: 43272.1. Samples: 598978380. Policy #0 lag: (min: 0.0, avg: 13.0, max: 26.0) +[2024-06-10 21:53:53,240][46753] Avg episode reward: [(0, '0.286')] +[2024-06-10 21:53:55,972][46990] Updated weights for policy 0, policy_version 36560 (0.0032) +[2024-06-10 21:53:58,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43690.6, 300 sec: 43487.0). Total num frames: 599097344. Throughput: 0: 43069.3. Samples: 599228140. Policy #0 lag: (min: 0.0, avg: 13.0, max: 26.0) +[2024-06-10 21:53:58,240][46753] Avg episode reward: [(0, '0.296')] +[2024-06-10 21:54:00,756][46990] Updated weights for policy 0, policy_version 36570 (0.0028) +[2024-06-10 21:54:03,239][46753] Fps is (10 sec: 44236.6, 60 sec: 42598.4, 300 sec: 43320.4). Total num frames: 599277568. Throughput: 0: 43028.7. Samples: 599367100. Policy #0 lag: (min: 0.0, avg: 13.0, max: 26.0) +[2024-06-10 21:54:03,244][46753] Avg episode reward: [(0, '0.285')] +[2024-06-10 21:54:03,761][46990] Updated weights for policy 0, policy_version 36580 (0.0036) +[2024-06-10 21:54:08,244][46753] Fps is (10 sec: 37666.2, 60 sec: 43414.4, 300 sec: 43264.2). Total num frames: 599474176. Throughput: 0: 42826.9. Samples: 599612720. 
Policy #0 lag: (min: 0.0, avg: 13.0, max: 23.0)
+[2024-06-10 21:54:08,245][46753] Avg episode reward: [(0, '0.300')]
+[2024-06-10 21:54:08,410][46990] Updated weights for policy 0, policy_version 36590 (0.0043)
+[2024-06-10 21:54:11,309][46990] Updated weights for policy 0, policy_version 36600 (0.0032)
+[2024-06-10 21:54:13,239][46753] Fps is (10 sec: 45875.8, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 599736320. Throughput: 0: 42966.7. Samples: 599872840. Policy #0 lag: (min: 0.0, avg: 13.0, max: 23.0)
+[2024-06-10 21:54:13,240][46753] Avg episode reward: [(0, '0.293')]
+[2024-06-10 21:54:15,856][46990] Updated weights for policy 0, policy_version 36610 (0.0036)
+[2024-06-10 21:54:18,239][46753] Fps is (10 sec: 44256.8, 60 sec: 42598.3, 300 sec: 43264.9). Total num frames: 599916544. Throughput: 0: 42877.8. Samples: 600009760. Policy #0 lag: (min: 0.0, avg: 13.0, max: 23.0)
+[2024-06-10 21:54:18,240][46753] Avg episode reward: [(0, '0.290')]
+[2024-06-10 21:54:18,608][46970] Signal inference workers to stop experience collection... (9000 times)
+[2024-06-10 21:54:18,644][46990] InferenceWorker_p0-w0: stopping experience collection (9000 times)
+[2024-06-10 21:54:18,658][46970] Signal inference workers to resume experience collection... (9000 times)
+[2024-06-10 21:54:18,670][46990] InferenceWorker_p0-w0: resuming experience collection (9000 times)
+[2024-06-10 21:54:18,792][46990] Updated weights for policy 0, policy_version 36620 (0.0029)
+[2024-06-10 21:54:23,240][46753] Fps is (10 sec: 39321.0, 60 sec: 43417.6, 300 sec: 43264.9). Total num frames: 600129536. Throughput: 0: 43050.1. Samples: 600266700. Policy #0 lag: (min: 0.0, avg: 13.0, max: 23.0)
+[2024-06-10 21:54:23,240][46753] Avg episode reward: [(0, '0.293')]
+[2024-06-10 21:54:23,390][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000036630_600145920.pth...
+[2024-06-10 21:54:23,397][46990] Updated weights for policy 0, policy_version 36630 (0.0029)
+[2024-06-10 21:54:23,440][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000035995_589742080.pth
+[2024-06-10 21:54:26,274][46990] Updated weights for policy 0, policy_version 36640 (0.0036)
+[2024-06-10 21:54:28,239][46753] Fps is (10 sec: 47513.9, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 600391680. Throughput: 0: 43002.3. Samples: 600526000. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0)
+[2024-06-10 21:54:28,240][46753] Avg episode reward: [(0, '0.298')]
+[2024-06-10 21:54:30,695][46990] Updated weights for policy 0, policy_version 36650 (0.0024)
+[2024-06-10 21:54:33,239][46753] Fps is (10 sec: 44237.3, 60 sec: 42871.5, 300 sec: 43320.4). Total num frames: 600571904. Throughput: 0: 43030.7. Samples: 600657380. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0)
+[2024-06-10 21:54:33,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:54:33,996][46990] Updated weights for policy 0, policy_version 36660 (0.0029)
+[2024-06-10 21:54:38,199][46990] Updated weights for policy 0, policy_version 36670 (0.0046)
+[2024-06-10 21:54:38,240][46753] Fps is (10 sec: 40959.2, 60 sec: 43690.6, 300 sec: 43320.4). Total num frames: 600801280. Throughput: 0: 42979.0. Samples: 600912440. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0)
+[2024-06-10 21:54:38,240][46753] Avg episode reward: [(0, '0.281')]
+[2024-06-10 21:54:41,471][46990] Updated weights for policy 0, policy_version 36680 (0.0038)
+[2024-06-10 21:54:43,239][46753] Fps is (10 sec: 47513.7, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 601047040. Throughput: 0: 43165.8. Samples: 601170600. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0)
+[2024-06-10 21:54:43,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:54:45,575][46990] Updated weights for policy 0, policy_version 36690 (0.0037)
+[2024-06-10 21:54:48,239][46753] Fps is (10 sec: 40960.9, 60 sec: 42598.5, 300 sec: 43264.9). Total num frames: 601210880. Throughput: 0: 43084.2. Samples: 601305880. Policy #0 lag: (min: 0.0, avg: 12.4, max: 23.0)
+[2024-06-10 21:54:48,240][46753] Avg episode reward: [(0, '0.286')]
+[2024-06-10 21:54:49,193][46990] Updated weights for policy 0, policy_version 36700 (0.0029)
+[2024-06-10 21:54:53,239][46753] Fps is (10 sec: 39321.7, 60 sec: 43417.7, 300 sec: 43264.9). Total num frames: 601440256. Throughput: 0: 43270.6. Samples: 601559700. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0)
+[2024-06-10 21:54:53,240][46753] Avg episode reward: [(0, '0.298')]
+[2024-06-10 21:54:53,247][46990] Updated weights for policy 0, policy_version 36710 (0.0032)
+[2024-06-10 21:54:56,922][46990] Updated weights for policy 0, policy_version 36720 (0.0051)
+[2024-06-10 21:54:58,239][46753] Fps is (10 sec: 47513.3, 60 sec: 43144.6, 300 sec: 43431.5). Total num frames: 601686016. Throughput: 0: 43288.9. Samples: 601820840. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0)
+[2024-06-10 21:54:58,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:55:00,598][46990] Updated weights for policy 0, policy_version 36730 (0.0025)
+[2024-06-10 21:55:03,240][46753] Fps is (10 sec: 39321.0, 60 sec: 42598.4, 300 sec: 43153.8). Total num frames: 601833472. Throughput: 0: 43233.7. Samples: 601955280. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0)
+[2024-06-10 21:55:03,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:55:04,211][46990] Updated weights for policy 0, policy_version 36740 (0.0025)
+[2024-06-10 21:55:08,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43694.0, 300 sec: 43320.4). Total num frames: 602095616. Throughput: 0: 43311.3. Samples: 602215700. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0)
+[2024-06-10 21:55:08,240][46753] Avg episode reward: [(0, '0.285')]
+[2024-06-10 21:55:08,270][46990] Updated weights for policy 0, policy_version 36750 (0.0027)
+[2024-06-10 21:55:11,668][46990] Updated weights for policy 0, policy_version 36760 (0.0042)
+[2024-06-10 21:55:12,424][46970] Signal inference workers to stop experience collection... (9050 times)
+[2024-06-10 21:55:12,425][46970] Signal inference workers to resume experience collection... (9050 times)
+[2024-06-10 21:55:12,437][46990] InferenceWorker_p0-w0: stopping experience collection (9050 times)
+[2024-06-10 21:55:12,437][46990] InferenceWorker_p0-w0: resuming experience collection (9050 times)
+[2024-06-10 21:55:13,239][46753] Fps is (10 sec: 52429.2, 60 sec: 43690.6, 300 sec: 43431.5). Total num frames: 602357760. Throughput: 0: 43245.7. Samples: 602472060. Policy #0 lag: (min: 0.0, avg: 11.5, max: 21.0)
+[2024-06-10 21:55:13,240][46753] Avg episode reward: [(0, '0.285')]
+[2024-06-10 21:55:15,753][46990] Updated weights for policy 0, policy_version 36770 (0.0036)
+[2024-06-10 21:55:18,240][46753] Fps is (10 sec: 42597.3, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 602521600. Throughput: 0: 43247.4. Samples: 602603520. Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0)
+[2024-06-10 21:55:18,240][46753] Avg episode reward: [(0, '0.285')]
+[2024-06-10 21:55:19,335][46990] Updated weights for policy 0, policy_version 36780 (0.0039)
+[2024-06-10 21:55:23,240][46753] Fps is (10 sec: 39321.5, 60 sec: 43690.7, 300 sec: 43376.6). Total num frames: 602750976. Throughput: 0: 43415.6. Samples: 602866140. Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0)
+[2024-06-10 21:55:23,240][46753] Avg episode reward: [(0, '0.298')]
+[2024-06-10 21:55:23,428][46990] Updated weights for policy 0, policy_version 36790 (0.0027)
+[2024-06-10 21:55:26,877][46990] Updated weights for policy 0, policy_version 36800 (0.0045)
+[2024-06-10 21:55:28,239][46753] Fps is (10 sec: 45876.2, 60 sec: 43144.5, 300 sec: 43375.9). Total num frames: 602980352. Throughput: 0: 43432.5. Samples: 603125060. Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0)
+[2024-06-10 21:55:28,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:55:30,801][46990] Updated weights for policy 0, policy_version 36810 (0.0029)
+[2024-06-10 21:55:33,240][46753] Fps is (10 sec: 39321.4, 60 sec: 42871.4, 300 sec: 43153.8). Total num frames: 603144192. Throughput: 0: 43350.9. Samples: 603256680. Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0)
+[2024-06-10 21:55:33,240][46753] Avg episode reward: [(0, '0.295')]
+[2024-06-10 21:55:34,332][46990] Updated weights for policy 0, policy_version 36820 (0.0043)
+[2024-06-10 21:55:38,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43144.6, 300 sec: 43320.4). Total num frames: 603389952. Throughput: 0: 43362.2. Samples: 603511000. Policy #0 lag: (min: 1.0, avg: 11.4, max: 22.0)
+[2024-06-10 21:55:38,240][46753] Avg episode reward: [(0, '0.301')]
+[2024-06-10 21:55:38,493][46990] Updated weights for policy 0, policy_version 36830 (0.0044)
+[2024-06-10 21:55:42,161][46990] Updated weights for policy 0, policy_version 36840 (0.0034)
+[2024-06-10 21:55:43,239][46753] Fps is (10 sec: 49152.6, 60 sec: 43144.5, 300 sec: 43375.9). Total num frames: 603635712. Throughput: 0: 43298.2. Samples: 603769260. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0)
+[2024-06-10 21:55:43,240][46753] Avg episode reward: [(0, '0.290')]
+[2024-06-10 21:55:46,350][46990] Updated weights for policy 0, policy_version 36850 (0.0034)
+[2024-06-10 21:55:48,239][46753] Fps is (10 sec: 40960.1, 60 sec: 43144.5, 300 sec: 43209.3). Total num frames: 603799552. Throughput: 0: 43222.8. Samples: 603900300. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0)
+[2024-06-10 21:55:48,240][46753] Avg episode reward: [(0, '0.306')]
+[2024-06-10 21:55:49,707][46990] Updated weights for policy 0, policy_version 36860 (0.0037)
+[2024-06-10 21:55:53,240][46753] Fps is (10 sec: 40959.6, 60 sec: 43417.5, 300 sec: 43320.4). Total num frames: 604045312. Throughput: 0: 43284.7. Samples: 604163520. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0)
+[2024-06-10 21:55:53,240][46753] Avg episode reward: [(0, '0.292')]
+[2024-06-10 21:55:53,619][46990] Updated weights for policy 0, policy_version 36870 (0.0032)
+[2024-06-10 21:55:57,402][46990] Updated weights for policy 0, policy_version 36880 (0.0036)
+[2024-06-10 21:55:58,239][46753] Fps is (10 sec: 47513.2, 60 sec: 43144.5, 300 sec: 43320.4). Total num frames: 604274688. Throughput: 0: 43282.7. Samples: 604419780. Policy #0 lag: (min: 0.0, avg: 10.1, max: 20.0)
+[2024-06-10 21:55:58,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:56:01,240][46990] Updated weights for policy 0, policy_version 36890 (0.0046)
+[2024-06-10 21:56:03,244][46753] Fps is (10 sec: 40942.1, 60 sec: 43687.5, 300 sec: 43208.7). Total num frames: 604454912. Throughput: 0: 43294.1. Samples: 604551940. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0)
+[2024-06-10 21:56:03,245][46753] Avg episode reward: [(0, '0.302')]
+[2024-06-10 21:56:04,937][46990] Updated weights for policy 0, policy_version 36900 (0.0040)
+[2024-06-10 21:56:08,239][46753] Fps is (10 sec: 42598.2, 60 sec: 43417.5, 300 sec: 43431.5). Total num frames: 604700672. Throughput: 0: 43224.9. Samples: 604811260. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0)
+[2024-06-10 21:56:08,240][46753] Avg episode reward: [(0, '0.293')]
+[2024-06-10 21:56:08,824][46990] Updated weights for policy 0, policy_version 36910 (0.0044)
+[2024-06-10 21:56:12,326][46990] Updated weights for policy 0, policy_version 36920 (0.0042)
+[2024-06-10 21:56:13,239][46753] Fps is (10 sec: 47535.2, 60 sec: 42871.6, 300 sec: 43320.4). Total num frames: 604930048. Throughput: 0: 43219.6. Samples: 605069940. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0)
+[2024-06-10 21:56:13,240][46753] Avg episode reward: [(0, '0.294')]
+[2024-06-10 21:56:15,482][46970] Signal inference workers to stop experience collection... (9100 times)
+[2024-06-10 21:56:15,535][46970] Signal inference workers to resume experience collection... (9100 times)
+[2024-06-10 21:56:15,532][46990] InferenceWorker_p0-w0: stopping experience collection (9100 times)
+[2024-06-10 21:56:15,553][46990] InferenceWorker_p0-w0: resuming experience collection (9100 times)
+[2024-06-10 21:56:16,609][46990] Updated weights for policy 0, policy_version 36930 (0.0039)
+[2024-06-10 21:56:18,239][46753] Fps is (10 sec: 39321.9, 60 sec: 42871.6, 300 sec: 43153.8). Total num frames: 605093888. Throughput: 0: 43187.3. Samples: 605200100. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0)
+[2024-06-10 21:56:18,240][46753] Avg episode reward: [(0, '0.299')]
+[2024-06-10 21:56:20,142][46990] Updated weights for policy 0, policy_version 36940 (0.0037)
+[2024-06-10 21:56:23,239][46753] Fps is (10 sec: 42598.0, 60 sec: 43417.6, 300 sec: 43431.5). Total num frames: 605356032. Throughput: 0: 43284.9. Samples: 605458820. Policy #0 lag: (min: 0.0, avg: 8.0, max: 21.0)
+[2024-06-10 21:56:23,240][46753] Avg episode reward: [(0, '0.295')]
+[2024-06-10 21:56:23,252][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000036948_605356032.pth...
+[2024-06-10 21:56:23,294][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000036312_594935808.pth
+[2024-06-10 21:56:23,877][46990] Updated weights for policy 0, policy_version 36950 (0.0035)
+[2024-06-10 21:56:27,680][46990] Updated weights for policy 0, policy_version 36960 (0.0043)
+[2024-06-10 21:56:28,244][46753] Fps is (10 sec: 47492.1, 60 sec: 43141.3, 300 sec: 43319.7). Total num frames: 605569024. Throughput: 0: 43354.3. Samples: 605720400. Policy #0 lag: (min: 0.0, avg: 9.9, max: 23.0)
+[2024-06-10 21:56:28,245][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:56:31,271][46990] Updated weights for policy 0, policy_version 36970 (0.0032)
+[2024-06-10 21:56:33,239][46753] Fps is (10 sec: 37683.5, 60 sec: 43144.7, 300 sec: 43153.8). Total num frames: 605732864. Throughput: 0: 43358.7. Samples: 605851440. Policy #0 lag: (min: 0.0, avg: 9.9, max: 23.0)
+[2024-06-10 21:56:33,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:56:35,414][46990] Updated weights for policy 0, policy_version 36980 (0.0027)
+[2024-06-10 21:56:38,239][46753] Fps is (10 sec: 42618.0, 60 sec: 43417.7, 300 sec: 43376.0). Total num frames: 605995008. Throughput: 0: 43211.8. Samples: 606108040. Policy #0 lag: (min: 0.0, avg: 9.9, max: 23.0)
+[2024-06-10 21:56:38,240][46753] Avg episode reward: [(0, '0.291')]
+[2024-06-10 21:56:39,011][46990] Updated weights for policy 0, policy_version 36990 (0.0033)
+[2024-06-10 21:56:42,855][46990] Updated weights for policy 0, policy_version 37000 (0.0027)
+[2024-06-10 21:56:43,239][46753] Fps is (10 sec: 49151.7, 60 sec: 43144.5, 300 sec: 43320.5). Total num frames: 606224384. Throughput: 0: 43300.5. Samples: 606368300. Policy #0 lag: (min: 0.0, avg: 9.9, max: 23.0)
+[2024-06-10 21:56:43,248][46753] Avg episode reward: [(0, '0.298')]
+[2024-06-10 21:56:46,478][46990] Updated weights for policy 0, policy_version 37010 (0.0036)
+[2024-06-10 21:56:48,239][46753] Fps is (10 sec: 40959.5, 60 sec: 43417.5, 300 sec: 43265.2). Total num frames: 606404608. Throughput: 0: 43367.0. Samples: 606503260. Policy #0 lag: (min: 0.0, avg: 9.9, max: 23.0)
+[2024-06-10 21:56:48,240][46753] Avg episode reward: [(0, '0.288')]
+[2024-06-10 21:56:50,295][46990] Updated weights for policy 0, policy_version 37020 (0.0035)
+[2024-06-10 21:56:53,239][46753] Fps is (10 sec: 42598.3, 60 sec: 43417.6, 300 sec: 43375.9). Total num frames: 606650368. Throughput: 0: 43293.8. Samples: 606759480. Policy #0 lag: (min: 0.0, avg: 11.2, max: 20.0)
+[2024-06-10 21:56:53,248][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:56:53,945][46990] Updated weights for policy 0, policy_version 37030 (0.0048)
+[2024-06-10 21:56:58,082][46990] Updated weights for policy 0, policy_version 37040 (0.0036)
+[2024-06-10 21:56:58,239][46753] Fps is (10 sec: 47513.6, 60 sec: 43417.6, 300 sec: 43264.9). Total num frames: 606879744. Throughput: 0: 43515.9. Samples: 607028160. Policy #0 lag: (min: 0.0, avg: 11.2, max: 20.0)
+[2024-06-10 21:56:58,240][46753] Avg episode reward: [(0, '0.293')]
+[2024-06-10 21:57:01,405][46990] Updated weights for policy 0, policy_version 37050 (0.0034)
+[2024-06-10 21:57:03,239][46753] Fps is (10 sec: 40959.8, 60 sec: 43420.8, 300 sec: 43264.9). Total num frames: 607059968. Throughput: 0: 43547.0. Samples: 607159720. Policy #0 lag: (min: 0.0, avg: 11.2, max: 20.0)
+[2024-06-10 21:57:03,240][46753] Avg episode reward: [(0, '0.293')]
+[2024-06-10 21:57:05,529][46990] Updated weights for policy 0, policy_version 37060 (0.0038)
+[2024-06-10 21:57:08,240][46753] Fps is (10 sec: 44234.7, 60 sec: 43690.4, 300 sec: 43432.1). Total num frames: 607322112. Throughput: 0: 43553.3. Samples: 607418740. Policy #0 lag: (min: 0.0, avg: 11.2, max: 20.0)
+[2024-06-10 21:57:08,241][46753] Avg episode reward: [(0, '0.285')]
+[2024-06-10 21:57:08,967][46990] Updated weights for policy 0, policy_version 37070 (0.0028)
+[2024-06-10 21:57:13,113][46990] Updated weights for policy 0, policy_version 37080 (0.0033)
+[2024-06-10 21:57:13,239][46753] Fps is (10 sec: 45875.5, 60 sec: 43144.5, 300 sec: 43265.5). Total num frames: 607518720. Throughput: 0: 43629.7. Samples: 607683540. Policy #0 lag: (min: 0.0, avg: 11.2, max: 20.0)
+[2024-06-10 21:57:13,242][46753] Avg episode reward: [(0, '0.295')]
+[2024-06-10 21:57:16,596][46990] Updated weights for policy 0, policy_version 37090 (0.0028)
+[2024-06-10 21:57:18,240][46753] Fps is (10 sec: 39321.7, 60 sec: 43690.3, 300 sec: 43320.3). Total num frames: 607715328. Throughput: 0: 43484.8. Samples: 607808280. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:57:18,241][46753] Avg episode reward: [(0, '0.305')]
+[2024-06-10 21:57:20,619][46990] Updated weights for policy 0, policy_version 37100 (0.0029)
+[2024-06-10 21:57:23,239][46753] Fps is (10 sec: 44236.9, 60 sec: 43417.6, 300 sec: 43375.9). Total num frames: 607961088. Throughput: 0: 43582.6. Samples: 608069260. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:57:23,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:57:24,115][46990] Updated weights for policy 0, policy_version 37110 (0.0034)
+[2024-06-10 21:57:28,117][46990] Updated weights for policy 0, policy_version 37120 (0.0029)
+[2024-06-10 21:57:28,240][46753] Fps is (10 sec: 45876.9, 60 sec: 43420.8, 300 sec: 43265.0). Total num frames: 608174080. Throughput: 0: 43791.0. Samples: 608338900. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:57:28,240][46753] Avg episode reward: [(0, '0.291')]
+[2024-06-10 21:57:31,426][46990] Updated weights for policy 0, policy_version 37130 (0.0036)
+[2024-06-10 21:57:33,239][46753] Fps is (10 sec: 42598.6, 60 sec: 44236.8, 300 sec: 43431.5). Total num frames: 608387072. Throughput: 0: 43537.4. Samples: 608462440. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:57:33,240][46753] Avg episode reward: [(0, '0.293')]
+[2024-06-10 21:57:35,992][46990] Updated weights for policy 0, policy_version 37140 (0.0040)
+[2024-06-10 21:57:38,239][46753] Fps is (10 sec: 44237.2, 60 sec: 43690.6, 300 sec: 43375.9). Total num frames: 608616448. Throughput: 0: 43543.1. Samples: 608718920. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:57:38,240][46753] Avg episode reward: [(0, '0.294')]
+[2024-06-10 21:57:39,439][46990] Updated weights for policy 0, policy_version 37150 (0.0041)
+[2024-06-10 21:57:43,239][46753] Fps is (10 sec: 42598.5, 60 sec: 43144.6, 300 sec: 43264.9). Total num frames: 608813056. Throughput: 0: 43366.7. Samples: 608979660. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0)
+[2024-06-10 21:57:43,240][46753] Avg episode reward: [(0, '0.291')]
+[2024-06-10 21:57:43,444][46990] Updated weights for policy 0, policy_version 37160 (0.0053)
+[2024-06-10 21:57:44,092][46970] Signal inference workers to stop experience collection... (9150 times)
+[2024-06-10 21:57:44,092][46970] Signal inference workers to resume experience collection... (9150 times)
+[2024-06-10 21:57:44,143][46990] InferenceWorker_p0-w0: stopping experience collection (9150 times)
+[2024-06-10 21:57:44,144][46990] InferenceWorker_p0-w0: resuming experience collection (9150 times)
+[2024-06-10 21:57:46,785][46990] Updated weights for policy 0, policy_version 37170 (0.0033)
+[2024-06-10 21:57:48,239][46753] Fps is (10 sec: 40960.2, 60 sec: 43690.7, 300 sec: 43431.5). Total num frames: 609026048. Throughput: 0: 43242.8. Samples: 609105640. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0)
+[2024-06-10 21:57:48,240][46753] Avg episode reward: [(0, '0.296')]
+[2024-06-10 21:57:50,919][46990] Updated weights for policy 0, policy_version 37180 (0.0033)
+[2024-06-10 21:57:53,239][46753] Fps is (10 sec: 45874.8, 60 sec: 43690.7, 300 sec: 43375.9). Total num frames: 609271808. Throughput: 0: 43380.0. Samples: 609370820. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0)
+[2024-06-10 21:57:53,240][46753] Avg episode reward: [(0, '0.293')]
+[2024-06-10 21:57:54,397][46990] Updated weights for policy 0, policy_version 37190 (0.0031)
+[2024-06-10 21:57:58,239][46753] Fps is (10 sec: 42598.8, 60 sec: 42871.6, 300 sec: 43153.8). Total num frames: 609452032. Throughput: 0: 43424.6. Samples: 609637640. Policy #0 lag: (min: 0.0, avg: 11.8, max: 21.0)
+[2024-06-10 21:57:58,240][46753] Avg episode reward: [(0, '0.298')]
+[2024-06-10 21:57:58,530][46990] Updated weights for policy 0, policy_version 37200 (0.0031)
+[2024-06-10 21:58:01,899][46990] Updated weights for policy 0, policy_version 37210 (0.0024)
+[2024-06-10 21:58:03,239][46753] Fps is (10 sec: 42598.7, 60 sec: 43963.8, 300 sec: 43487.0). Total num frames: 609697792. Throughput: 0: 43352.9. Samples: 609759140. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:58:03,240][46753] Avg episode reward: [(0, '0.302')]
+[2024-06-10 21:58:06,264][46990] Updated weights for policy 0, policy_version 37220 (0.0027)
+[2024-06-10 21:58:08,240][46753] Fps is (10 sec: 49150.7, 60 sec: 43690.9, 300 sec: 43431.5). Total num frames: 609943552. Throughput: 0: 43481.6. Samples: 610025940. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:58:08,240][46753] Avg episode reward: [(0, '0.298')]
+[2024-06-10 21:58:09,542][46990] Updated weights for policy 0, policy_version 37230 (0.0040)
+[2024-06-10 21:58:13,239][46753] Fps is (10 sec: 40960.0, 60 sec: 43144.6, 300 sec: 43209.3). Total num frames: 610107392. Throughput: 0: 43371.7. Samples: 610290620. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:58:13,240][46753] Avg episode reward: [(0, '0.292')]
+[2024-06-10 21:58:13,779][46990] Updated weights for policy 0, policy_version 37240 (0.0042)
+[2024-06-10 21:58:17,066][46990] Updated weights for policy 0, policy_version 37250 (0.0033)
+[2024-06-10 21:58:18,240][46753] Fps is (10 sec: 40960.2, 60 sec: 43964.0, 300 sec: 43487.0). Total num frames: 610353152. Throughput: 0: 43414.5. Samples: 610416100. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:58:18,240][46753] Avg episode reward: [(0, '0.292')]
+[2024-06-10 21:58:21,162][46990] Updated weights for policy 0, policy_version 37260 (0.0029)
+[2024-06-10 21:58:23,239][46753] Fps is (10 sec: 45875.4, 60 sec: 43417.7, 300 sec: 43264.9). Total num frames: 610566144. Throughput: 0: 43583.7. Samples: 610680180. Policy #0 lag: (min: 0.0, avg: 10.4, max: 21.0)
+[2024-06-10 21:58:23,240][46753] Avg episode reward: [(0, '0.298')]
+[2024-06-10 21:58:23,246][46970] Saving /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000037267_610582528.pth...
+[2024-06-10 21:58:23,300][46970] Removing /workspace/metta/train_dir/p2.metta.k.5_8/checkpoint_p0/checkpoint_000036630_600145920.pth
+[2024-06-10 21:58:24,600][46990] Updated weights for policy 0, policy_version 37270 (0.0029)