diff --git a/.gitattributes b/.gitattributes index 8b8986bd98e55db5da6185c93db8f201bfc95a86..a29444a9b7d6cf6c021a5167a0d9d4b4b78fd1ae 100644 --- a/.gitattributes +++ b/.gitattributes @@ -114,3 +114,8 @@ lm-evaluation-harness/wandb/run-20240608_134753-jc54oi3l/run-jc54oi3l.wandb filt lm-evaluation-harness/wandb/run-20240605_075207-j37ty43h/run-j37ty43h.wandb filter=lfs diff=lfs merge=lfs -text lm-evaluation-harness/wandb/run-20240608_111026-9apxn9eo/run-9apxn9eo.wandb filter=lfs diff=lfs merge=lfs -text lm-evaluation-harness/wandb/run-20240608_190333-82mnef5m/run-82mnef5m.wandb filter=lfs diff=lfs merge=lfs -text +lm-evaluation-harness/wandb/run-20240606_072817-wvw4h9c3/run-wvw4h9c3.wandb filter=lfs diff=lfs merge=lfs -text +lm-evaluation-harness/wandb/run-20240608_174352-w0xyfhqg/run-w0xyfhqg.wandb filter=lfs diff=lfs merge=lfs -text +lm-evaluation-harness/wandb/run-20240606_061005-a998jkug/run-a998jkug.wandb filter=lfs diff=lfs merge=lfs -text +lm-evaluation-harness/wandb/run-20240606_033253-tgejm7dv/run-tgejm7dv.wandb filter=lfs diff=lfs merge=lfs -text +lm-evaluation-harness/wandb/run-20240608_162527-0vroecl5/run-0vroecl5.wandb filter=lfs diff=lfs merge=lfs -text diff --git a/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..54087366d9f50dfecad091ebdec34629a6da5b12 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug-internal.log @@ -0,0 +1,19 @@ +2024-05-14 11:43:12,916 INFO StreamThr :80527 [internal.py:wandb_internal():85] W&B internal server running at pid: 80527, started at: 2024-05-14 11:43:12.915735 +2024-05-14 11:43:12,918 DEBUG HandlerThread:80527 [handler.py:handle_request():158] handle_request: status +2024-05-14 11:43:12,921 INFO WriterThread:80527 [datastore.py:open_for_write():87] open: 
/data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/run-tau13xz6.wandb +2024-05-14 11:43:12,921 DEBUG SenderThread:80527 [sender.py:send():378] send: header +2024-05-14 11:43:12,932 DEBUG SenderThread:80527 [sender.py:send():378] send: run +2024-05-14 11:43:13,188 INFO SenderThread:80527 [dir_watcher.py:__init__():211] watching files in: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/files +2024-05-14 11:43:13,188 INFO SenderThread:80527 [sender.py:_start_run_threads():1123] run started: tau13xz6 with start time 1715686992.91664 +2024-05-14 11:43:13,919 WARNING StreamThr :80527 [internal.py:is_dead():413] Internal process exiting, parent pid 79483 disappeared +2024-05-14 11:43:13,919 ERROR StreamThr :80527 [internal.py:wandb_internal():151] Internal process shutdown. +2024-05-14 11:43:13,932 INFO HandlerThread:80527 [handler.py:finish():882] shutting down handler +2024-05-14 11:43:13,933 INFO WriterThread:80527 [datastore.py:close():296] close: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/run-tau13xz6.wandb +2024-05-14 11:43:14,188 INFO SenderThread:80527 [sender.py:finish():1545] shutting down sender +2024-05-14 11:43:14,188 INFO SenderThread:80527 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-14 11:43:15,189 INFO SenderThread:80527 [dir_watcher.py:finish():388] scan: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/files +2024-05-14 11:43:15,189 INFO SenderThread:80527 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/files/config.yaml config.yaml +2024-05-14 11:43:15,189 INFO SenderThread:80527 [file_pusher.py:finish():169] shutting down file pusher +2024-05-14 11:43:15,189 INFO SenderThread:80527 [file_pusher.py:join():175] waiting for file pusher +2024-05-14 11:43:15,682 INFO wandb-upload_0:80527 [upload_job.py:push():130] Uploaded file 
/data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/files/config.yaml +2024-05-14 11:43:15,918 INFO MainThread:80527 [internal.py:handle_exit():75] Internal process exited diff --git a/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug.log b/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..ae5e2ecdd14fc5adfb1a64743f684e4d6eac77b1 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug.log @@ -0,0 +1,20 @@ +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Configure stats pid to 79483 +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Loading settings from /data/cronscript/lm-evaluation-harness/wandb/settings +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-14 11:43:12,911 WARNING MainThread:79483 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-14 11:43:12,911 INFO MainThread:79483 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-14 11:43:12,912 INFO MainThread:79483 [wandb_init.py:_log_setup():520] Logging user logs to /data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug.log +2024-05-14 11:43:12,912 INFO MainThread:79483 [wandb_init.py:_log_setup():521] 
Logging internal logs to /data/cronscript/lm-evaluation-harness/wandb/run-20240514_114312-tau13xz6/logs/debug-internal.log +2024-05-14 11:43:12,912 INFO MainThread:79483 [wandb_init.py:init():560] calling init triggers +2024-05-14 11:43:12,912 INFO MainThread:79483 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-14 11:43:12,912 INFO MainThread:79483 [wandb_init.py:init():610] starting backend +2024-05-14 11:43:12,912 INFO MainThread:79483 [wandb_init.py:init():614] setting up manager +2024-05-14 11:43:12,914 INFO MainThread:79483 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-14 11:43:12,915 INFO MainThread:79483 [wandb_init.py:init():622] backend started and connected +2024-05-14 11:43:12,920 INFO MainThread:79483 [wandb_init.py:init():711] updated telemetry +2024-05-14 11:43:12,932 INFO MainThread:79483 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/config.yaml b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f7dabc891edb6d0c65a5c4671e8f6d6510d0fbed --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/config.yaml @@ -0,0 +1,43 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.40.2 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1715704464 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 23 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.40.2 + 8: + - 5 + 13: linux-x86_64 diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log 
b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..b1a1ad4c117d8b97aacaf0c52bd638f838d730d6 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log @@ -0,0 +1,28 @@ + +2024-05-14:16:34:25,220 INFO [__main__.py:251] Verbosity set to INFO +2024-05-14:16:34:30,618 INFO [__main__.py:335] Selected Tasks: ['indiccopa-hi'] +2024-05-14:16:34:30,621 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-14:16:34:30,621 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/data/cronscript/ckpts//hf_ckpt//global_step100'} +/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/core/register.py:145: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) + return func(*args, **kwargs) +/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/torch/cuda/memory.py:36: UserWarning: No need to call empty_cache on HPU. It manages the memory internally in an effcient way. + warnings.warn( +/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. + warnings.warn( +You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. 
This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 +[2024-05-14 16:34:41,212] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +2024-05-14:16:34:41,618 WARNING [task.py:763] [Task: indiccopa-hi] metric acc is defined, but aggregation is not. using default aggregation=mean +2024-05-14:16:34:41,618 WARNING [task.py:775] [Task: indiccopa-hi] metric acc is defined, but higher_is_better is not. using default higher_is_better=True +/usr/local/lib/python3.10/dist-packages/datasets/load.py:1486: FutureWarning: The repository for ai4bharat/IndicCOPA contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/ai4bharat/IndicCOPA +You can avoid this message in future by passing the argument `trust_remote_code=True`. +Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`. + warnings.warn( +Passed argument batch_size = auto:1. Detecting largest batch size +2024-05-14:16:34:42,912 WARNING [task.py:322] [Task: indiccopa-hi] has_training_docs and has_validation_docs are False, using test_docs as fewshot_docs but this is not recommended. +2024-05-14:16:34:42,913 WARNING [task.py:322] [Task: indiccopa-hi] has_training_docs and has_validation_docs are False, using test_docs as fewshot_docs but this is not recommended. +2024-05-14:16:34:42,933 INFO [task.py:395] Building contexts for indiccopa-hi on rank 7... 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 56/56 [00:00<00:00, 104113.93it/s] +2024-05-14:16:34:44,718 INFO [evaluator.py:379] Running loglikelihood requests +/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/torch/cuda/memory.py:36: UserWarning: No need to call empty_cache on HPU. It manages the memory internally in an effcient way. + warnings.warn( +Determined largest batch size: 64 \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/requirements.txt b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..d63edd781bd5bdbb7f67523ac1ba9f0f1ed392dc --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/requirements.txt @@ -0,0 +1,163 @@ +DataProperty==1.0.1 +GitPython==3.1.43 +Jinja2==3.1.3 +Markdown==3.6 +MarkupSafe==2.1.5 +Pillow-SIMD==7.0.0.post3 +PyYAML==6.0 +Werkzeug==3.0.2 +absl-py==2.1.0 +accelerate==0.30.1 +aiohttp==3.9.4 +aiosignal==1.3.1 +antlr4-python3-runtime==4.9.3 +anyio==4.3.0 +async-timeout==4.0.3 +attrs==23.2.0 +av==9.2.0 +cachetools==5.3.3 +certifi==2024.2.2 +cffi==1.15.1 +cfgv==3.4.0 +chardet==5.2.0 +charset-normalizer==3.3.2 +click==8.1.7 +cmake==3.29.2 +colorama==0.4.6 +datasets==2.19.1 +deepspeed==0.12.4+hpu.synapse.v1.15.1 +dill==0.3.8 +distlib==0.3.8 +distro==1.9.0 +docker-pycreds==0.4.0 +einops==0.8.0 +evaluate==0.4.2 +exceptiongroup==1.2.0 +expecttest==0.2.1 +filelock==3.13.4 +frozenlist==1.4.1 +fsspec==2024.3.1 +gitdb==4.0.11 +google-auth-oauthlib==0.4.6 +google-auth==2.29.0 +grpcio==1.62.1 +h11==0.14.0 +habana-media-loader==1.15.1.15 +habana-pyhlml==1.15.1.15 +habana-torch-dataloader==1.15.1.15 +habana-torch-plugin==1.15.1.15 +habana_gpu_migration==1.15.1.15 +habana_quantization_toolkit==1.15.1.15 +hjson==3.1.0 +httpcore==1.0.5 
+httpx==0.27.0 +huggingface-hub==0.23.0 +identify==2.5.35 +idna==3.7 +importlib_resources==6.4.0 +iniconfig==2.0.0 +joblib==1.4.2 +jsonlines==4.0.0 +lightning-habana==1.4.0 +lightning-utilities==0.11.2 +lightning==2.2.0.post0 +lm_eval==0.3.0 +lm_eval==0.4.2 +lm_eval==0.4.2 +lm_eval==0.4.2 +mbstrdecoder==1.1.3 +more-itertools==10.2.0 +mpi4py==3.1.4 +mpmath==1.3.0 +multidict==6.0.5 +multiprocess==0.70.16 +networkx==3.3 +ninja==1.11.1.1 +nltk==3.8.1 +nodeenv==1.8.0 +numexpr==2.10.0 +numpy==1.23.5 +oauthlib==3.2.2 +omegaconf==2.3.0 +openai==1.29.0 +packaging==24.0 +pandas==2.0.1 +pathspec==0.12.1 +pathvalidate==3.2.0 +peft==0.10.0 +perfetto==0.7.0 +pip==22.0.2 +pip==23.3.1 +platformdirs==4.2.0 +pluggy==1.4.0 +portalocker==2.8.2 +pre-commit==3.3.3 +protobuf==3.20.3 +psutil==5.9.8 +py-cpuinfo==9.0.0 +pyarrow-hotfix==0.6 +pyarrow==16.0.0 +pyasn1==0.6.0 +pyasn1_modules==0.4.0 +pybind11==2.10.4 +pycountry==23.12.11 +pycparser==2.22 +pydantic==1.10.13 +pynvml==8.0.4 +pytablewriter==1.2.0 +pytest==8.1.1 +python-dateutil==2.9.0.post0 +pytorch-lightning==2.2.2 +pytz==2024.1 +regex==2023.5.5 +requests-oauthlib==2.0.0 +requests==2.31.0 +rouge_score==0.1.2 +rsa==4.9 +sacrebleu==1.5.0 +safetensors==0.4.3 +scikit-learn==1.4.2 +scipy==1.13.0 +sentencepiece==0.2.0 +sentry-sdk==2.1.1 +setproctitle==1.3.3 +setuptools==59.6.0 +setuptools==69.5.1 +six==1.16.0 +smmap==5.0.1 +sniffio==1.3.1 +sqlitedict==2.1.0 +symengine==0.11.0 +sympy==1.12 +tabledata==1.3.3 +tcolorpy==0.1.6 +tdqm==0.0.1 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.1 +tensorboard==2.11.2 +threadpoolctl==3.5.0 +tokenizers==0.19.1 +tomli==2.0.1 +torch==2.2.0a0+git8964477 +torch_tb_profiler==0.4.0 +torchaudio==2.2.0+08901ad +torchdata==0.7.1+5e6f7b7 +torchmetrics==1.3.2 +torchtext==0.17.0+400da5c +torchvision==0.17.0+b2383d4 +tqdm-multiprocess==0.0.11 +tqdm==4.66.2 +transformers==4.40.2 +typepy==1.3.2 +typing_extensions==4.11.0 +tzdata==2024.1 +urllib3==1.26.18 +virtualenv==20.25.1 +wandb==0.17.0 +wheel==0.37.1 
+wheel==0.43.0 +word2number==1.1 +xxhash==3.4.1 +yamllint==1.35.1 +yarl==1.9.4 +zstandard==0.22.0 \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-metadata.json b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..6c6823c13ef5a23717fc4a35e3ae628aa1dc51aa --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-metadata.json @@ -0,0 +1,810 @@ +{ + "os": "Linux-5.15.0-92-generic-x86_64-with-glibc2.35", + "python": "3.10.12", + "heartbeatAt": "2024-05-14T16:34:25.089928", + "startedAt": "2024-05-14T16:34:24.673010", + "docker": null, + "cuda": null, + "args": [ + "--model", + "hf", + "--model_args", + "pretrained=/data/cronscript/ckpts//hf_ckpt//global_step100", + "--tasks", + "indiccopa-hi", + "--batch_size", + "auto", + "--wandb_args", + "project=bharatgpt,group=trial_expt" + ], + "state": "running", + "program": "-m lm_eval.__main__", + "codePathLocal": null, + "git": { + "remote": "https://github.com/EleutherAI/lm-evaluation-harness", + "commit": null + }, + "email": null, + "root": "/data/cronscript/lm-evaluation-harness", + "host": "vizzhy-150-3", + "username": "root", + "executable": "/usr/bin/python3", + "cpu_count": 76, + "cpu_count_logical": 152, + "cpu_freq": { + "current": 3391.9957960526312, + "min": 800.0, + "max": 3400.0 + }, + "cpu_freq_per_core": [ + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3299.996, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, 
+ "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, 
+ "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3299.996, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 
3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.006, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + 
{ + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 
3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + } + ], + "disk": { + "/": { + "total": 866.4415092468262, + "used": 863.4842262268066 + } + }, + "memory": { + "total": 1007.5000267028809 + } +} diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-summary.json b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-summary.json new file mode 100644 index 0000000000000000000000000000000000000000..f54bb3d71ddd5d7fb0c66b39e940c1d03cfc7cd1 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-summary.json @@ -0,0 +1 @@ +{"_wandb": {"runtime": 27}} \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..51dc14c0af97bac2a6402702a42578ee7d55cf31 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug-internal.log @@ -0,0 +1,194 @@ +2024-05-14 16:34:24,687 INFO StreamThr :117715 
[internal.py:wandb_internal():85] W&B internal server running at pid: 117715, started at: 2024-05-14 16:34:24.683734 +2024-05-14 16:34:24,689 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: status +2024-05-14 16:34:24,690 INFO WriterThread:117715 [datastore.py:open_for_write():87] open: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/run-3y7czkd2.wandb +2024-05-14 16:34:24,691 DEBUG SenderThread:117715 [sender.py:send():378] send: header +2024-05-14 16:34:24,697 DEBUG SenderThread:117715 [sender.py:send():378] send: run +2024-05-14 16:34:24,947 INFO SenderThread:117715 [dir_watcher.py:__init__():211] watching files in: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files +2024-05-14 16:34:24,947 INFO SenderThread:117715 [sender.py:_start_run_threads():1123] run started: 3y7czkd2 with start time 1715704464.683271 +2024-05-14 16:34:24,956 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: check_version +2024-05-14 16:34:24,956 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: check_version +2024-05-14 16:34:25,040 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: run_start +2024-05-14 16:34:25,042 DEBUG HandlerThread:117715 [system_info.py:__init__():26] System info init +2024-05-14 16:34:25,042 DEBUG HandlerThread:117715 [system_info.py:__init__():41] System info init done +2024-05-14 16:34:25,042 INFO HandlerThread:117715 [system_monitor.py:start():194] Starting system monitor +2024-05-14 16:34:25,042 INFO SystemMonitor:117715 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-14 16:34:25,042 INFO HandlerThread:117715 [system_monitor.py:probe():214] Collecting system info +2024-05-14 16:34:25,043 INFO SystemMonitor:117715 [interfaces.py:start():188] Started cpu monitoring +2024-05-14 16:34:25,043 INFO SystemMonitor:117715 [interfaces.py:start():188] Started disk monitoring 
+2024-05-14 16:34:25,043 INFO SystemMonitor:117715 [interfaces.py:start():188] Started memory monitoring +2024-05-14 16:34:25,044 INFO SystemMonitor:117715 [interfaces.py:start():188] Started network monitoring +2024-05-14 16:34:25,089 DEBUG HandlerThread:117715 [system_info.py:probe():150] Probing system +2024-05-14 16:34:25,098 DEBUG HandlerThread:117715 [system_info.py:_probe_git():135] Probing git +2024-05-14 16:34:25,119 ERROR HandlerThread:117715 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/data/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /data/cronscript/lm-evaluation-harness' +2024-05-14 16:34:25,119 DEBUG HandlerThread:117715 [system_info.py:_probe_git():143] Probing git done +2024-05-14 16:34:25,119 DEBUG HandlerThread:117715 [system_info.py:probe():198] Probing system done +2024-05-14 16:34:25,119 DEBUG HandlerThread:117715 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-14T16:34:25.089928', 'startedAt': '2024-05-14T16:34:24.673010', 'docker': None, 'cuda': None, 'args': ('--model', 'hf', '--model_args', 'pretrained=/data/cronscript/ckpts//hf_ckpt//global_step100', '--tasks', 'indiccopa-hi', '--batch_size', 'auto', '--wandb_args', 'project=bharatgpt,group=trial_expt'), 'state': 'running', 'program': '-m lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/data/cronscript/lm-evaluation-harness', 'host': 'vizzhy-150-3', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 76, 'cpu_count_logical': 152, 'cpu_freq': {'current': 3391.9957960526312, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 3300.0, 'min': 800.0, 'max': 
3400.0}, {'current': 3299.996, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 
'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3299.996, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.006, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 
800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 
800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}], 'disk': {'/': {'total': 866.4415092468262, 'used': 863.4842262268066}}, 'memory': {'total': 1007.5000267028809}} +2024-05-14 16:34:25,119 INFO HandlerThread:117715 [system_monitor.py:probe():224] Finished collecting system info +2024-05-14 16:34:25,119 INFO HandlerThread:117715 [system_monitor.py:probe():227] Publishing system info +2024-05-14 16:34:25,120 INFO HandlerThread:117715 [system_monitor.py:probe():229] Finished 
publishing system info +2024-05-14 16:34:25,124 DEBUG SenderThread:117715 [sender.py:send():378] send: files +2024-05-14 16:34:25,124 INFO SenderThread:117715 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-14 16:34:25,217 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: python_packages +2024-05-14 16:34:25,217 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: stop_status +2024-05-14 16:34:25,217 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: python_packages +2024-05-14 16:34:25,218 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: stop_status +2024-05-14 16:34:25,353 DEBUG SenderThread:117715 [sender.py:send():378] send: telemetry +2024-05-14 16:34:25,629 INFO wandb-upload_0:117715 [upload_job.py:push():130] Uploaded file /tmp/tmp9kzsla8bwandb/s09xnp07-wandb-metadata.json +2024-05-14 16:34:25,948 INFO Thread-12 :117715 [dir_watcher.py:_on_file_created():271] file/dir created: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:25,949 INFO Thread-12 :117715 [dir_watcher.py:_on_file_created():271] file/dir created: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-metadata.json +2024-05-14 16:34:25,949 INFO Thread-12 :117715 [dir_watcher.py:_on_file_created():271] file/dir created: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/requirements.txt +2024-05-14 16:34:27,949 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:30,355 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:34:31,951 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: 
/data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:35,622 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:34:37,955 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:40,219 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: stop_status +2024-05-14 16:34:40,219 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: stop_status +2024-05-14 16:34:41,213 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:34:41,958 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:43,959 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:45,961 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:46,281 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:34:47,962 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:51,282 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:34:52,303 DEBUG SenderThread:117715 [sender.py:send():378] send: exit +2024-05-14 16:34:52,304 INFO SenderThread:117715 [sender.py:send_exit():585] handling exit code: 0 +2024-05-14 16:34:52,304 INFO 
SenderThread:117715 [sender.py:send_exit():587] handling runtime: 27 +2024-05-14 16:34:52,305 INFO SenderThread:117715 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-14 16:34:52,305 INFO SenderThread:117715 [sender.py:send_exit():593] send defer +2024-05-14 16:34:52,305 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,305 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 0 +2024-05-14 16:34:52,306 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,306 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-14 16:34:52,306 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 1 +2024-05-14 16:34:52,306 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,306 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-14 16:34:52,306 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,306 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-14 16:34:52,306 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 2 +2024-05-14 16:34:52,306 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,306 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-14 16:34:52,306 INFO HandlerThread:117715 [system_monitor.py:finish():203] Stopping system monitor +2024-05-14 16:34:52,306 DEBUG SystemMonitor:117715 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-14 16:34:52,306 DEBUG SystemMonitor:117715 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-14 16:34:52,307 DEBUG SystemMonitor:117715 [system_monitor.py:_start():183] 
Publishing last batch of metrics +2024-05-14 16:34:52,307 INFO HandlerThread:117715 [interfaces.py:finish():200] Joined cpu monitor +2024-05-14 16:34:52,308 INFO HandlerThread:117715 [interfaces.py:finish():200] Joined disk monitor +2024-05-14 16:34:52,308 INFO HandlerThread:117715 [interfaces.py:finish():200] Joined memory monitor +2024-05-14 16:34:52,308 INFO HandlerThread:117715 [interfaces.py:finish():200] Joined network monitor +2024-05-14 16:34:52,308 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,308 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-14 16:34:52,308 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 3 +2024-05-14 16:34:52,308 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,308 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-14 16:34:52,308 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,308 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-14 16:34:52,308 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 4 +2024-05-14 16:34:52,308 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,309 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-14 16:34:52,309 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,309 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-14 16:34:52,309 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 5 +2024-05-14 16:34:52,309 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,309 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] 
handle defer: 5 +2024-05-14 16:34:52,309 DEBUG SenderThread:117715 [sender.py:send():378] send: summary +2024-05-14 16:34:52,310 INFO SenderThread:117715 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-14 16:34:52,310 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,310 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-14 16:34:52,310 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 6 +2024-05-14 16:34:52,310 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,310 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 6 +2024-05-14 16:34:52,310 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,310 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-14 16:34:52,313 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:34:52,397 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 7 +2024-05-14 16:34:52,397 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:52,397 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-14 16:34:52,397 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:52,398 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-14 16:34:52,965 INFO Thread-12 :117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/config.yaml +2024-05-14 16:34:52,966 INFO Thread-12 :117715 [dir_watcher.py:_on_file_created():271] file/dir created: 
/data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-summary.json +2024-05-14 16:34:53,303 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:34:55,229 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 8 +2024-05-14 16:34:55,229 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:34:55,229 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:55,229 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-14 16:34:55,230 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:55,230 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-14 16:34:55,230 INFO SenderThread:117715 [job_builder.py:build():432] Attempting to build job artifact +2024-05-14 16:34:55,230 INFO SenderThread:117715 [job_builder.py:_get_source_type():576] no source found +2024-05-14 16:34:55,230 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 9 +2024-05-14 16:34:55,230 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:55,230 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-14 16:34:55,230 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:55,230 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-14 16:34:55,230 INFO SenderThread:117715 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-14 16:34:55,304 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:34:55,967 INFO SenderThread:117715 [dir_watcher.py:_on_file_modified():288] file/dir modified: 
/data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:55,967 INFO SenderThread:117715 [dir_watcher.py:finish():388] scan: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files +2024-05-14 16:34:55,967 INFO SenderThread:117715 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-metadata.json wandb-metadata.json +2024-05-14 16:34:55,967 INFO SenderThread:117715 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/config.yaml config.yaml +2024-05-14 16:34:55,967 INFO SenderThread:117715 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/requirements.txt requirements.txt +2024-05-14 16:34:55,967 INFO SenderThread:117715 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log output.log +2024-05-14 16:34:55,970 INFO SenderThread:117715 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-summary.json wandb-summary.json +2024-05-14 16:34:55,972 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 10 +2024-05-14 16:34:55,972 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:34:55,974 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:55,974 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-14 16:34:55,974 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:55,974 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-14 16:34:55,974 INFO SenderThread:117715 [file_pusher.py:finish():169] shutting down 
file pusher +2024-05-14 16:34:56,205 INFO wandb-upload_0:117715 [upload_job.py:push():130] Uploaded file /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/requirements.txt +2024-05-14 16:34:56,305 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:34:56,305 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:34:56,365 INFO wandb-upload_1:117715 [upload_job.py:push():130] Uploaded file /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/config.yaml +2024-05-14 16:34:56,464 INFO wandb-upload_2:117715 [upload_job.py:push():130] Uploaded file /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/output.log +2024-05-14 16:34:56,472 INFO wandb-upload_3:117715 [upload_job.py:push():130] Uploaded file /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/files/wandb-summary.json +2024-05-14 16:34:56,672 INFO Thread-11 (_thread_body):117715 [sender.py:transition_state():613] send defer: 11 +2024-05-14 16:34:56,673 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:56,673 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-14 16:34:56,673 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:56,673 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-14 16:34:56,673 INFO SenderThread:117715 [file_pusher.py:join():175] waiting for file pusher +2024-05-14 16:34:56,674 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 12 +2024-05-14 16:34:56,674 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:56,674 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-14 16:34:56,674 DEBUG SenderThread:117715 
[sender.py:send_request():405] send_request: defer +2024-05-14 16:34:56,674 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-14 16:34:56,674 INFO SenderThread:117715 [file_stream.py:finish():601] file stream finish called +2024-05-14 16:34:56,733 INFO SenderThread:117715 [file_stream.py:finish():605] file stream finish is done +2024-05-14 16:34:56,733 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 13 +2024-05-14 16:34:56,733 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:56,733 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-14 16:34:56,733 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:56,733 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-14 16:34:56,733 INFO SenderThread:117715 [sender.py:transition_state():613] send defer: 14 +2024-05-14 16:34:56,733 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:34:56,733 INFO HandlerThread:117715 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-14 16:34:56,733 DEBUG SenderThread:117715 [sender.py:send():378] send: final +2024-05-14 16:34:56,734 DEBUG SenderThread:117715 [sender.py:send():378] send: footer +2024-05-14 16:34:56,734 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: defer +2024-05-14 16:34:56,734 INFO SenderThread:117715 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-14 16:34:56,734 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:34:56,734 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:34:56,734 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:34:56,735 DEBUG SenderThread:117715 
[sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:34:56,735 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: server_info +2024-05-14 16:34:56,735 DEBUG SenderThread:117715 [sender.py:send_request():405] send_request: server_info +2024-05-14 16:34:56,736 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: get_summary +2024-05-14 16:34:56,736 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-14 16:34:56,737 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-14 16:34:56,788 INFO MainThread:117715 [wandb_run.py:_footer_history_summary_info():3994] rendering history +2024-05-14 16:34:56,788 INFO MainThread:117715 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-14 16:34:56,788 INFO MainThread:117715 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-14 16:34:56,788 DEBUG HandlerThread:117715 [handler.py:handle_request():158] handle_request: shutdown +2024-05-14 16:34:56,788 INFO HandlerThread:117715 [handler.py:finish():882] shutting down handler +2024-05-14 16:34:57,735 INFO WriterThread:117715 [datastore.py:close():296] close: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/run-3y7czkd2.wandb +2024-05-14 16:34:57,788 INFO SenderThread:117715 [sender.py:finish():1545] shutting down sender +2024-05-14 16:34:57,788 INFO SenderThread:117715 [file_pusher.py:finish():169] shutting down file pusher +2024-05-14 16:34:57,788 INFO SenderThread:117715 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug.log b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..9d7d6b0c973b4f98ea9c29a33acc8419da1e0799 --- /dev/null +++ 
b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Configure stats pid to 116799 +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Loading settings from /data/cronscript/lm-evaluation-harness/wandb/settings +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-14 16:34:24,680 WARNING MainThread:116799 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_init.py:_log_setup():520] Logging user logs to /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug.log +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_init.py:_log_setup():521] Logging internal logs to /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/logs/debug-internal.log +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_init.py:init():560] calling init triggers +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-14 16:34:24,680 INFO MainThread:116799 [wandb_init.py:init():610] starting backend +2024-05-14 16:34:24,680 INFO 
MainThread:116799 [wandb_init.py:init():614] setting up manager +2024-05-14 16:34:24,682 INFO MainThread:116799 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-14 16:34:24,683 INFO MainThread:116799 [wandb_init.py:init():622] backend started and connected +2024-05-14 16:34:24,686 INFO MainThread:116799 [wandb_init.py:init():711] updated telemetry +2024-05-14 16:34:24,697 INFO MainThread:116799 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout +2024-05-14 16:34:24,955 INFO MainThread:116799 [wandb_run.py:_on_init():2396] communicating current version +2024-05-14 16:34:25,035 INFO MainThread:116799 [wandb_run.py:_on_init():2405] got version response +2024-05-14 16:34:25,035 INFO MainThread:116799 [wandb_init.py:init():795] starting run threads in backend +2024-05-14 16:34:25,217 INFO MainThread:116799 [wandb_run.py:_console_start():2374] atexit reg +2024-05-14 16:34:25,217 INFO MainThread:116799 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-14 16:34:25,217 INFO MainThread:116799 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-14 16:34:25,218 INFO MainThread:116799 [wandb_run.py:_redirect():2319] Redirects installed. 
+2024-05-14 16:34:25,219 INFO MainThread:116799 [wandb_init.py:init():838] run started, returning control to user process +2024-05-14 16:34:57,789 WARNING MsgRouterThr:116799 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/run-3y7czkd2.wandb b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/run-3y7czkd2.wandb new file mode 100644 index 0000000000000000000000000000000000000000..c9ae90e2c538299f8ba692eecad67b6cb4e8ad50 Binary files /dev/null and b/lm-evaluation-harness/wandb/run-20240514_163424-3y7czkd2/run-3y7czkd2.wandb differ diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/config.yaml b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..770bd3a3bacd57b25320b34641708cdf1cd2fa1d --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/config.yaml @@ -0,0 +1,43 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.40.2 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1715704623 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 23 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.40.2 + 8: + - 5 + 13: linux-x86_64 diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..ee080e8636de23cf0193ec11a8b47a7a687384d0 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log @@ -0,0 +1,33 @@ + +2024-05-14:16:37:03,919 INFO [__main__.py:251] Verbosity set to INFO +2024-05-14:16:37:08,439 INFO [__main__.py:335] Selected Tasks: 
['indiccopa-hi'] +2024-05-14:16:37:08,441 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-14:16:37:08,441 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/data/cronscript/ckpts//hf_ckpt//global_step20'} +Traceback (most recent call last): + File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main + return _run_code(code, main_globals, None, + File "/usr/lib/python3.10/runpy.py", line 86, in _run_code + exec(code, run_globals) + File "/data/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in + cli_evaluate() + File "/data/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate + results = evaluator.simple_evaluate( + File "/data/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper + return fn(*args, **kwargs) + File "/data/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate + lm = lm_eval.api.registry.get_model(model).create_from_arg_string( + File "/data/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string + return cls(**args, **args2) + File "/data/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__ + self._get_config( + File "/data/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config + self._config = transformers.AutoConfig.from_pretrained( + File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 928, in from_pretrained + config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 631, in get_config_dict + config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) + File 
"/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 686, in _get_config_dict + resolved_config_file = cached_file( + File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 369, in cached_file + raise EnvironmentError( +OSError: /data/cronscript/ckpts//hf_ckpt//global_step20 does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/cronscript/ckpts//hf_ckpt//global_step20/tree/main' for available files. \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/requirements.txt b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..d63edd781bd5bdbb7f67523ac1ba9f0f1ed392dc --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/requirements.txt @@ -0,0 +1,163 @@ +DataProperty==1.0.1 +GitPython==3.1.43 +Jinja2==3.1.3 +Markdown==3.6 +MarkupSafe==2.1.5 +Pillow-SIMD==7.0.0.post3 +PyYAML==6.0 +Werkzeug==3.0.2 +absl-py==2.1.0 +accelerate==0.30.1 +aiohttp==3.9.4 +aiosignal==1.3.1 +antlr4-python3-runtime==4.9.3 +anyio==4.3.0 +async-timeout==4.0.3 +attrs==23.2.0 +av==9.2.0 +cachetools==5.3.3 +certifi==2024.2.2 +cffi==1.15.1 +cfgv==3.4.0 +chardet==5.2.0 +charset-normalizer==3.3.2 +click==8.1.7 +cmake==3.29.2 +colorama==0.4.6 +datasets==2.19.1 +deepspeed==0.12.4+hpu.synapse.v1.15.1 +dill==0.3.8 +distlib==0.3.8 +distro==1.9.0 +docker-pycreds==0.4.0 +einops==0.8.0 +evaluate==0.4.2 +exceptiongroup==1.2.0 +expecttest==0.2.1 +filelock==3.13.4 +frozenlist==1.4.1 +fsspec==2024.3.1 +gitdb==4.0.11 +google-auth-oauthlib==0.4.6 +google-auth==2.29.0 +grpcio==1.62.1 +h11==0.14.0 +habana-media-loader==1.15.1.15 +habana-pyhlml==1.15.1.15 +habana-torch-dataloader==1.15.1.15 +habana-torch-plugin==1.15.1.15 +habana_gpu_migration==1.15.1.15 +habana_quantization_toolkit==1.15.1.15 +hjson==3.1.0 +httpcore==1.0.5 +httpx==0.27.0 
+huggingface-hub==0.23.0 +identify==2.5.35 +idna==3.7 +importlib_resources==6.4.0 +iniconfig==2.0.0 +joblib==1.4.2 +jsonlines==4.0.0 +lightning-habana==1.4.0 +lightning-utilities==0.11.2 +lightning==2.2.0.post0 +lm_eval==0.3.0 +lm_eval==0.4.2 +lm_eval==0.4.2 +lm_eval==0.4.2 +mbstrdecoder==1.1.3 +more-itertools==10.2.0 +mpi4py==3.1.4 +mpmath==1.3.0 +multidict==6.0.5 +multiprocess==0.70.16 +networkx==3.3 +ninja==1.11.1.1 +nltk==3.8.1 +nodeenv==1.8.0 +numexpr==2.10.0 +numpy==1.23.5 +oauthlib==3.2.2 +omegaconf==2.3.0 +openai==1.29.0 +packaging==24.0 +pandas==2.0.1 +pathspec==0.12.1 +pathvalidate==3.2.0 +peft==0.10.0 +perfetto==0.7.0 +pip==22.0.2 +pip==23.3.1 +platformdirs==4.2.0 +pluggy==1.4.0 +portalocker==2.8.2 +pre-commit==3.3.3 +protobuf==3.20.3 +psutil==5.9.8 +py-cpuinfo==9.0.0 +pyarrow-hotfix==0.6 +pyarrow==16.0.0 +pyasn1==0.6.0 +pyasn1_modules==0.4.0 +pybind11==2.10.4 +pycountry==23.12.11 +pycparser==2.22 +pydantic==1.10.13 +pynvml==8.0.4 +pytablewriter==1.2.0 +pytest==8.1.1 +python-dateutil==2.9.0.post0 +pytorch-lightning==2.2.2 +pytz==2024.1 +regex==2023.5.5 +requests-oauthlib==2.0.0 +requests==2.31.0 +rouge_score==0.1.2 +rsa==4.9 +sacrebleu==1.5.0 +safetensors==0.4.3 +scikit-learn==1.4.2 +scipy==1.13.0 +sentencepiece==0.2.0 +sentry-sdk==2.1.1 +setproctitle==1.3.3 +setuptools==59.6.0 +setuptools==69.5.1 +six==1.16.0 +smmap==5.0.1 +sniffio==1.3.1 +sqlitedict==2.1.0 +symengine==0.11.0 +sympy==1.12 +tabledata==1.3.3 +tcolorpy==0.1.6 +tdqm==0.0.1 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.1 +tensorboard==2.11.2 +threadpoolctl==3.5.0 +tokenizers==0.19.1 +tomli==2.0.1 +torch==2.2.0a0+git8964477 +torch_tb_profiler==0.4.0 +torchaudio==2.2.0+08901ad +torchdata==0.7.1+5e6f7b7 +torchmetrics==1.3.2 +torchtext==0.17.0+400da5c +torchvision==0.17.0+b2383d4 +tqdm-multiprocess==0.0.11 +tqdm==4.66.2 +transformers==4.40.2 +typepy==1.3.2 +typing_extensions==4.11.0 +tzdata==2024.1 +urllib3==1.26.18 +virtualenv==20.25.1 +wandb==0.17.0 +wheel==0.37.1 +wheel==0.43.0 
+word2number==1.1 +xxhash==3.4.1 +yamllint==1.35.1 +yarl==1.9.4 +zstandard==0.22.0 \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-metadata.json b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..cbdebdf255feaec4b865dbdb4b90bcb38e23cd2c --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-metadata.json @@ -0,0 +1,810 @@ +{ + "os": "Linux-5.15.0-92-generic-x86_64-with-glibc2.35", + "python": "3.10.12", + "heartbeatAt": "2024-05-14T16:37:03.790165", + "startedAt": "2024-05-14T16:37:03.344932", + "docker": null, + "cuda": null, + "args": [ + "--model", + "hf", + "--model_args", + "pretrained=/data/cronscript/ckpts//hf_ckpt//global_step20", + "--tasks", + "indiccopa-hi", + "--batch_size", + "auto", + "--wandb_args", + "project=bharatgpt,group=trial_expt" + ], + "state": "running", + "program": "-m lm_eval.__main__", + "codePathLocal": null, + "git": { + "remote": "https://github.com/EleutherAI/lm-evaluation-harness", + "commit": null + }, + "email": null, + "root": "/data/cronscript/lm-evaluation-harness", + "host": "vizzhy-150-3", + "username": "root", + "executable": "/usr/bin/python3", + "cpu_count": 76, + "cpu_count_logical": 152, + "cpu_freq": { + "current": 3393.351355263158, + "min": 800.0, + "max": 3400.0 + }, + "cpu_freq_per_core": [ + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + 
"max": 3400.0 + }, + { + "current": 3212.693, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 
+ }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3321.564, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + 
"current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 
3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + 
"min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + } + ], + "disk": { + "/": { + "total": 866.4415092468262, + "used": 863.4234619140625 + } + }, + "memory": { + "total": 1007.5000267028809 + } +} diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-summary.json b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-summary.json new file mode 100644 index 0000000000000000000000000000000000000000..e682bae6b5eaeba8295fd0fffdc51474a259249e --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-summary.json @@ -0,0 +1 @@ +{"_wandb": {"runtime": 5}} \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..01fe9541a680e9adcf6de8b1a584a7ae456b5e10 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug-internal.log @@ -0,0 +1,181 @@ +2024-05-14 16:37:03,358 INFO StreamThr :127591 [internal.py:wandb_internal():85] W&B 
internal server running at pid: 127591, started at: 2024-05-14 16:37:03.357513 +2024-05-14 16:37:03,360 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: status +2024-05-14 16:37:03,361 INFO WriterThread:127591 [datastore.py:open_for_write():87] open: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/run-cmgffqlk.wandb +2024-05-14 16:37:03,362 DEBUG SenderThread:127591 [sender.py:send():378] send: header +2024-05-14 16:37:03,372 DEBUG SenderThread:127591 [sender.py:send():378] send: run +2024-05-14 16:37:03,592 INFO SenderThread:127591 [dir_watcher.py:__init__():211] watching files in: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files +2024-05-14 16:37:03,593 INFO SenderThread:127591 [sender.py:_start_run_threads():1123] run started: cmgffqlk with start time 1715704623.35729 +2024-05-14 16:37:03,599 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: check_version +2024-05-14 16:37:03,600 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: check_version +2024-05-14 16:37:03,683 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: run_start +2024-05-14 16:37:03,685 DEBUG HandlerThread:127591 [system_info.py:__init__():26] System info init +2024-05-14 16:37:03,685 DEBUG HandlerThread:127591 [system_info.py:__init__():41] System info init done +2024-05-14 16:37:03,685 INFO HandlerThread:127591 [system_monitor.py:start():194] Starting system monitor +2024-05-14 16:37:03,685 INFO SystemMonitor:127591 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-14 16:37:03,685 INFO HandlerThread:127591 [system_monitor.py:probe():214] Collecting system info +2024-05-14 16:37:03,685 INFO SystemMonitor:127591 [interfaces.py:start():188] Started cpu monitoring +2024-05-14 16:37:03,686 INFO SystemMonitor:127591 [interfaces.py:start():188] Started disk monitoring +2024-05-14 16:37:03,686 INFO 
SystemMonitor:127591 [interfaces.py:start():188] Started memory monitoring +2024-05-14 16:37:03,687 INFO SystemMonitor:127591 [interfaces.py:start():188] Started network monitoring +2024-05-14 16:37:03,790 DEBUG HandlerThread:127591 [system_info.py:probe():150] Probing system +2024-05-14 16:37:03,798 DEBUG HandlerThread:127591 [system_info.py:_probe_git():135] Probing git +2024-05-14 16:37:03,818 ERROR HandlerThread:127591 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/data/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /data/cronscript/lm-evaluation-harness' +2024-05-14 16:37:03,818 DEBUG HandlerThread:127591 [system_info.py:_probe_git():143] Probing git done +2024-05-14 16:37:03,818 DEBUG HandlerThread:127591 [system_info.py:probe():198] Probing system done +2024-05-14 16:37:03,818 DEBUG HandlerThread:127591 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-14T16:37:03.790165', 'startedAt': '2024-05-14T16:37:03.344932', 'docker': None, 'cuda': None, 'args': ('--model', 'hf', '--model_args', 'pretrained=/data/cronscript/ckpts//hf_ckpt//global_step20', '--tasks', 'indiccopa-hi', '--batch_size', 'auto', '--wandb_args', 'project=bharatgpt,group=trial_expt'), 'state': 'running', 'program': '-m lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/data/cronscript/lm-evaluation-harness', 'host': 'vizzhy-150-3', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 76, 'cpu_count_logical': 152, 'cpu_freq': {'current': 3393.351355263158, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 
800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3212.693, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 
'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3321.564, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 
3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 
3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}], 'disk': {'/': {'total': 866.4415092468262, 'used': 863.4234619140625}}, 'memory': {'total': 1007.5000267028809}} +2024-05-14 16:37:03,818 INFO HandlerThread:127591 [system_monitor.py:probe():224] Finished collecting system info +2024-05-14 16:37:03,818 INFO HandlerThread:127591 [system_monitor.py:probe():227] Publishing system info +2024-05-14 16:37:03,819 INFO HandlerThread:127591 [system_monitor.py:probe():229] Finished publishing system info +2024-05-14 
16:37:03,823 DEBUG SenderThread:127591 [sender.py:send():378] send: files +2024-05-14 16:37:03,823 INFO SenderThread:127591 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-14 16:37:03,916 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: python_packages +2024-05-14 16:37:03,916 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: stop_status +2024-05-14 16:37:03,916 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: python_packages +2024-05-14 16:37:03,917 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: stop_status +2024-05-14 16:37:04,113 DEBUG SenderThread:127591 [sender.py:send():378] send: telemetry +2024-05-14 16:37:04,365 INFO wandb-upload_0:127591 [upload_job.py:push():130] Uploaded file /tmp/tmpodkp03kkwandb/7r24f2ty-wandb-metadata.json +2024-05-14 16:37:04,594 INFO Thread-12 :127591 [dir_watcher.py:_on_file_created():271] file/dir created: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-metadata.json +2024-05-14 16:37:04,594 INFO Thread-12 :127591 [dir_watcher.py:_on_file_created():271] file/dir created: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log +2024-05-14 16:37:04,594 INFO Thread-12 :127591 [dir_watcher.py:_on_file_created():271] file/dir created: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/requirements.txt +2024-05-14 16:37:06,594 INFO Thread-12 :127591 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log +2024-05-14 16:37:08,441 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:37:09,521 DEBUG SenderThread:127591 [sender.py:send():378] send: exit +2024-05-14 16:37:09,521 INFO SenderThread:127591 [sender.py:send_exit():585] handling exit code: 1 
+2024-05-14 16:37:09,521 INFO SenderThread:127591 [sender.py:send_exit():587] handling runtime: 5 +2024-05-14 16:37:09,523 INFO SenderThread:127591 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-14 16:37:09,523 INFO SenderThread:127591 [sender.py:send_exit():593] send defer +2024-05-14 16:37:09,523 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,523 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 0 +2024-05-14 16:37:09,523 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,523 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-14 16:37:09,523 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 1 +2024-05-14 16:37:09,523 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,523 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-14 16:37:09,523 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,524 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-14 16:37:09,524 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 2 +2024-05-14 16:37:09,524 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,524 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-14 16:37:09,524 INFO HandlerThread:127591 [system_monitor.py:finish():203] Stopping system monitor +2024-05-14 16:37:09,524 DEBUG SystemMonitor:127591 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-14 16:37:09,524 DEBUG SystemMonitor:127591 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-14 16:37:09,524 DEBUG SystemMonitor:127591 
[system_monitor.py:_start():183] Publishing last batch of metrics +2024-05-14 16:37:09,525 INFO HandlerThread:127591 [interfaces.py:finish():200] Joined cpu monitor +2024-05-14 16:37:09,525 INFO HandlerThread:127591 [interfaces.py:finish():200] Joined disk monitor +2024-05-14 16:37:09,525 INFO HandlerThread:127591 [interfaces.py:finish():200] Joined memory monitor +2024-05-14 16:37:09,525 INFO HandlerThread:127591 [interfaces.py:finish():200] Joined network monitor +2024-05-14 16:37:09,525 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,525 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-14 16:37:09,525 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 3 +2024-05-14 16:37:09,525 DEBUG SenderThread:127591 [sender.py:send():378] send: stats +2024-05-14 16:37:09,525 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,525 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-14 16:37:09,526 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,526 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-14 16:37:09,526 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 4 +2024-05-14 16:37:09,526 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,526 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-14 16:37:09,526 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,526 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-14 16:37:09,526 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 5 +2024-05-14 16:37:09,526 DEBUG HandlerThread:127591 
[handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,526 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 5 +2024-05-14 16:37:09,526 DEBUG SenderThread:127591 [sender.py:send():378] send: summary +2024-05-14 16:37:09,527 INFO SenderThread:127591 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-14 16:37:09,527 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,527 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-14 16:37:09,527 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 6 +2024-05-14 16:37:09,527 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,527 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 6 +2024-05-14 16:37:09,527 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,527 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-14 16:37:09,530 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: status_report +2024-05-14 16:37:09,598 INFO Thread-12 :127591 [dir_watcher.py:_on_file_created():271] file/dir created: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-summary.json +2024-05-14 16:37:09,599 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 7 +2024-05-14 16:37:09,599 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:09,599 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-14 16:37:09,600 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:09,600 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-14 16:37:10,124 INFO 
SenderThread:127591 [sender.py:transition_state():613] send defer: 8 +2024-05-14 16:37:10,124 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:10,125 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-14 16:37:10,125 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:10,125 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-14 16:37:10,125 INFO SenderThread:127591 [job_builder.py:build():432] Attempting to build job artifact +2024-05-14 16:37:10,125 INFO SenderThread:127591 [job_builder.py:_get_source_type():576] no source found +2024-05-14 16:37:10,125 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 9 +2024-05-14 16:37:10,125 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:10,125 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-14 16:37:10,126 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:10,126 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-14 16:37:10,126 INFO SenderThread:127591 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-14 16:37:10,521 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:37:10,598 INFO Thread-12 :127591 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/config.yaml +2024-05-14 16:37:10,598 INFO SenderThread:127591 [dir_watcher.py:_on_file_modified():288] file/dir modified: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log +2024-05-14 16:37:10,598 INFO SenderThread:127591 [dir_watcher.py:finish():388] scan: 
/data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files +2024-05-14 16:37:10,598 INFO SenderThread:127591 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-summary.json wandb-summary.json +2024-05-14 16:37:10,599 INFO SenderThread:127591 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/config.yaml config.yaml +2024-05-14 16:37:10,599 INFO SenderThread:127591 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/requirements.txt requirements.txt +2024-05-14 16:37:10,599 INFO SenderThread:127591 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-metadata.json wandb-metadata.json +2024-05-14 16:37:10,599 INFO SenderThread:127591 [dir_watcher.py:finish():402] scan save: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log output.log +2024-05-14 16:37:10,599 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 10 +2024-05-14 16:37:10,599 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:37:10,599 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:10,599 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-14 16:37:10,603 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:10,603 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-14 16:37:10,603 INFO SenderThread:127591 [file_pusher.py:finish():169] shutting down file pusher +2024-05-14 16:37:10,842 INFO wandb-upload_1:127591 [upload_job.py:push():130] Uploaded file 
/data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/config.yaml +2024-05-14 16:37:11,009 INFO wandb-upload_0:127591 [upload_job.py:push():130] Uploaded file /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/wandb-summary.json +2024-05-14 16:37:11,090 INFO wandb-upload_3:127591 [upload_job.py:push():130] Uploaded file /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/output.log +2024-05-14 16:37:11,103 INFO wandb-upload_2:127591 [upload_job.py:push():130] Uploaded file /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/files/requirements.txt +2024-05-14 16:37:11,303 INFO Thread-11 (_thread_body):127591 [sender.py:transition_state():613] send defer: 11 +2024-05-14 16:37:11,303 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:11,303 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-14 16:37:11,304 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:11,304 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-14 16:37:11,304 INFO SenderThread:127591 [file_pusher.py:join():175] waiting for file pusher +2024-05-14 16:37:11,304 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 12 +2024-05-14 16:37:11,304 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:11,304 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-14 16:37:11,304 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:11,305 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-14 16:37:11,305 INFO SenderThread:127591 [file_stream.py:finish():601] file stream finish called +2024-05-14 16:37:11,521 DEBUG HandlerThread:127591 
[handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:37:11,540 INFO SenderThread:127591 [file_stream.py:finish():605] file stream finish is done +2024-05-14 16:37:11,540 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 13 +2024-05-14 16:37:11,540 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:37:11,540 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:11,540 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-14 16:37:11,540 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:11,540 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-14 16:37:11,540 INFO SenderThread:127591 [sender.py:transition_state():613] send defer: 14 +2024-05-14 16:37:11,540 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: defer +2024-05-14 16:37:11,541 INFO HandlerThread:127591 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-14 16:37:11,541 DEBUG SenderThread:127591 [sender.py:send():378] send: final +2024-05-14 16:37:11,541 DEBUG SenderThread:127591 [sender.py:send():378] send: footer +2024-05-14 16:37:11,541 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: defer +2024-05-14 16:37:11,541 INFO SenderThread:127591 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-14 16:37:11,541 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:37:11,541 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:37:11,542 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-14 16:37:11,542 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: server_info +2024-05-14 16:37:11,542 DEBUG SenderThread:127591 
[sender.py:send_request():405] send_request: poll_exit +2024-05-14 16:37:11,542 DEBUG SenderThread:127591 [sender.py:send_request():405] send_request: server_info +2024-05-14 16:37:11,543 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: get_summary +2024-05-14 16:37:11,543 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-14 16:37:11,544 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-14 16:37:11,596 INFO MainThread:127591 [wandb_run.py:_footer_history_summary_info():3994] rendering history +2024-05-14 16:37:11,596 INFO MainThread:127591 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-14 16:37:11,596 INFO MainThread:127591 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-14 16:37:11,597 DEBUG HandlerThread:127591 [handler.py:handle_request():158] handle_request: shutdown +2024-05-14 16:37:11,597 INFO HandlerThread:127591 [handler.py:finish():882] shutting down handler +2024-05-14 16:37:12,542 INFO WriterThread:127591 [datastore.py:close():296] close: /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/run-cmgffqlk.wandb +2024-05-14 16:37:12,596 INFO SenderThread:127591 [sender.py:finish():1545] shutting down sender +2024-05-14 16:37:12,596 INFO SenderThread:127591 [file_pusher.py:finish():169] shutting down file pusher +2024-05-14 16:37:12,596 INFO SenderThread:127591 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug.log b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..d0c8f6c25c855087d48b9318e58e4df6d5d743f9 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-14 16:37:03,354 INFO MainThread:126395 
[wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_setup.py:_flush():76] Configure stats pid to 126395 +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_setup.py:_flush():76] Loading settings from /data/cronscript/lm-evaluation-harness/wandb/settings +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-14 16:37:03,354 WARNING MainThread:126395 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_init.py:_log_setup():520] Logging user logs to /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug.log +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_init.py:_log_setup():521] Logging internal logs to /data/cronscript/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/logs/debug-internal.log +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_init.py:init():560] calling init triggers +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_init.py:init():610] starting backend +2024-05-14 16:37:03,354 INFO MainThread:126395 [wandb_init.py:init():614] setting up manager +2024-05-14 16:37:03,356 INFO MainThread:126395 
[backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-14 16:37:03,357 INFO MainThread:126395 [wandb_init.py:init():622] backend started and connected +2024-05-14 16:37:03,360 INFO MainThread:126395 [wandb_init.py:init():711] updated telemetry +2024-05-14 16:37:03,371 INFO MainThread:126395 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout +2024-05-14 16:37:03,599 INFO MainThread:126395 [wandb_run.py:_on_init():2396] communicating current version +2024-05-14 16:37:03,679 INFO MainThread:126395 [wandb_run.py:_on_init():2405] got version response +2024-05-14 16:37:03,679 INFO MainThread:126395 [wandb_init.py:init():795] starting run threads in backend +2024-05-14 16:37:03,916 INFO MainThread:126395 [wandb_run.py:_console_start():2374] atexit reg +2024-05-14 16:37:03,916 INFO MainThread:126395 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-14 16:37:03,916 INFO MainThread:126395 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-14 16:37:03,916 INFO MainThread:126395 [wandb_run.py:_redirect():2319] Redirects installed. 
+2024-05-14 16:37:03,917 INFO MainThread:126395 [wandb_init.py:init():838] run started, returning control to user process +2024-05-14 16:37:12,598 WARNING MsgRouterThr:126395 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/run-cmgffqlk.wandb b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/run-cmgffqlk.wandb new file mode 100644 index 0000000000000000000000000000000000000000..85b6f24c7156a8eb2af7b48a152eb1c9956976e0 Binary files /dev/null and b/lm-evaluation-harness/wandb/run-20240514_163703-cmgffqlk/run-cmgffqlk.wandb differ diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/config.yaml b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9b555602c7c3d7c7725c26367593a655d0780ff7 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/config.yaml @@ -0,0 +1,43 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.41.0 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1716384688 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 23 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.41.0 + 8: + - 5 + 13: linux-x86_64 diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..50103acbd996c9803d2f0bd2183ddb0dd4e2fcc2 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log @@ -0,0 +1,34 @@ + +2024-05-22:13:31:29,598 INFO [__main__.py:251] Verbosity set to INFO +2024-05-22:13:31:37,997 INFO [__main__.py:335] Selected Tasks: 
['indiccopa-hi'] +2024-05-22:13:31:37,998 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-22:13:31:37,998 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120'} +2024-05-22:13:31:40,287 INFO [huggingface.py:164] Using device 'cuda' +Traceback (most recent call last): + File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main + return _run_code(code, main_globals, None, + File "/usr/lib/python3.10/runpy.py", line 86, in _run_code + exec(code, run_globals) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in + cli_evaluate() + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate + results = evaluator.simple_evaluate( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper + return fn(*args, **kwargs) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate + lm = lm_eval.api.registry.get_model(model).create_from_arg_string( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string + return cls(**args, **args2) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__ + self._get_config( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config + self._config = transformers.AutoConfig.from_pretrained( + File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 934, in from_pretrained + config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) + File 
"/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 632, in get_config_dict + config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 689, in _get_config_dict + resolved_config_file = cached_file( + File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 370, in cached_file + raise EnvironmentError( +OSError: /mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120 does not appear to have a file named config.json. Checkout 'https://huggingface.co//mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120/tree/main' for available files. \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/requirements.txt b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3573b3bbfd5b190fed4ccaed4ac2846002aec22 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/requirements.txt @@ -0,0 +1,155 @@ +DataProperty==1.0.1 +GitPython==3.1.43 +Jinja2==3.1.4 +Markdown==3.6 +MarkupSafe==2.1.5 +Pillow-SIMD==7.0.0.post3 +PyYAML==6.0 +Werkzeug==3.0.3 +absl-py==2.1.0 +accelerate==0.30.1 +aiohttp==3.9.5 +aiosignal==1.3.1 +async-timeout==4.0.3 +attrs==23.2.0 +av==9.2.0 +cachetools==5.3.3 +certifi==2024.2.2 +cffi==1.15.1 +cfgv==3.4.0 +chardet==5.2.0 +charset-normalizer==3.3.2 +click==8.1.7 +cmake==3.29.2 +colorama==0.4.6 +datasets==2.19.1 +deepspeed==0.12.4+hpu.synapse.v1.15.1 +dill==0.3.8 +distlib==0.3.8 +docker-pycreds==0.4.0 +einops==0.8.0 +evaluate==0.4.2 +exceptiongroup==1.2.1 +expecttest==0.2.1 +filelock==3.14.0 +frozenlist==1.4.1 +fsspec==2024.3.1 +gitdb==4.0.11 +google-auth-oauthlib==0.4.6 +google-auth==2.29.0 +grpcio==1.63.0 +habana-media-loader==1.15.1.15 +habana-pyhlml==1.15.1.15 +habana-torch-dataloader==1.15.1.15 
+habana-torch-plugin==1.15.1.15 +habana_gpu_migration==1.15.1.15 +habana_quantization_toolkit==1.15.1.15 +hjson==3.1.0 +huggingface-hub==0.23.1 +identify==2.5.36 +idna==3.7 +iniconfig==2.0.0 +joblib==1.4.2 +jsonlines==4.0.0 +lightning-habana==1.4.0 +lightning-utilities==0.11.2 +lightning==2.2.0.post0 +lm_eval==0.4.2 +lm_eval==0.4.2 +lm_eval==0.4.2 +lxml==5.2.2 +mbstrdecoder==1.1.3 +more-itertools==10.2.0 +mpi4py==3.1.4 +mpmath==1.3.0 +multidict==6.0.5 +multiprocess==0.70.16 +networkx==3.3 +ninja==1.11.1.1 +nltk==3.8.1 +nodeenv==1.8.0 +numexpr==2.10.0 +numpy==1.23.5 +oauthlib==3.2.2 +packaging==24.0 +pandas==2.0.1 +pathspec==0.12.1 +pathvalidate==3.2.0 +peft==0.11.1 +perfetto==0.7.0 +pillow==10.3.0 +pip==22.0.2 +pip==23.3.1 +platformdirs==4.2.1 +pluggy==1.5.0 +portalocker==2.8.2 +pre-commit==3.3.3 +pretty-errors==1.2.25 +protobuf==3.20.3 +psutil==5.9.8 +py-cpuinfo==9.0.0 +pyarrow-hotfix==0.6 +pyarrow==16.1.0 +pyasn1==0.6.0 +pyasn1_modules==0.4.0 +pybind11==2.10.4 +pycparser==2.22 +pydantic==1.10.13 +pynvml==8.0.4 +pytablewriter==1.2.0 +pytest==8.2.0 +python-dateutil==2.9.0.post0 +pytorch-lightning==2.2.4 +pytz==2024.1 +regex==2023.5.5 +requests-oauthlib==2.0.0 +requests==2.31.0 +rouge_score==0.1.2 +rsa==4.9 +sacrebleu==2.4.2 +safetensors==0.4.3 +scikit-learn==1.5.0 +scipy==1.13.0 +sentencepiece==0.2.0 +sentry-sdk==2.2.1 +setproctitle==1.3.3 +setuptools==59.6.0 +setuptools==69.5.1 +six==1.16.0 +smmap==5.0.1 +sqlitedict==2.1.0 +symengine==0.11.0 +sympy==1.12 +tabledata==1.3.3 +tabulate==0.9.0 +tcolorpy==0.1.6 +tdqm==0.0.1 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.1 +tensorboard==2.11.2 +threadpoolctl==3.5.0 +tokenizers==0.19.1 +tomli==2.0.1 +torch==2.2.0a0+git8964477 +torch_tb_profiler==0.4.0 +torchaudio==2.2.0+08901ad +torchdata==0.7.1+5e6f7b7 +torchmetrics==1.4.0 +torchtext==0.17.0+400da5c +torchvision==0.17.0+b2383d4 +tqdm-multiprocess==0.0.11 +tqdm==4.66.4 +transformers==4.41.0 +typepy==1.3.2 +typing_extensions==4.11.0 +tzdata==2024.1 
+urllib3==1.26.18 +virtualenv==20.26.1 +wandb==0.17.0 +wheel==0.37.1 +wheel==0.43.0 +word2number==1.1 +xxhash==3.4.1 +yamllint==1.35.1 +yarl==1.9.4 +zstandard==0.22.0 \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-metadata.json b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..ccc71eba1be8a39c5eca9f2e0811c86e7d0f65a2 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-metadata.json @@ -0,0 +1,850 @@ +{ + "os": "Linux-5.15.0-92-generic-x86_64-with-glibc2.35", + "python": "3.10.12", + "heartbeatAt": "2024-05-22T13:31:29.385670", + "startedAt": "2024-05-22T13:31:28.763163", + "docker": null, + "cuda": null, + "args": [ + "--model", + "hf", + "--model_args", + "pretrained=/mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120", + "--tasks", + "indiccopa-hi", + "--batch_size", + "auto", + "--wandb_args", + "project=bharatgpt,group=trial_expt_1" + ], + "state": "running", + "program": "-m lm_eval.__main__", + "codePathLocal": null, + "git": { + "remote": "https://github.com/EleutherAI/lm-evaluation-harness", + "commit": null + }, + "email": null, + "root": "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness", + "host": "peacock-evaluation-worker-0", + "username": "root", + "executable": "/usr/bin/python3", + "cpu_count": 80, + "cpu_count_logical": 160, + "cpu_freq": { + "current": 2334.3750124999997, + "min": 800.0, + "max": 3400.0 + }, + "cpu_freq_per_core": [ + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + 
"min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + 
"max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + 
}, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + 
"current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 
2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + } + ], + "disk": { + "/": { + "total": 877.6341285705566, + "used": 211.6373291015625 + } + }, + "memory": { + "total": 1007.4379997253418 + } +} diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-summary.json b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-summary.json new file mode 100644 index 0000000000000000000000000000000000000000..8bf99d152ad35c3699ec8600ecb8b169d4e35875 --- /dev/null +++ 
b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-summary.json @@ -0,0 +1 @@ +{"_wandb": {"runtime": 11}} \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..cfef8bc06442b74edabb6e1781211495fda83313 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug-internal.log @@ -0,0 +1,183 @@ +2024-05-22 13:31:28,784 INFO StreamThr :1420 [internal.py:wandb_internal():85] W&B internal server running at pid: 1420, started at: 2024-05-22 13:31:28.781986 +2024-05-22 13:31:28,790 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: status +2024-05-22 13:31:28,791 INFO WriterThread:1420 [datastore.py:open_for_write():87] open: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/run-6ivei7vq.wandb +2024-05-22 13:31:28,792 DEBUG SenderThread:1420 [sender.py:send():378] send: header +2024-05-22 13:31:28,795 DEBUG SenderThread:1420 [sender.py:send():378] send: run +2024-05-22 13:31:29,154 INFO SenderThread:1420 [dir_watcher.py:__init__():211] watching files in: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files +2024-05-22 13:31:29,155 INFO SenderThread:1420 [sender.py:_start_run_threads():1123] run started: 6ivei7vq with start time 1716384688.78314 +2024-05-22 13:31:29,163 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: check_version +2024-05-22 13:31:29,164 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: check_version +2024-05-22 13:31:29,292 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: run_start +2024-05-22 13:31:29,294 DEBUG HandlerThread:1420 [system_info.py:__init__():26] System info init +2024-05-22 13:31:29,294 DEBUG 
HandlerThread:1420 [system_info.py:__init__():41] System info init done +2024-05-22 13:31:29,294 INFO HandlerThread:1420 [system_monitor.py:start():194] Starting system monitor +2024-05-22 13:31:29,294 INFO SystemMonitor:1420 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-22 13:31:29,294 INFO HandlerThread:1420 [system_monitor.py:probe():214] Collecting system info +2024-05-22 13:31:29,301 INFO SystemMonitor:1420 [interfaces.py:start():188] Started cpu monitoring +2024-05-22 13:31:29,301 INFO SystemMonitor:1420 [interfaces.py:start():188] Started disk monitoring +2024-05-22 13:31:29,303 INFO SystemMonitor:1420 [interfaces.py:start():188] Started memory monitoring +2024-05-22 13:31:29,303 INFO SystemMonitor:1420 [interfaces.py:start():188] Started network monitoring +2024-05-22 13:31:29,385 DEBUG HandlerThread:1420 [system_info.py:probe():150] Probing system +2024-05-22 13:31:29,389 DEBUG HandlerThread:1420 [system_info.py:_probe_git():135] Probing git +2024-05-22 13:31:29,399 ERROR HandlerThread:1420 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +2024-05-22 13:31:29,399 DEBUG HandlerThread:1420 [system_info.py:_probe_git():143] Probing git done +2024-05-22 13:31:29,399 DEBUG HandlerThread:1420 [system_info.py:probe():198] Probing system done +2024-05-22 13:31:29,399 DEBUG HandlerThread:1420 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-22T13:31:29.385670', 'startedAt': '2024-05-22T13:31:28.763163', 'docker': None, 'cuda': None, 'args': ('--model', 'hf', '--model_args', 
'pretrained=/mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120', '--tasks', 'indiccopa-hi', '--batch_size', 'auto', '--wandb_args', 'project=bharatgpt,group=trial_expt_1'), 'state': 'running', 'program': '-m lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness', 'host': 'peacock-evaluation-worker-0', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 80, 'cpu_count_logical': 160, 'cpu_freq': {'current': 2334.3750124999997, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 
'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 
'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 
'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 
'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}], 'disk': {'/': {'total': 877.6341285705566, 'used': 211.6373291015625}}, 'memory': {'total': 1007.4379997253418}} +2024-05-22 13:31:29,400 INFO HandlerThread:1420 [system_monitor.py:probe():224] Finished collecting system info +2024-05-22 13:31:29,400 INFO HandlerThread:1420 [system_monitor.py:probe():227] Publishing system info +2024-05-22 13:31:29,403 INFO HandlerThread:1420 [system_monitor.py:probe():229] Finished publishing system info +2024-05-22 13:31:29,408 DEBUG SenderThread:1420 [sender.py:send():378] send: files +2024-05-22 13:31:29,408 INFO SenderThread:1420 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-22 13:31:29,590 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: python_packages +2024-05-22 13:31:29,590 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: python_packages +2024-05-22 13:31:29,591 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: stop_status +2024-05-22 13:31:29,593 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: stop_status +2024-05-22 13:31:29,663 DEBUG SenderThread:1420 [sender.py:send():378] send: telemetry +2024-05-22 13:31:30,023 INFO wandb-upload_0:1420 [upload_job.py:push():130] Uploaded file /tmp/tmp4ek37_gfwandb/081kdoei-wandb-metadata.json 
+2024-05-22 13:31:30,158 INFO Thread-12 :1420 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-metadata.json +2024-05-22 13:31:30,158 INFO Thread-12 :1420 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log +2024-05-22 13:31:30,158 INFO Thread-12 :1420 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/requirements.txt +2024-05-22 13:31:32,158 INFO Thread-12 :1420 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log +2024-05-22 13:31:34,665 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: status_report +2024-05-22 13:31:39,999 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: status_report +2024-05-22 13:31:40,172 INFO Thread-12 :1420 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log +2024-05-22 13:31:40,304 DEBUG SenderThread:1420 [sender.py:send():378] send: exit +2024-05-22 13:31:40,304 INFO SenderThread:1420 [sender.py:send_exit():585] handling exit code: 1 +2024-05-22 13:31:40,304 INFO SenderThread:1420 [sender.py:send_exit():587] handling runtime: 11 +2024-05-22 13:31:40,305 INFO SenderThread:1420 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-22 13:31:40,306 INFO SenderThread:1420 [sender.py:send_exit():593] send defer +2024-05-22 13:31:40,306 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,306 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle 
defer: 0 +2024-05-22 13:31:40,306 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:40,306 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-22 13:31:40,306 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 1 +2024-05-22 13:31:40,306 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,306 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-22 13:31:40,306 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:40,306 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-22 13:31:40,306 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 2 +2024-05-22 13:31:40,306 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,306 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-22 13:31:40,306 INFO HandlerThread:1420 [system_monitor.py:finish():203] Stopping system monitor +2024-05-22 13:31:40,307 DEBUG SystemMonitor:1420 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-22 13:31:40,307 DEBUG SystemMonitor:1420 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-22 13:31:40,307 DEBUG SystemMonitor:1420 [system_monitor.py:_start():183] Publishing last batch of metrics +2024-05-22 13:31:40,308 INFO HandlerThread:1420 [interfaces.py:finish():200] Joined cpu monitor +2024-05-22 13:31:40,308 INFO HandlerThread:1420 [interfaces.py:finish():200] Joined disk monitor +2024-05-22 13:31:40,308 INFO HandlerThread:1420 [interfaces.py:finish():200] Joined memory monitor +2024-05-22 13:31:40,308 INFO HandlerThread:1420 [interfaces.py:finish():200] Joined network monitor +2024-05-22 13:31:40,308 DEBUG SenderThread:1420 [sender.py:send_request():405] 
send_request: defer +2024-05-22 13:31:40,308 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-22 13:31:40,308 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 3 +2024-05-22 13:31:40,308 DEBUG SenderThread:1420 [sender.py:send():378] send: stats +2024-05-22 13:31:40,309 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,309 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-22 13:31:40,309 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:40,309 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-22 13:31:40,309 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 4 +2024-05-22 13:31:40,309 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,309 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-22 13:31:40,309 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:40,309 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-22 13:31:40,309 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 5 +2024-05-22 13:31:40,309 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,309 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 5 +2024-05-22 13:31:40,309 DEBUG SenderThread:1420 [sender.py:send():378] send: summary +2024-05-22 13:31:40,310 INFO SenderThread:1420 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-22 13:31:40,310 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:40,310 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-22 13:31:40,310 
INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 6 +2024-05-22 13:31:40,310 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,311 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 6 +2024-05-22 13:31:40,311 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:40,311 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-22 13:31:40,315 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: status_report +2024-05-22 13:31:40,402 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 7 +2024-05-22 13:31:40,402 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:40,402 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-22 13:31:40,402 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:40,402 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-22 13:31:41,173 INFO Thread-12 :1420 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/config.yaml +2024-05-22 13:31:41,174 INFO Thread-12 :1420 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-summary.json +2024-05-22 13:31:41,304 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 13:31:41,686 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 8 +2024-05-22 13:31:41,687 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 13:31:41,687 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:41,687 
INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-22 13:31:41,687 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:41,687 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-22 13:31:41,687 INFO SenderThread:1420 [job_builder.py:build():432] Attempting to build job artifact +2024-05-22 13:31:41,688 INFO SenderThread:1420 [job_builder.py:_get_source_type():576] no source found +2024-05-22 13:31:41,688 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 9 +2024-05-22 13:31:41,688 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:41,688 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-22 13:31:41,688 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:41,688 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-22 13:31:41,688 INFO SenderThread:1420 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-22 13:31:42,175 INFO SenderThread:1420 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log +2024-05-22 13:31:42,175 INFO SenderThread:1420 [dir_watcher.py:finish():388] scan: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files +2024-05-22 13:31:42,175 INFO SenderThread:1420 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-metadata.json wandb-metadata.json +2024-05-22 13:31:42,175 INFO SenderThread:1420 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/requirements.txt requirements.txt +2024-05-22 13:31:42,175 INFO 
SenderThread:1420 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/config.yaml config.yaml +2024-05-22 13:31:42,178 INFO SenderThread:1420 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log output.log +2024-05-22 13:31:42,180 INFO SenderThread:1420 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-summary.json wandb-summary.json +2024-05-22 13:31:42,180 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 10 +2024-05-22 13:31:42,180 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:42,180 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-22 13:31:42,183 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:42,183 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-22 13:31:42,183 INFO SenderThread:1420 [file_pusher.py:finish():169] shutting down file pusher +2024-05-22 13:31:42,304 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 13:31:42,304 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 13:31:42,430 INFO wandb-upload_0:1420 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/requirements.txt +2024-05-22 13:31:42,760 INFO wandb-upload_2:1420 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/output.log +2024-05-22 13:31:42,770 INFO wandb-upload_1:1420 [upload_job.py:push():130] Uploaded file 
/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/config.yaml +2024-05-22 13:31:42,779 INFO wandb-upload_3:1420 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/files/wandb-summary.json +2024-05-22 13:31:42,979 INFO Thread-11 (_thread_body):1420 [sender.py:transition_state():613] send defer: 11 +2024-05-22 13:31:42,979 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:42,979 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-22 13:31:42,979 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:42,980 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-22 13:31:42,980 INFO SenderThread:1420 [file_pusher.py:join():175] waiting for file pusher +2024-05-22 13:31:42,980 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 12 +2024-05-22 13:31:42,980 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:42,980 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-22 13:31:42,980 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:42,980 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-22 13:31:42,980 INFO SenderThread:1420 [file_stream.py:finish():601] file stream finish called +2024-05-22 13:31:43,061 INFO SenderThread:1420 [file_stream.py:finish():605] file stream finish is done +2024-05-22 13:31:43,061 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 13 +2024-05-22 13:31:43,061 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:43,061 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-22 
13:31:43,061 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:43,061 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-22 13:31:43,061 INFO SenderThread:1420 [sender.py:transition_state():613] send defer: 14 +2024-05-22 13:31:43,061 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: defer +2024-05-22 13:31:43,061 INFO HandlerThread:1420 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-22 13:31:43,061 DEBUG SenderThread:1420 [sender.py:send():378] send: final +2024-05-22 13:31:43,061 DEBUG SenderThread:1420 [sender.py:send():378] send: footer +2024-05-22 13:31:43,061 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: defer +2024-05-22 13:31:43,061 INFO SenderThread:1420 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-22 13:31:43,062 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 13:31:43,062 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 13:31:43,062 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: server_info +2024-05-22 13:31:43,062 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: get_summary +2024-05-22 13:31:43,062 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-22 13:31:43,062 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-22 13:31:43,063 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 13:31:43,063 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 13:31:43,063 DEBUG SenderThread:1420 [sender.py:send_request():405] send_request: server_info +2024-05-22 13:31:43,118 INFO MainThread:1420 [wandb_run.py:_footer_history_summary_info():3994] rendering history 
+2024-05-22 13:31:43,118 INFO MainThread:1420 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-22 13:31:43,118 INFO MainThread:1420 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-22 13:31:43,118 DEBUG HandlerThread:1420 [handler.py:handle_request():158] handle_request: shutdown +2024-05-22 13:31:43,118 INFO HandlerThread:1420 [handler.py:finish():882] shutting down handler +2024-05-22 13:31:44,063 INFO WriterThread:1420 [datastore.py:close():296] close: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/run-6ivei7vq.wandb +2024-05-22 13:31:44,118 INFO SenderThread:1420 [sender.py:finish():1545] shutting down sender +2024-05-22 13:31:44,118 INFO SenderThread:1420 [file_pusher.py:finish():169] shutting down file pusher +2024-05-22 13:31:44,118 INFO SenderThread:1420 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug.log b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..6e1eb5fbef2216a154252a5a05b09cd99ae61409 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-22 13:31:28,776 INFO MainThread:1265 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_setup.py:_flush():76] Configure stats pid to 1265 +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_setup.py:_flush():76] Loading settings from /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/settings +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_setup.py:_flush():76] 
Applying setup settings: {'_disable_service': False} +2024-05-22 13:31:28,777 WARNING MainThread:1265 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_init.py:_log_setup():520] Logging user logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug.log +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_init.py:_log_setup():521] Logging internal logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/logs/debug-internal.log +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_init.py:init():560] calling init triggers +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_init.py:init():610] starting backend +2024-05-22 13:31:28,777 INFO MainThread:1265 [wandb_init.py:init():614] setting up manager +2024-05-22 13:31:28,781 INFO MainThread:1265 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-22 13:31:28,782 INFO MainThread:1265 [wandb_init.py:init():622] backend started and connected +2024-05-22 13:31:28,786 INFO MainThread:1265 [wandb_init.py:init():711] updated telemetry +2024-05-22 13:31:28,794 INFO MainThread:1265 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout +2024-05-22 13:31:29,163 INFO MainThread:1265 [wandb_run.py:_on_init():2396] communicating current version +2024-05-22 13:31:29,285 INFO MainThread:1265 [wandb_run.py:_on_init():2405] got version response +2024-05-22 13:31:29,285 INFO 
MainThread:1265 [wandb_init.py:init():795] starting run threads in backend +2024-05-22 13:31:29,591 INFO MainThread:1265 [wandb_run.py:_console_start():2374] atexit reg +2024-05-22 13:31:29,591 INFO MainThread:1265 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-22 13:31:29,591 INFO MainThread:1265 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-22 13:31:29,591 INFO MainThread:1265 [wandb_run.py:_redirect():2319] Redirects installed. +2024-05-22 13:31:29,595 INFO MainThread:1265 [wandb_init.py:init():838] run started, returning control to user process +2024-05-22 13:31:44,119 WARNING MsgRouterThr:1265 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/run-6ivei7vq.wandb b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/run-6ivei7vq.wandb new file mode 100644 index 0000000000000000000000000000000000000000..3d4d965e366f595f9b08a29bc9fcc10de2d6aa66 Binary files /dev/null and b/lm-evaluation-harness/wandb/run-20240522_133128-6ivei7vq/run-6ivei7vq.wandb differ diff --git a/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..472588cbdd080fe55f4679e336feb9e75b9a94d6 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log @@ -0,0 +1,34 @@ + +2024-05-22:18:47:38,717 INFO [__main__.py:251] Verbosity set to INFO +2024-05-22:18:47:47,269 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 'hellaswag', 'mrpc', 'openbookqa', 'sst2', 'winogrande'] +2024-05-22:18:47:47,270 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-22:18:47:47,270 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': 
'/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100'} +2024-05-22:18:47:49,634 INFO [huggingface.py:164] Using device 'cuda' +Traceback (most recent call last): + File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main + return _run_code(code, main_globals, None, + File "/usr/lib/python3.10/runpy.py", line 86, in _run_code + exec(code, run_globals) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in + cli_evaluate() + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate + results = evaluator.simple_evaluate( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper + return fn(*args, **kwargs) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate + lm = lm_eval.api.registry.get_model(model).create_from_arg_string( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string + return cls(**args, **args2) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__ + self._get_config( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config + self._config = transformers.AutoConfig.from_pretrained( + File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 934, in from_pretrained + config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 632, in get_config_dict + config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 689, in _get_config_dict + 
resolved_config_file = cached_file( + File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 370, in cached_file + raise EnvironmentError( +OSError: /mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100 does not appear to have a file named config.json. Checkout 'https://huggingface.co//mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100/tree/main' for available files. \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-metadata.json b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..951c73005d5ffbe27650816f3f5e47a0b03ddd59 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-metadata.json @@ -0,0 +1,850 @@ +{ + "os": "Linux-5.15.0-92-generic-x86_64-with-glibc2.35", + "python": "3.10.12", + "heartbeatAt": "2024-05-22T18:47:38.501204", + "startedAt": "2024-05-22T18:47:37.933369", + "docker": null, + "cuda": null, + "args": [ + "--model", + "hf", + "--model_args", + "pretrained=/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100", + "--tasks", + "hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc", + "--batch_size", + "auto", + "--wandb_args", + "project=bharatgpt,group=trial_expt_2" + ], + "state": "running", + "program": "-m lm_eval.__main__", + "codePathLocal": null, + "git": { + "remote": "https://github.com/EleutherAI/lm-evaluation-harness", + "commit": null + }, + "email": null, + "root": "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness", + "host": "peacock-evaluation-worker-0", + "username": "root", + "executable": "/usr/bin/python3", + "cpu_count": 80, + "cpu_count_logical": 160, + "cpu_freq": { + "current": 2334.2887875, + "min": 800.0, + "max": 3400.0 + }, + "cpu_freq_per_core": [ + { + "current": 3400.002, + "min": 800.0, + "max": 3400.0 + }, + 
{ + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 
2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3399.997, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + 
"min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3399.02, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, 
+ "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 
+ }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + } + ], + "disk": { + "/": { + "total": 877.6341285705566, + "used": 211.64532089233398 + } + }, + "memory": { + "total": 
1007.4379997253418 + } +} diff --git a/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..f886f290ae673b052d944d0df9c950bbf6dc371a --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug-internal.log @@ -0,0 +1,182 @@ +2024-05-22 18:47:37,956 INFO StreamThr :811 [internal.py:wandb_internal():85] W&B internal server running at pid: 811, started at: 2024-05-22 18:47:37.952549 +2024-05-22 18:47:37,959 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status +2024-05-22 18:47:37,959 INFO WriterThread:811 [datastore.py:open_for_write():87] open: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/run-460fnitv.wandb +2024-05-22 18:47:37,961 DEBUG SenderThread:811 [sender.py:send():378] send: header +2024-05-22 18:47:37,965 DEBUG SenderThread:811 [sender.py:send():378] send: run +2024-05-22 18:47:38,284 INFO SenderThread:811 [dir_watcher.py:__init__():211] watching files in: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files +2024-05-22 18:47:38,284 INFO SenderThread:811 [sender.py:_start_run_threads():1123] run started: 460fnitv with start time 1716403657.952407 +2024-05-22 18:47:38,287 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: check_version +2024-05-22 18:47:38,288 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: check_version +2024-05-22 18:47:38,404 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: run_start +2024-05-22 18:47:38,406 DEBUG HandlerThread:811 [system_info.py:__init__():26] System info init +2024-05-22 18:47:38,406 DEBUG HandlerThread:811 [system_info.py:__init__():41] System info init done +2024-05-22 18:47:38,406 INFO HandlerThread:811 
[system_monitor.py:start():194] Starting system monitor +2024-05-22 18:47:38,406 INFO SystemMonitor:811 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-22 18:47:38,406 INFO HandlerThread:811 [system_monitor.py:probe():214] Collecting system info +2024-05-22 18:47:38,413 INFO SystemMonitor:811 [interfaces.py:start():188] Started cpu monitoring +2024-05-22 18:47:38,414 INFO SystemMonitor:811 [interfaces.py:start():188] Started disk monitoring +2024-05-22 18:47:38,414 INFO SystemMonitor:811 [interfaces.py:start():188] Started memory monitoring +2024-05-22 18:47:38,416 INFO SystemMonitor:811 [interfaces.py:start():188] Started network monitoring +2024-05-22 18:47:38,501 DEBUG HandlerThread:811 [system_info.py:probe():150] Probing system +2024-05-22 18:47:38,504 DEBUG HandlerThread:811 [system_info.py:_probe_git():135] Probing git +2024-05-22 18:47:38,514 ERROR HandlerThread:811 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +2024-05-22 18:47:38,514 DEBUG HandlerThread:811 [system_info.py:_probe_git():143] Probing git done +2024-05-22 18:47:38,514 DEBUG HandlerThread:811 [system_info.py:probe():198] Probing system done +2024-05-22 18:47:38,514 DEBUG HandlerThread:811 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-22T18:47:38.501204', 'startedAt': '2024-05-22T18:47:37.933369', 'docker': None, 'cuda': None, 'args': ('--model', 'hf', '--model_args', 'pretrained=/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100', '--tasks', 'hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc', 
'--batch_size', 'auto', '--wandb_args', 'project=bharatgpt,group=trial_expt_2'), 'state': 'running', 'program': '-m lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness', 'host': 'peacock-evaluation-worker-0', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 80, 'cpu_count_logical': 160, 'cpu_freq': {'current': 2334.2887875, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 3400.002, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 
3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3399.997, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 
'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3399.02, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 
800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 
800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}], 'disk': {'/': {'total': 877.6341285705566, 'used': 211.64532089233398}}, 'memory': {'total': 1007.4379997253418}} +2024-05-22 18:47:38,514 INFO HandlerThread:811 [system_monitor.py:probe():224] Finished collecting system info +2024-05-22 18:47:38,514 INFO HandlerThread:811 [system_monitor.py:probe():227] Publishing system info +2024-05-22 18:47:38,517 INFO HandlerThread:811 [system_monitor.py:probe():229] Finished publishing system info +2024-05-22 18:47:38,523 DEBUG SenderThread:811 [sender.py:send():378] send: files +2024-05-22 18:47:38,523 INFO SenderThread:811 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-22 18:47:38,710 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: python_packages +2024-05-22 18:47:38,710 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: python_packages +2024-05-22 18:47:38,712 DEBUG SenderThread:811 [sender.py:send():378] send: telemetry +2024-05-22 18:47:38,713 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: stop_status +2024-05-22 18:47:38,713 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: stop_status +2024-05-22 18:47:39,087 INFO wandb-upload_0:811 [upload_job.py:push():130] Uploaded file /tmp/tmpwqgp0y12wandb/tjvuxbl0-wandb-metadata.json +2024-05-22 18:47:39,286 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: 
/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/requirements.txt +2024-05-22 18:47:39,286 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log +2024-05-22 18:47:39,286 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-metadata.json +2024-05-22 18:47:41,286 INFO Thread-12 :811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log +2024-05-22 18:47:43,869 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status_report +2024-05-22 18:47:49,271 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status_report +2024-05-22 18:47:49,293 INFO Thread-12 :811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log +2024-05-22 18:47:49,641 DEBUG SenderThread:811 [sender.py:send():378] send: exit +2024-05-22 18:47:49,641 INFO SenderThread:811 [sender.py:send_exit():585] handling exit code: 1 +2024-05-22 18:47:49,641 INFO SenderThread:811 [sender.py:send_exit():587] handling runtime: 11 +2024-05-22 18:47:49,643 INFO SenderThread:811 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-22 18:47:49,643 INFO SenderThread:811 [sender.py:send_exit():593] send defer +2024-05-22 18:47:49,643 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,643 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 0 +2024-05-22 18:47:49,643 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 
18:47:49,643 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-22 18:47:49,643 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 1 +2024-05-22 18:47:49,643 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,643 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-22 18:47:49,643 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:49,643 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-22 18:47:49,644 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 2 +2024-05-22 18:47:49,644 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,644 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-22 18:47:49,644 INFO HandlerThread:811 [system_monitor.py:finish():203] Stopping system monitor +2024-05-22 18:47:49,644 DEBUG SystemMonitor:811 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-22 18:47:49,644 DEBUG SystemMonitor:811 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-22 18:47:49,644 DEBUG SystemMonitor:811 [system_monitor.py:_start():183] Publishing last batch of metrics +2024-05-22 18:47:49,645 INFO HandlerThread:811 [interfaces.py:finish():200] Joined cpu monitor +2024-05-22 18:47:49,645 INFO HandlerThread:811 [interfaces.py:finish():200] Joined disk monitor +2024-05-22 18:47:49,645 INFO HandlerThread:811 [interfaces.py:finish():200] Joined memory monitor +2024-05-22 18:47:49,645 INFO HandlerThread:811 [interfaces.py:finish():200] Joined network monitor +2024-05-22 18:47:49,645 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:49,645 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-22 18:47:49,645 
INFO SenderThread:811 [sender.py:transition_state():613] send defer: 3 +2024-05-22 18:47:49,645 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,645 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-22 18:47:49,645 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:49,646 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-22 18:47:49,646 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 4 +2024-05-22 18:47:49,646 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,646 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-22 18:47:49,646 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:49,646 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-22 18:47:49,646 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 5 +2024-05-22 18:47:49,646 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,646 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 5 +2024-05-22 18:47:49,646 DEBUG SenderThread:811 [sender.py:send():378] send: summary +2024-05-22 18:47:49,647 INFO SenderThread:811 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-22 18:47:49,647 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:49,647 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-22 18:47:49,647 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 6 +2024-05-22 18:47:49,647 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,647 INFO HandlerThread:811 
[handler.py:handle_request_defer():184] handle defer: 6 +2024-05-22 18:47:49,647 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:49,647 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-22 18:47:49,652 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status_report +2024-05-22 18:47:49,723 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 7 +2024-05-22 18:47:49,723 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:49,723 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-22 18:47:49,723 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:49,723 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-22 18:47:50,294 INFO Thread-12 :811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/config.yaml +2024-05-22 18:47:50,294 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-summary.json +2024-05-22 18:47:50,641 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 18:47:50,889 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 8 +2024-05-22 18:47:50,890 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 18:47:50,890 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:50,890 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-22 18:47:50,890 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:50,890 INFO SenderThread:811 
[sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-22 18:47:50,890 INFO SenderThread:811 [job_builder.py:build():432] Attempting to build job artifact +2024-05-22 18:47:50,891 INFO SenderThread:811 [job_builder.py:_get_source_type():576] no source found +2024-05-22 18:47:50,891 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 9 +2024-05-22 18:47:50,891 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:50,891 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-22 18:47:50,891 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:50,891 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-22 18:47:50,891 INFO SenderThread:811 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-22 18:47:51,296 INFO SenderThread:811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log +2024-05-22 18:47:51,296 INFO SenderThread:811 [dir_watcher.py:finish():388] scan: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files +2024-05-22 18:47:51,296 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log output.log +2024-05-22 18:47:51,296 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/requirements.txt requirements.txt +2024-05-22 18:47:51,299 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-metadata.json wandb-metadata.json +2024-05-22 18:47:51,300 INFO SenderThread:811 
[dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/config.yaml config.yaml +2024-05-22 18:47:51,301 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-summary.json wandb-summary.json +2024-05-22 18:47:51,303 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 10 +2024-05-22 18:47:51,303 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:51,303 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-22 18:47:51,306 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:51,306 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-22 18:47:51,306 INFO SenderThread:811 [file_pusher.py:finish():169] shutting down file pusher +2024-05-22 18:47:51,540 INFO wandb-upload_0:811 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/output.log +2024-05-22 18:47:51,641 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 18:47:51,642 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 18:47:51,945 INFO wandb-upload_1:811 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/requirements.txt +2024-05-22 18:47:51,946 INFO wandb-upload_2:811 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/config.yaml +2024-05-22 18:47:51,958 INFO wandb-upload_3:811 [upload_job.py:push():130] Uploaded file 
/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/files/wandb-summary.json +2024-05-22 18:47:52,159 INFO Thread-11 (_thread_body):811 [sender.py:transition_state():613] send defer: 11 +2024-05-22 18:47:52,159 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:52,159 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-22 18:47:52,159 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:52,159 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-22 18:47:52,159 INFO SenderThread:811 [file_pusher.py:join():175] waiting for file pusher +2024-05-22 18:47:52,159 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 12 +2024-05-22 18:47:52,159 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:52,160 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-22 18:47:52,160 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:52,160 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-22 18:47:52,160 INFO SenderThread:811 [file_stream.py:finish():601] file stream finish called +2024-05-22 18:47:52,234 INFO SenderThread:811 [file_stream.py:finish():605] file stream finish is done +2024-05-22 18:47:52,234 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 13 +2024-05-22 18:47:52,234 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:52,234 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-22 18:47:52,234 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:52,234 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-22 18:47:52,234 
INFO SenderThread:811 [sender.py:transition_state():613] send defer: 14 +2024-05-22 18:47:52,234 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-22 18:47:52,234 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-22 18:47:52,234 DEBUG SenderThread:811 [sender.py:send():378] send: final +2024-05-22 18:47:52,235 DEBUG SenderThread:811 [sender.py:send():378] send: footer +2024-05-22 18:47:52,235 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-22 18:47:52,235 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-22 18:47:52,235 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 18:47:52,235 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-22 18:47:52,235 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: server_info +2024-05-22 18:47:52,236 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: get_summary +2024-05-22 18:47:52,236 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-22 18:47:52,236 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-22 18:47:52,236 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 18:47:52,236 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: poll_exit +2024-05-22 18:47:52,236 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: server_info +2024-05-22 18:47:52,289 INFO MainThread:811 [wandb_run.py:_footer_history_summary_info():3994] rendering history +2024-05-22 18:47:52,289 INFO MainThread:811 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-22 18:47:52,289 INFO MainThread:811 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-22 18:47:52,289 
DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: shutdown +2024-05-22 18:47:52,289 INFO HandlerThread:811 [handler.py:finish():882] shutting down handler +2024-05-22 18:47:53,236 INFO WriterThread:811 [datastore.py:close():296] close: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/run-460fnitv.wandb +2024-05-22 18:47:53,289 INFO SenderThread:811 [sender.py:finish():1545] shutting down sender +2024-05-22 18:47:53,289 INFO SenderThread:811 [file_pusher.py:finish():169] shutting down file pusher +2024-05-22 18:47:53,289 INFO SenderThread:811 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug.log b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..dd5be3b4f5548e9b8c6f76163a53851e14219208 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-22 18:47:37,946 INFO MainThread:655 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-22 18:47:37,946 INFO MainThread:655 [wandb_setup.py:_flush():76] Configure stats pid to 655 +2024-05-22 18:47:37,946 INFO MainThread:655 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-22 18:47:37,946 INFO MainThread:655 [wandb_setup.py:_flush():76] Loading settings from /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/settings +2024-05-22 18:47:37,946 INFO MainThread:655 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-22 18:47:37,946 INFO MainThread:655 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-22 18:47:37,946 WARNING MainThread:655 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-22 18:47:37,946 INFO MainThread:655 [wandb_setup.py:_flush():76] Inferring run 
settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-22 18:47:37,947 INFO MainThread:655 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-22 18:47:37,947 INFO MainThread:655 [wandb_init.py:_log_setup():520] Logging user logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug.log +2024-05-22 18:47:37,947 INFO MainThread:655 [wandb_init.py:_log_setup():521] Logging internal logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240522_184737-460fnitv/logs/debug-internal.log +2024-05-22 18:47:37,947 INFO MainThread:655 [wandb_init.py:init():560] calling init triggers +2024-05-22 18:47:37,947 INFO MainThread:655 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-22 18:47:37,947 INFO MainThread:655 [wandb_init.py:init():610] starting backend +2024-05-22 18:47:37,947 INFO MainThread:655 [wandb_init.py:init():614] setting up manager +2024-05-22 18:47:37,951 INFO MainThread:655 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-22 18:47:37,952 INFO MainThread:655 [wandb_init.py:init():622] backend started and connected +2024-05-22 18:47:37,955 INFO MainThread:655 [wandb_init.py:init():711] updated telemetry +2024-05-22 18:47:37,964 INFO MainThread:655 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout +2024-05-22 18:47:38,287 INFO MainThread:655 [wandb_run.py:_on_init():2396] communicating current version +2024-05-22 18:47:38,398 INFO MainThread:655 [wandb_run.py:_on_init():2405] got version response +2024-05-22 18:47:38,398 INFO MainThread:655 [wandb_init.py:init():795] starting run threads in backend +2024-05-22 18:47:38,711 INFO MainThread:655 [wandb_run.py:_console_start():2374] atexit reg +2024-05-22 18:47:38,711 INFO MainThread:655 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-22 
18:47:38,711 INFO MainThread:655 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-22 18:47:38,711 INFO MainThread:655 [wandb_run.py:_redirect():2319] Redirects installed. +2024-05-22 18:47:38,714 INFO MainThread:655 [wandb_init.py:init():838] run started, returning control to user process +2024-05-22 18:47:53,290 WARNING MsgRouterThr:655 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/config.yaml b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..881bc47dd2c6d3465b1adf81c1f26984e786ec44 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/config.yaml @@ -0,0 +1,43 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.41.1 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1716437295 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 23 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.41.1 + 8: + - 5 + 13: linux-x86_64 diff --git a/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..ea50c080dae34649403c10802fc3847749069d8e --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log @@ -0,0 +1,34 @@ + +2024-05-23:04:08:16,321 INFO [__main__.py:251] Verbosity set to INFO +2024-05-23:04:08:25,689 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 'hellaswag', 'mrpc', 'openbookqa', 'sst2', 'winogrande'] +2024-05-23:04:08:25,691 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 
+2024-05-23:04:08:25,691 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100'} +2024-05-23:04:08:28,161 INFO [huggingface.py:164] Using device 'cuda' +Traceback (most recent call last): + File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main + return _run_code(code, main_globals, None, + File "/usr/lib/python3.10/runpy.py", line 86, in _run_code + exec(code, run_globals) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in + cli_evaluate() + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate + results = evaluator.simple_evaluate( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper + return fn(*args, **kwargs) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate + lm = lm_eval.api.registry.get_model(model).create_from_arg_string( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string + return cls(**args, **args2) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__ + self._get_config( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config + self._config = transformers.AutoConfig.from_pretrained( + File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 934, in from_pretrained + config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 632, in get_config_dict + config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) + File 
"/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 689, in _get_config_dict + resolved_config_file = cached_file( + File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 370, in cached_file + raise EnvironmentError( +OSError: /mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100 does not appear to have a file named config.json. Checkout 'https://huggingface.co//mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100/tree/main' for available files. \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..bf12abb5188d351e112d886676bfbc9f0a26fe56 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug-internal.log @@ -0,0 +1,182 @@ +2024-05-23 04:08:15,604 INFO StreamThr :811 [internal.py:wandb_internal():85] W&B internal server running at pid: 811, started at: 2024-05-23 04:08:15.602254 +2024-05-23 04:08:15,608 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status +2024-05-23 04:08:15,609 INFO WriterThread:811 [datastore.py:open_for_write():87] open: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/run-tp6gmuhp.wandb +2024-05-23 04:08:15,612 DEBUG SenderThread:811 [sender.py:send():378] send: header +2024-05-23 04:08:15,616 DEBUG SenderThread:811 [sender.py:send():378] send: run +2024-05-23 04:08:15,875 INFO SenderThread:811 [dir_watcher.py:__init__():211] watching files in: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files +2024-05-23 04:08:15,875 INFO SenderThread:811 [sender.py:_start_run_threads():1123] run started: tp6gmuhp with start time 1716437295.602096 +2024-05-23 04:08:15,877 DEBUG 
HandlerThread:811 [handler.py:handle_request():158] handle_request: check_version +2024-05-23 04:08:15,877 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: check_version +2024-05-23 04:08:15,993 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: run_start +2024-05-23 04:08:15,996 DEBUG HandlerThread:811 [system_info.py:__init__():26] System info init +2024-05-23 04:08:15,996 DEBUG HandlerThread:811 [system_info.py:__init__():41] System info init done +2024-05-23 04:08:15,996 INFO HandlerThread:811 [system_monitor.py:start():194] Starting system monitor +2024-05-23 04:08:15,996 INFO SystemMonitor:811 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-23 04:08:15,996 INFO HandlerThread:811 [system_monitor.py:probe():214] Collecting system info +2024-05-23 04:08:16,003 INFO SystemMonitor:811 [interfaces.py:start():188] Started cpu monitoring +2024-05-23 04:08:16,003 INFO SystemMonitor:811 [interfaces.py:start():188] Started disk monitoring +2024-05-23 04:08:16,005 INFO SystemMonitor:811 [interfaces.py:start():188] Started memory monitoring +2024-05-23 04:08:16,010 INFO SystemMonitor:811 [interfaces.py:start():188] Started network monitoring +2024-05-23 04:08:16,110 DEBUG HandlerThread:811 [system_info.py:probe():150] Probing system +2024-05-23 04:08:16,113 DEBUG HandlerThread:811 [system_info.py:_probe_git():135] Probing git +2024-05-23 04:08:16,123 ERROR HandlerThread:811 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +2024-05-23 04:08:16,123 DEBUG HandlerThread:811 [system_info.py:_probe_git():143] Probing git done +2024-05-23 04:08:16,123 DEBUG 
HandlerThread:811 [system_info.py:probe():198] Probing system done +2024-05-23 04:08:16,123 DEBUG HandlerThread:811 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-23T04:08:16.110227', 'startedAt': '2024-05-23T04:08:15.580708', 'docker': None, 'cuda': None, 'args': ('--model', 'hf', '--model_args', 'pretrained=/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step100', '--tasks', 'hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc', '--batch_size', 'auto', '--wandb_args', 'project=bharatgpt,group=trial_expt_2'), 'state': 'running', 'program': '-m lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness', 'host': 'peacock-evaluation-worker-0', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 80, 'cpu_count_logical': 160, 'cpu_freq': {'current': 2327.355975, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 
3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3368.704, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 
'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.002, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 
800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 
800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}], 'disk': {'/': {'total': 877.6341285705566, 'used': 209.5428123474121}}, 'memory': {'total': 1007.4379463195801}} +2024-05-23 04:08:16,123 INFO HandlerThread:811 [system_monitor.py:probe():224] Finished collecting system info +2024-05-23 04:08:16,123 INFO HandlerThread:811 [system_monitor.py:probe():227] Publishing system info +2024-05-23 04:08:16,127 INFO HandlerThread:811 [system_monitor.py:probe():229] Finished publishing system info +2024-05-23 04:08:16,132 DEBUG SenderThread:811 [sender.py:send():378] send: files +2024-05-23 04:08:16,132 INFO SenderThread:811 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-23 04:08:16,314 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: python_packages +2024-05-23 04:08:16,314 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: python_packages 
+2024-05-23 04:08:16,315 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: stop_status +2024-05-23 04:08:16,317 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: stop_status +2024-05-23 04:08:16,432 DEBUG SenderThread:811 [sender.py:send():378] send: telemetry +2024-05-23 04:08:16,774 INFO wandb-upload_0:811 [upload_job.py:push():130] Uploaded file /tmp/tmp801m1zg8wandb/7d24p7ri-wandb-metadata.json +2024-05-23 04:08:16,878 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log +2024-05-23 04:08:16,878 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/requirements.txt +2024-05-23 04:08:16,878 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/wandb-metadata.json +2024-05-23 04:08:18,878 INFO Thread-12 :811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log +2024-05-23 04:08:21,446 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 04:08:26,692 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 04:08:26,885 INFO Thread-12 :811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log +2024-05-23 04:08:28,180 DEBUG SenderThread:811 [sender.py:send():378] send: exit +2024-05-23 04:08:28,180 INFO SenderThread:811 [sender.py:send_exit():585] handling exit code: 1 +2024-05-23 04:08:28,180 INFO SenderThread:811 [sender.py:send_exit():587] 
handling runtime: 12 +2024-05-23 04:08:28,181 INFO SenderThread:811 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-23 04:08:28,182 INFO SenderThread:811 [sender.py:send_exit():593] send defer +2024-05-23 04:08:28,182 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,182 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 0 +2024-05-23 04:08:28,182 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,182 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-23 04:08:28,182 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 1 +2024-05-23 04:08:28,182 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,182 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-23 04:08:28,182 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,182 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-23 04:08:28,182 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 2 +2024-05-23 04:08:28,182 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,182 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-23 04:08:28,182 INFO HandlerThread:811 [system_monitor.py:finish():203] Stopping system monitor +2024-05-23 04:08:28,182 DEBUG SystemMonitor:811 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-23 04:08:28,182 DEBUG SystemMonitor:811 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-23 04:08:28,182 DEBUG SystemMonitor:811 [system_monitor.py:_start():183] Publishing last batch of metrics +2024-05-23 04:08:28,183 INFO HandlerThread:811 
[interfaces.py:finish():200] Joined cpu monitor +2024-05-23 04:08:28,183 INFO HandlerThread:811 [interfaces.py:finish():200] Joined disk monitor +2024-05-23 04:08:28,184 INFO HandlerThread:811 [interfaces.py:finish():200] Joined memory monitor +2024-05-23 04:08:28,184 INFO HandlerThread:811 [interfaces.py:finish():200] Joined network monitor +2024-05-23 04:08:28,184 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,184 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-23 04:08:28,184 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 3 +2024-05-23 04:08:28,184 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,184 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-23 04:08:28,184 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,184 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-23 04:08:28,184 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 4 +2024-05-23 04:08:28,184 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,184 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-23 04:08:28,184 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,184 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-23 04:08:28,184 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 5 +2024-05-23 04:08:28,185 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,185 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 5 +2024-05-23 04:08:28,185 DEBUG SenderThread:811 [sender.py:send():378] send: summary +2024-05-23 04:08:28,185 INFO 
SenderThread:811 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-23 04:08:28,186 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,186 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-23 04:08:28,186 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 6 +2024-05-23 04:08:28,186 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,186 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 6 +2024-05-23 04:08:28,186 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,186 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-23 04:08:28,191 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 04:08:28,326 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 7 +2024-05-23 04:08:28,326 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:28,326 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-23 04:08:28,326 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:28,326 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-23 04:08:28,887 INFO Thread-12 :811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/config.yaml +2024-05-23 04:08:28,887 INFO Thread-12 :811 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/wandb-summary.json +2024-05-23 04:08:29,180 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 04:08:30,469 
INFO SenderThread:811 [sender.py:transition_state():613] send defer: 8 +2024-05-23 04:08:30,469 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 04:08:30,469 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:30,469 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-23 04:08:30,469 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:30,469 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-23 04:08:30,470 INFO SenderThread:811 [job_builder.py:build():432] Attempting to build job artifact +2024-05-23 04:08:30,470 INFO SenderThread:811 [job_builder.py:_get_source_type():576] no source found +2024-05-23 04:08:30,470 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 9 +2024-05-23 04:08:30,470 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:30,470 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-23 04:08:30,470 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:30,470 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-23 04:08:30,470 INFO SenderThread:811 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-23 04:08:30,889 INFO SenderThread:811 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log +2024-05-23 04:08:30,889 INFO SenderThread:811 [dir_watcher.py:finish():388] scan: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files +2024-05-23 04:08:30,890 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: 
/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/requirements.txt requirements.txt +2024-05-23 04:08:30,890 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/wandb-metadata.json wandb-metadata.json +2024-05-23 04:08:30,892 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/config.yaml config.yaml +2024-05-23 04:08:30,892 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/wandb-summary.json wandb-summary.json +2024-05-23 04:08:30,893 INFO SenderThread:811 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log output.log +2024-05-23 04:08:30,893 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 10 +2024-05-23 04:08:30,893 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:30,893 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-23 04:08:30,893 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:30,893 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-23 04:08:30,893 INFO SenderThread:811 [file_pusher.py:finish():169] shutting down file pusher +2024-05-23 04:08:31,161 INFO wandb-upload_0:811 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/requirements.txt +2024-05-23 04:08:31,186 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 04:08:31,186 DEBUG SenderThread:811 [sender.py:send_request():405] 
send_request: poll_exit +2024-05-23 04:08:31,512 INFO wandb-upload_1:811 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/config.yaml +2024-05-23 04:08:31,516 INFO wandb-upload_3:811 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/output.log +2024-05-23 04:08:31,519 INFO wandb-upload_2:811 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/files/wandb-summary.json +2024-05-23 04:08:31,719 INFO Thread-11 (_thread_body):811 [sender.py:transition_state():613] send defer: 11 +2024-05-23 04:08:31,719 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:31,719 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-23 04:08:31,719 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:31,719 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-23 04:08:31,719 INFO SenderThread:811 [file_pusher.py:join():175] waiting for file pusher +2024-05-23 04:08:31,720 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 12 +2024-05-23 04:08:31,720 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:31,720 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-23 04:08:31,720 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:31,720 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-23 04:08:31,720 INFO SenderThread:811 [file_stream.py:finish():601] file stream finish called +2024-05-23 04:08:31,928 INFO SenderThread:811 [file_stream.py:finish():605] file stream finish is done +2024-05-23 04:08:31,928 INFO 
SenderThread:811 [sender.py:transition_state():613] send defer: 13 +2024-05-23 04:08:31,929 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:31,929 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-23 04:08:31,929 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:31,929 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-23 04:08:31,929 INFO SenderThread:811 [sender.py:transition_state():613] send defer: 14 +2024-05-23 04:08:31,929 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: defer +2024-05-23 04:08:31,929 INFO HandlerThread:811 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-23 04:08:31,929 DEBUG SenderThread:811 [sender.py:send():378] send: final +2024-05-23 04:08:31,929 DEBUG SenderThread:811 [sender.py:send():378] send: footer +2024-05-23 04:08:31,929 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: defer +2024-05-23 04:08:31,929 INFO SenderThread:811 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-23 04:08:31,930 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 04:08:31,930 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 04:08:31,930 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 04:08:31,931 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: server_info +2024-05-23 04:08:31,931 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: get_summary +2024-05-23 04:08:31,931 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-23 04:08:31,931 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-23 04:08:31,931 DEBUG SenderThread:811 
[sender.py:send_request():405] send_request: poll_exit +2024-05-23 04:08:31,931 DEBUG SenderThread:811 [sender.py:send_request():405] send_request: server_info +2024-05-23 04:08:31,993 INFO MainThread:811 [wandb_run.py:_footer_history_summary_info():3994] rendering history +2024-05-23 04:08:31,993 INFO MainThread:811 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-23 04:08:31,993 INFO MainThread:811 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-23 04:08:31,994 DEBUG HandlerThread:811 [handler.py:handle_request():158] handle_request: shutdown +2024-05-23 04:08:31,994 INFO HandlerThread:811 [handler.py:finish():882] shutting down handler +2024-05-23 04:08:32,931 INFO WriterThread:811 [datastore.py:close():296] close: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/run-tp6gmuhp.wandb +2024-05-23 04:08:32,993 INFO SenderThread:811 [sender.py:finish():1545] shutting down sender +2024-05-23 04:08:32,993 INFO SenderThread:811 [file_pusher.py:finish():169] shutting down file pusher +2024-05-23 04:08:32,993 INFO SenderThread:811 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug.log b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..321140678bbc0934f064c331be805ecb65d34e23 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Configure stats pid to 655 +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Loading settings from 
/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/settings +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-23 04:08:15,596 WARNING MainThread:655 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_init.py:_log_setup():520] Logging user logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug.log +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_init.py:_log_setup():521] Logging internal logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/logs/debug-internal.log +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_init.py:init():560] calling init triggers +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_init.py:init():610] starting backend +2024-05-23 04:08:15,596 INFO MainThread:655 [wandb_init.py:init():614] setting up manager +2024-05-23 04:08:15,600 INFO MainThread:655 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-23 04:08:15,601 INFO MainThread:655 [wandb_init.py:init():622] backend started and connected +2024-05-23 04:08:15,605 INFO MainThread:655 [wandb_init.py:init():711] updated telemetry +2024-05-23 04:08:15,615 INFO MainThread:655 [wandb_init.py:init():744] communicating run to backend with 
90.0 second timeout +2024-05-23 04:08:15,877 INFO MainThread:655 [wandb_run.py:_on_init():2396] communicating current version +2024-05-23 04:08:15,987 INFO MainThread:655 [wandb_run.py:_on_init():2405] got version response +2024-05-23 04:08:15,987 INFO MainThread:655 [wandb_init.py:init():795] starting run threads in backend +2024-05-23 04:08:16,315 INFO MainThread:655 [wandb_run.py:_console_start():2374] atexit reg +2024-05-23 04:08:16,315 INFO MainThread:655 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-23 04:08:16,316 INFO MainThread:655 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-23 04:08:16,316 INFO MainThread:655 [wandb_run.py:_redirect():2319] Redirects installed. +2024-05-23 04:08:16,319 INFO MainThread:655 [wandb_init.py:init():838] run started, returning control to user process +2024-05-23 04:08:32,994 WARNING MsgRouterThr:655 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/run-tp6gmuhp.wandb b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/run-tp6gmuhp.wandb new file mode 100644 index 0000000000000000000000000000000000000000..e2cf0c7f72fbb26e1625618ecbb93586ba0fbf12 Binary files /dev/null and b/lm-evaluation-harness/wandb/run-20240523_040815-tp6gmuhp/run-tp6gmuhp.wandb differ diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/config.yaml b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..93c6aeb9c3e63c6344e731b10a782d2e46579513 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/config.yaml @@ -0,0 +1,43 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.41.1 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1716467639 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 
+ - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 23 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.41.1 + 8: + - 5 + 13: linux-x86_64 diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..a9785f7bda8a6cab872c080359a73dcf447c2cbb --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log @@ -0,0 +1,34 @@ + +2024-05-23:12:34:00,179 INFO [__main__.py:251] Verbosity set to INFO +2024-05-23:12:34:08,683 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 'hellaswag', 'mrpc', 'openbookqa', 'sst2', 'winogrande'] +2024-05-23:12:34:08,684 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-23:12:34:08,685 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000'} +2024-05-23:12:34:10,986 INFO [huggingface.py:164] Using device 'cuda' +Traceback (most recent call last): + File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main + return _run_code(code, main_globals, None, + File "/usr/lib/python3.10/runpy.py", line 86, in _run_code + exec(code, run_globals) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in + cli_evaluate() + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate + results = evaluator.simple_evaluate( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper + return fn(*args, **kwargs) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate + lm = lm_eval.api.registry.get_model(model).create_from_arg_string( + 
File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string + return cls(**args, **args2) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__ + self._get_config( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config + self._config = transformers.AutoConfig.from_pretrained( + File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 934, in from_pretrained + config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 632, in get_config_dict + config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 689, in _get_config_dict + resolved_config_file = cached_file( + File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 370, in cached_file + raise EnvironmentError( +OSError: /mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000 does not appear to have a file named config.json. Checkout 'https://huggingface.co//mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000/tree/main' for available files. 
\ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/requirements.txt b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..f675c3016b5332c1acf28f436e0b60adeead9c12 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/requirements.txt @@ -0,0 +1,155 @@ +DataProperty==1.0.1 +GitPython==3.1.43 +Jinja2==3.1.4 +Markdown==3.6 +MarkupSafe==2.1.5 +Pillow-SIMD==7.0.0.post3 +PyYAML==6.0 +Werkzeug==3.0.3 +absl-py==2.1.0 +accelerate==0.30.1 +aiohttp==3.9.5 +aiosignal==1.3.1 +async-timeout==4.0.3 +attrs==23.2.0 +av==9.2.0 +cachetools==5.3.3 +certifi==2024.2.2 +cffi==1.15.1 +cfgv==3.4.0 +chardet==5.2.0 +charset-normalizer==3.3.2 +click==8.1.7 +cmake==3.29.2 +colorama==0.4.6 +datasets==2.19.1 +deepspeed==0.12.4+hpu.synapse.v1.15.1 +dill==0.3.8 +distlib==0.3.8 +docker-pycreds==0.4.0 +einops==0.8.0 +evaluate==0.4.2 +exceptiongroup==1.2.1 +expecttest==0.2.1 +filelock==3.14.0 +frozenlist==1.4.1 +fsspec==2024.3.1 +gitdb==4.0.11 +google-auth-oauthlib==0.4.6 +google-auth==2.29.0 +grpcio==1.63.0 +habana-media-loader==1.15.1.15 +habana-pyhlml==1.15.1.15 +habana-torch-dataloader==1.15.1.15 +habana-torch-plugin==1.15.1.15 +habana_gpu_migration==1.15.1.15 +habana_quantization_toolkit==1.15.1.15 +hjson==3.1.0 +huggingface-hub==0.23.1 +identify==2.5.36 +idna==3.7 +iniconfig==2.0.0 +joblib==1.4.2 +jsonlines==4.0.0 +lightning-habana==1.4.0 +lightning-utilities==0.11.2 +lightning==2.2.0.post0 +lm_eval==0.4.2 +lm_eval==0.4.2 +lm_eval==0.4.2 +lxml==5.2.2 +mbstrdecoder==1.1.3 +more-itertools==10.2.0 +mpi4py==3.1.4 +mpmath==1.3.0 +multidict==6.0.5 +multiprocess==0.70.16 +networkx==3.3 +ninja==1.11.1.1 +nltk==3.8.1 +nodeenv==1.8.0 +numexpr==2.10.0 +numpy==1.23.5 +oauthlib==3.2.2 +packaging==24.0 +pandas==2.0.1 +pathspec==0.12.1 +pathvalidate==3.2.0 +peft==0.11.1 +perfetto==0.7.0 +pillow==10.3.0 +pip==22.0.2 +pip==23.3.1 
+platformdirs==4.2.1 +pluggy==1.5.0 +portalocker==2.8.2 +pre-commit==3.3.3 +pretty-errors==1.2.25 +protobuf==3.20.3 +psutil==5.9.8 +py-cpuinfo==9.0.0 +pyarrow-hotfix==0.6 +pyarrow==16.1.0 +pyasn1==0.6.0 +pyasn1_modules==0.4.0 +pybind11==2.10.4 +pycparser==2.22 +pydantic==1.10.13 +pynvml==8.0.4 +pytablewriter==1.2.0 +pytest==8.2.0 +python-dateutil==2.9.0.post0 +pytorch-lightning==2.2.4 +pytz==2024.1 +regex==2023.5.5 +requests-oauthlib==2.0.0 +requests==2.31.0 +rouge_score==0.1.2 +rsa==4.9 +sacrebleu==2.4.2 +safetensors==0.4.3 +scikit-learn==1.5.0 +scipy==1.13.1 +sentencepiece==0.2.0 +sentry-sdk==2.3.0 +setproctitle==1.3.3 +setuptools==59.6.0 +setuptools==69.5.1 +six==1.16.0 +smmap==5.0.1 +sqlitedict==2.1.0 +symengine==0.11.0 +sympy==1.12 +tabledata==1.3.3 +tabulate==0.9.0 +tcolorpy==0.1.6 +tdqm==0.0.1 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.1 +tensorboard==2.11.2 +threadpoolctl==3.5.0 +tokenizers==0.19.1 +tomli==2.0.1 +torch==2.2.0a0+git8964477 +torch_tb_profiler==0.4.0 +torchaudio==2.2.0+08901ad +torchdata==0.7.1+5e6f7b7 +torchmetrics==1.4.0 +torchtext==0.17.0+400da5c +torchvision==0.17.0+b2383d4 +tqdm-multiprocess==0.0.11 +tqdm==4.66.4 +transformers==4.41.1 +typepy==1.3.2 +typing_extensions==4.11.0 +tzdata==2024.1 +urllib3==1.26.18 +virtualenv==20.26.1 +wandb==0.17.0 +wheel==0.37.1 +wheel==0.43.0 +word2number==1.1 +xxhash==3.4.1 +yamllint==1.35.1 +yarl==1.9.4 +zstandard==0.22.0 \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-metadata.json b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..35f36833e885c3bb0a3401e4aa92f822645f3687 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-metadata.json @@ -0,0 +1,850 @@ +{ + "os": "Linux-5.15.0-92-generic-x86_64-with-glibc2.35", + "python": "3.10.12", + "heartbeatAt": "2024-05-23T12:33:59.970258", + 
"startedAt": "2024-05-23T12:33:59.467177", + "docker": null, + "cuda": null, + "args": [ + "--model", + "hf", + "--model_args", + "pretrained=/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000", + "--tasks", + "hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc", + "--batch_size", + "auto", + "--wandb_args", + "project=bharatgpt,group=trial_expt_2" + ], + "state": "running", + "program": "-m lm_eval.__main__", + "codePathLocal": null, + "git": { + "remote": "https://github.com/EleutherAI/lm-evaluation-harness", + "commit": null + }, + "email": null, + "root": "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness", + "host": "peacock-evaluation-worker-0", + "username": "root", + "executable": "/usr/bin/python3", + "cpu_count": 80, + "cpu_count_logical": 160, + "cpu_freq": { + "current": 2326.9682000000003, + "min": 800.0, + "max": 3400.0 + }, + "cpu_freq_per_core": [ + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + 
"min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3399.997, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, 
+ "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 
+ }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + 
"current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 
2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + } + ], + "disk": { + "/": { + "total": 877.6341285705566, + "used": 209.58390045166016 + } + }, + "memory": { + "total": 1007.4379425048828 + } +} diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-summary.json b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-summary.json new file mode 100644 index 0000000000000000000000000000000000000000..8bf99d152ad35c3699ec8600ecb8b169d4e35875 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-summary.json @@ -0,0 +1 @@ +{"_wandb": {"runtime": 11}} \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..5aff6d4f9f0543383e3d537017a00cd9760b96bc --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug-internal.log @@ -0,0 +1,183 @@ +2024-05-23 12:33:59,490 INFO StreamThr :2598 [internal.py:wandb_internal():85] W&B internal server running at pid: 2598, started at: 
2024-05-23 12:33:59.487087 +2024-05-23 12:33:59,493 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: status +2024-05-23 12:33:59,494 INFO WriterThread:2598 [datastore.py:open_for_write():87] open: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/run-snmu4g0j.wandb +2024-05-23 12:33:59,500 DEBUG SenderThread:2598 [sender.py:send():378] send: header +2024-05-23 12:33:59,500 DEBUG SenderThread:2598 [sender.py:send():378] send: run +2024-05-23 12:33:59,757 INFO SenderThread:2598 [dir_watcher.py:__init__():211] watching files in: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files +2024-05-23 12:33:59,757 INFO SenderThread:2598 [sender.py:_start_run_threads():1123] run started: snmu4g0j with start time 1716467639.486946 +2024-05-23 12:33:59,761 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: check_version +2024-05-23 12:33:59,761 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: check_version +2024-05-23 12:33:59,875 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: run_start +2024-05-23 12:33:59,877 DEBUG HandlerThread:2598 [system_info.py:__init__():26] System info init +2024-05-23 12:33:59,877 DEBUG HandlerThread:2598 [system_info.py:__init__():41] System info init done +2024-05-23 12:33:59,877 INFO HandlerThread:2598 [system_monitor.py:start():194] Starting system monitor +2024-05-23 12:33:59,877 INFO SystemMonitor:2598 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-23 12:33:59,877 INFO HandlerThread:2598 [system_monitor.py:probe():214] Collecting system info +2024-05-23 12:33:59,884 INFO SystemMonitor:2598 [interfaces.py:start():188] Started cpu monitoring +2024-05-23 12:33:59,884 INFO SystemMonitor:2598 [interfaces.py:start():188] Started disk monitoring +2024-05-23 12:33:59,885 INFO SystemMonitor:2598 [interfaces.py:start():188] Started memory 
monitoring +2024-05-23 12:33:59,887 INFO SystemMonitor:2598 [interfaces.py:start():188] Started network monitoring +2024-05-23 12:33:59,970 DEBUG HandlerThread:2598 [system_info.py:probe():150] Probing system +2024-05-23 12:33:59,974 DEBUG HandlerThread:2598 [system_info.py:_probe_git():135] Probing git +2024-05-23 12:33:59,984 ERROR HandlerThread:2598 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +2024-05-23 12:33:59,984 DEBUG HandlerThread:2598 [system_info.py:_probe_git():143] Probing git done +2024-05-23 12:33:59,984 DEBUG HandlerThread:2598 [system_info.py:probe():198] Probing system done +2024-05-23 12:33:59,984 DEBUG HandlerThread:2598 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-23T12:33:59.970258', 'startedAt': '2024-05-23T12:33:59.467177', 'docker': None, 'cuda': None, 'args': ('--model', 'hf', '--model_args', 'pretrained=/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000', '--tasks', 'hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc', '--batch_size', 'auto', '--wandb_args', 'project=bharatgpt,group=trial_expt_2'), 'state': 'running', 'program': '-m lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness', 'host': 'peacock-evaluation-worker-0', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 80, 'cpu_count_logical': 160, 'cpu_freq': {'current': 2326.9682000000003, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 
3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 
3399.997, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, 
{'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, 
{'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 
3400.0}], 'disk': {'/': {'total': 877.6341285705566, 'used': 209.58390045166016}}, 'memory': {'total': 1007.4379425048828}} +2024-05-23 12:33:59,984 INFO HandlerThread:2598 [system_monitor.py:probe():224] Finished collecting system info +2024-05-23 12:33:59,984 INFO HandlerThread:2598 [system_monitor.py:probe():227] Publishing system info +2024-05-23 12:33:59,987 INFO HandlerThread:2598 [system_monitor.py:probe():229] Finished publishing system info +2024-05-23 12:33:59,992 DEBUG SenderThread:2598 [sender.py:send():378] send: files +2024-05-23 12:33:59,992 INFO SenderThread:2598 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-23 12:34:00,173 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: python_packages +2024-05-23 12:34:00,173 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: python_packages +2024-05-23 12:34:00,176 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: stop_status +2024-05-23 12:34:00,176 DEBUG SenderThread:2598 [sender.py:send():378] send: telemetry +2024-05-23 12:34:00,177 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: stop_status +2024-05-23 12:34:00,601 INFO wandb-upload_0:2598 [upload_job.py:push():130] Uploaded file /tmp/tmppn7f_d5zwandb/oeyql3cn-wandb-metadata.json +2024-05-23 12:34:00,759 INFO Thread-12 :2598 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/requirements.txt +2024-05-23 12:34:00,759 INFO Thread-12 :2598 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log +2024-05-23 12:34:00,760 INFO Thread-12 :2598 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-metadata.json +2024-05-23 
12:34:02,759 INFO Thread-12 :2598 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log +2024-05-23 12:34:05,274 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 12:34:10,685 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 12:34:10,766 INFO Thread-12 :2598 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log +2024-05-23 12:34:10,993 DEBUG SenderThread:2598 [sender.py:send():378] send: exit +2024-05-23 12:34:10,994 INFO SenderThread:2598 [sender.py:send_exit():585] handling exit code: 1 +2024-05-23 12:34:10,994 INFO SenderThread:2598 [sender.py:send_exit():587] handling runtime: 11 +2024-05-23 12:34:10,995 INFO SenderThread:2598 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-23 12:34:10,995 INFO SenderThread:2598 [sender.py:send_exit():593] send defer +2024-05-23 12:34:10,996 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:10,996 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 0 +2024-05-23 12:34:10,996 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:10,996 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-23 12:34:10,996 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 1 +2024-05-23 12:34:10,996 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:10,996 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-23 12:34:10,996 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:10,996 INFO 
SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-23 12:34:10,996 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 2 +2024-05-23 12:34:10,996 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:10,996 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-23 12:34:10,996 INFO HandlerThread:2598 [system_monitor.py:finish():203] Stopping system monitor +2024-05-23 12:34:10,996 DEBUG SystemMonitor:2598 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-23 12:34:10,997 DEBUG SystemMonitor:2598 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-23 12:34:10,997 DEBUG SystemMonitor:2598 [system_monitor.py:_start():183] Publishing last batch of metrics +2024-05-23 12:34:10,998 INFO HandlerThread:2598 [interfaces.py:finish():200] Joined cpu monitor +2024-05-23 12:34:10,998 INFO HandlerThread:2598 [interfaces.py:finish():200] Joined disk monitor +2024-05-23 12:34:10,998 INFO HandlerThread:2598 [interfaces.py:finish():200] Joined memory monitor +2024-05-23 12:34:10,998 INFO HandlerThread:2598 [interfaces.py:finish():200] Joined network monitor +2024-05-23 12:34:10,998 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:10,998 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-23 12:34:10,998 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 3 +2024-05-23 12:34:10,999 DEBUG SenderThread:2598 [sender.py:send():378] send: stats +2024-05-23 12:34:10,999 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:10,999 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-23 12:34:10,999 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:10,999 INFO SenderThread:2598 
[sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-23 12:34:10,999 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 4 +2024-05-23 12:34:10,999 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:10,999 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-23 12:34:10,999 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:10,999 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-23 12:34:10,999 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 5 +2024-05-23 12:34:10,999 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:10,999 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 5 +2024-05-23 12:34:11,000 DEBUG SenderThread:2598 [sender.py:send():378] send: summary +2024-05-23 12:34:11,000 INFO SenderThread:2598 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-23 12:34:11,001 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:11,001 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-23 12:34:11,001 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 6 +2024-05-23 12:34:11,001 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:11,001 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 6 +2024-05-23 12:34:11,001 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:11,001 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-23 12:34:11,006 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 12:34:11,070 INFO SenderThread:2598 
[sender.py:transition_state():613] send defer: 7 +2024-05-23 12:34:11,070 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:11,070 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-23 12:34:11,070 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:11,070 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-23 12:34:11,767 INFO Thread-12 :2598 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/config.yaml +2024-05-23 12:34:11,767 INFO Thread-12 :2598 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-summary.json +2024-05-23 12:34:11,993 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 12:34:12,296 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 8 +2024-05-23 12:34:12,296 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 12:34:12,297 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:12,297 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-23 12:34:12,297 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:12,297 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-23 12:34:12,297 INFO SenderThread:2598 [job_builder.py:build():432] Attempting to build job artifact +2024-05-23 12:34:12,298 INFO SenderThread:2598 [job_builder.py:_get_source_type():576] no source found +2024-05-23 12:34:12,298 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 9 +2024-05-23 12:34:12,298 DEBUG HandlerThread:2598 
[handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:12,298 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-23 12:34:12,298 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:12,298 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-23 12:34:12,298 INFO SenderThread:2598 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-23 12:34:12,769 INFO SenderThread:2598 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log +2024-05-23 12:34:12,769 INFO SenderThread:2598 [dir_watcher.py:finish():388] scan: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files +2024-05-23 12:34:12,769 INFO SenderThread:2598 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/requirements.txt requirements.txt +2024-05-23 12:34:12,769 INFO SenderThread:2598 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log output.log +2024-05-23 12:34:12,771 INFO SenderThread:2598 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-metadata.json wandb-metadata.json +2024-05-23 12:34:12,772 INFO SenderThread:2598 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/config.yaml config.yaml +2024-05-23 12:34:12,772 INFO SenderThread:2598 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-summary.json wandb-summary.json +2024-05-23 12:34:12,772 INFO 
SenderThread:2598 [sender.py:transition_state():613] send defer: 10 +2024-05-23 12:34:12,772 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:12,772 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-23 12:34:12,774 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:12,774 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-23 12:34:12,774 INFO SenderThread:2598 [file_pusher.py:finish():169] shutting down file pusher +2024-05-23 12:34:12,994 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 12:34:12,994 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 12:34:13,012 INFO wandb-upload_0:2598 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/requirements.txt +2024-05-23 12:34:13,484 INFO wandb-upload_2:2598 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/config.yaml +2024-05-23 12:34:13,489 INFO wandb-upload_1:2598 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/output.log +2024-05-23 12:34:13,659 INFO wandb-upload_3:2598 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/files/wandb-summary.json +2024-05-23 12:34:13,859 INFO Thread-11 (_thread_body):2598 [sender.py:transition_state():613] send defer: 11 +2024-05-23 12:34:13,860 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:13,860 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-23 12:34:13,860 DEBUG SenderThread:2598 
[sender.py:send_request():405] send_request: defer +2024-05-23 12:34:13,860 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-23 12:34:13,860 INFO SenderThread:2598 [file_pusher.py:join():175] waiting for file pusher +2024-05-23 12:34:13,860 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 12 +2024-05-23 12:34:13,860 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:13,860 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-23 12:34:13,860 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:13,861 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-23 12:34:13,861 INFO SenderThread:2598 [file_stream.py:finish():601] file stream finish called +2024-05-23 12:34:13,924 INFO SenderThread:2598 [file_stream.py:finish():605] file stream finish is done +2024-05-23 12:34:13,924 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 13 +2024-05-23 12:34:13,924 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:13,924 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-23 12:34:13,924 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:13,924 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-23 12:34:13,924 INFO SenderThread:2598 [sender.py:transition_state():613] send defer: 14 +2024-05-23 12:34:13,924 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: defer +2024-05-23 12:34:13,924 INFO HandlerThread:2598 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-23 12:34:13,924 DEBUG SenderThread:2598 [sender.py:send():378] send: final +2024-05-23 12:34:13,924 DEBUG SenderThread:2598 [sender.py:send():378] send: footer +2024-05-23 
12:34:13,924 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: defer +2024-05-23 12:34:13,925 INFO SenderThread:2598 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-23 12:34:13,925 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 12:34:13,925 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 12:34:13,925 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: server_info +2024-05-23 12:34:13,925 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: get_summary +2024-05-23 12:34:13,926 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-23 12:34:13,926 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-23 12:34:13,926 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 12:34:13,926 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 12:34:13,926 DEBUG SenderThread:2598 [sender.py:send_request():405] send_request: server_info +2024-05-23 12:34:13,988 INFO MainThread:2598 [wandb_run.py:_footer_history_summary_info():3994] rendering history +2024-05-23 12:34:13,988 INFO MainThread:2598 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-23 12:34:13,988 INFO MainThread:2598 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-23 12:34:13,988 DEBUG HandlerThread:2598 [handler.py:handle_request():158] handle_request: shutdown +2024-05-23 12:34:13,988 INFO HandlerThread:2598 [handler.py:finish():882] shutting down handler +2024-05-23 12:34:14,926 INFO WriterThread:2598 [datastore.py:close():296] close: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/run-snmu4g0j.wandb +2024-05-23 12:34:14,988 INFO SenderThread:2598 [sender.py:finish():1545] 
shutting down sender +2024-05-23 12:34:14,988 INFO SenderThread:2598 [file_pusher.py:finish():169] shutting down file pusher +2024-05-23 12:34:14,988 INFO SenderThread:2598 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug.log b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..708651e5856141b408b457fedd3197be8c579bf0 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Configure stats pid to 2443 +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Loading settings from /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/settings +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-23 12:33:59,482 WARNING MainThread:2443 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_init.py:_log_setup():520] Logging user logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug.log +2024-05-23 12:33:59,482 
INFO MainThread:2443 [wandb_init.py:_log_setup():521] Logging internal logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/logs/debug-internal.log +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_init.py:init():560] calling init triggers +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_init.py:init():610] starting backend +2024-05-23 12:33:59,482 INFO MainThread:2443 [wandb_init.py:init():614] setting up manager +2024-05-23 12:33:59,485 INFO MainThread:2443 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-23 12:33:59,486 INFO MainThread:2443 [wandb_init.py:init():622] backend started and connected +2024-05-23 12:33:59,490 INFO MainThread:2443 [wandb_init.py:init():711] updated telemetry +2024-05-23 12:33:59,498 INFO MainThread:2443 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout +2024-05-23 12:33:59,761 INFO MainThread:2443 [wandb_run.py:_on_init():2396] communicating current version +2024-05-23 12:33:59,868 INFO MainThread:2443 [wandb_run.py:_on_init():2405] got version response +2024-05-23 12:33:59,869 INFO MainThread:2443 [wandb_init.py:init():795] starting run threads in backend +2024-05-23 12:34:00,174 INFO MainThread:2443 [wandb_run.py:_console_start():2374] atexit reg +2024-05-23 12:34:00,174 INFO MainThread:2443 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-23 12:34:00,174 INFO MainThread:2443 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-23 12:34:00,174 INFO MainThread:2443 [wandb_run.py:_redirect():2319] Redirects installed. 
+2024-05-23 12:34:00,176 INFO MainThread:2443 [wandb_init.py:init():838] run started, returning control to user process +2024-05-23 12:34:14,989 WARNING MsgRouterThr:2443 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/run-snmu4g0j.wandb b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/run-snmu4g0j.wandb new file mode 100644 index 0000000000000000000000000000000000000000..c7e7c88ceff06e31e51464968c24daf0732f1c77 Binary files /dev/null and b/lm-evaluation-harness/wandb/run-20240523_123359-snmu4g0j/run-snmu4g0j.wandb differ diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/config.yaml b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f36d93c851581aecaf6003a0f416fe9ab1036d7b --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/config.yaml @@ -0,0 +1,43 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.41.1 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1716469651 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 23 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.41.1 + 8: + - 5 + 13: linux-x86_64 diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..eddbf55ffe512de410d9f53eda606eb1ae68021d --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log @@ -0,0 +1,34 @@ + +2024-05-23:13:07:32,685 INFO [__main__.py:251] Verbosity set to INFO +2024-05-23:13:07:41,147 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 
'hellaswag', 'mrpc', 'openbookqa', 'sst2', 'winogrande'] +2024-05-23:13:07:41,148 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-23:13:07:41,148 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000'} +2024-05-23:13:07:43,436 INFO [huggingface.py:164] Using device 'cuda' +Traceback (most recent call last): + File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main + return _run_code(code, main_globals, None, + File "/usr/lib/python3.10/runpy.py", line 86, in _run_code + exec(code, run_globals) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in + cli_evaluate() + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate + results = evaluator.simple_evaluate( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper + return fn(*args, **kwargs) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate + lm = lm_eval.api.registry.get_model(model).create_from_arg_string( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string + return cls(**args, **args2) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__ + self._get_config( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config + self._config = transformers.AutoConfig.from_pretrained( + File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 934, in from_pretrained + config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) + File 
"/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 632, in get_config_dict + config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 689, in _get_config_dict + resolved_config_file = cached_file( + File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 370, in cached_file + raise EnvironmentError( +OSError: /mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000 does not appear to have a file named config.json. Checkout 'https://huggingface.co//mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000/tree/main' for available files. \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/requirements.txt b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..f675c3016b5332c1acf28f436e0b60adeead9c12 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/requirements.txt @@ -0,0 +1,155 @@ +DataProperty==1.0.1 +GitPython==3.1.43 +Jinja2==3.1.4 +Markdown==3.6 +MarkupSafe==2.1.5 +Pillow-SIMD==7.0.0.post3 +PyYAML==6.0 +Werkzeug==3.0.3 +absl-py==2.1.0 +accelerate==0.30.1 +aiohttp==3.9.5 +aiosignal==1.3.1 +async-timeout==4.0.3 +attrs==23.2.0 +av==9.2.0 +cachetools==5.3.3 +certifi==2024.2.2 +cffi==1.15.1 +cfgv==3.4.0 +chardet==5.2.0 +charset-normalizer==3.3.2 +click==8.1.7 +cmake==3.29.2 +colorama==0.4.6 +datasets==2.19.1 +deepspeed==0.12.4+hpu.synapse.v1.15.1 +dill==0.3.8 +distlib==0.3.8 +docker-pycreds==0.4.0 +einops==0.8.0 +evaluate==0.4.2 +exceptiongroup==1.2.1 +expecttest==0.2.1 +filelock==3.14.0 +frozenlist==1.4.1 +fsspec==2024.3.1 +gitdb==4.0.11 +google-auth-oauthlib==0.4.6 +google-auth==2.29.0 +grpcio==1.63.0 +habana-media-loader==1.15.1.15 +habana-pyhlml==1.15.1.15 
+habana-torch-dataloader==1.15.1.15 +habana-torch-plugin==1.15.1.15 +habana_gpu_migration==1.15.1.15 +habana_quantization_toolkit==1.15.1.15 +hjson==3.1.0 +huggingface-hub==0.23.1 +identify==2.5.36 +idna==3.7 +iniconfig==2.0.0 +joblib==1.4.2 +jsonlines==4.0.0 +lightning-habana==1.4.0 +lightning-utilities==0.11.2 +lightning==2.2.0.post0 +lm_eval==0.4.2 +lm_eval==0.4.2 +lm_eval==0.4.2 +lxml==5.2.2 +mbstrdecoder==1.1.3 +more-itertools==10.2.0 +mpi4py==3.1.4 +mpmath==1.3.0 +multidict==6.0.5 +multiprocess==0.70.16 +networkx==3.3 +ninja==1.11.1.1 +nltk==3.8.1 +nodeenv==1.8.0 +numexpr==2.10.0 +numpy==1.23.5 +oauthlib==3.2.2 +packaging==24.0 +pandas==2.0.1 +pathspec==0.12.1 +pathvalidate==3.2.0 +peft==0.11.1 +perfetto==0.7.0 +pillow==10.3.0 +pip==22.0.2 +pip==23.3.1 +platformdirs==4.2.1 +pluggy==1.5.0 +portalocker==2.8.2 +pre-commit==3.3.3 +pretty-errors==1.2.25 +protobuf==3.20.3 +psutil==5.9.8 +py-cpuinfo==9.0.0 +pyarrow-hotfix==0.6 +pyarrow==16.1.0 +pyasn1==0.6.0 +pyasn1_modules==0.4.0 +pybind11==2.10.4 +pycparser==2.22 +pydantic==1.10.13 +pynvml==8.0.4 +pytablewriter==1.2.0 +pytest==8.2.0 +python-dateutil==2.9.0.post0 +pytorch-lightning==2.2.4 +pytz==2024.1 +regex==2023.5.5 +requests-oauthlib==2.0.0 +requests==2.31.0 +rouge_score==0.1.2 +rsa==4.9 +sacrebleu==2.4.2 +safetensors==0.4.3 +scikit-learn==1.5.0 +scipy==1.13.1 +sentencepiece==0.2.0 +sentry-sdk==2.3.0 +setproctitle==1.3.3 +setuptools==59.6.0 +setuptools==69.5.1 +six==1.16.0 +smmap==5.0.1 +sqlitedict==2.1.0 +symengine==0.11.0 +sympy==1.12 +tabledata==1.3.3 +tabulate==0.9.0 +tcolorpy==0.1.6 +tdqm==0.0.1 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.1 +tensorboard==2.11.2 +threadpoolctl==3.5.0 +tokenizers==0.19.1 +tomli==2.0.1 +torch==2.2.0a0+git8964477 +torch_tb_profiler==0.4.0 +torchaudio==2.2.0+08901ad +torchdata==0.7.1+5e6f7b7 +torchmetrics==1.4.0 +torchtext==0.17.0+400da5c +torchvision==0.17.0+b2383d4 +tqdm-multiprocess==0.0.11 +tqdm==4.66.4 +transformers==4.41.1 +typepy==1.3.2 
+typing_extensions==4.11.0 +tzdata==2024.1 +urllib3==1.26.18 +virtualenv==20.26.1 +wandb==0.17.0 +wheel==0.37.1 +wheel==0.43.0 +word2number==1.1 +xxhash==3.4.1 +yamllint==1.35.1 +yarl==1.9.4 +zstandard==0.22.0 \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-metadata.json b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..eb7867c320d885de6c631c9f4927a74a096731d0 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-metadata.json @@ -0,0 +1,850 @@ +{ + "os": "Linux-5.15.0-92-generic-x86_64-with-glibc2.35", + "python": "3.10.12", + "heartbeatAt": "2024-05-23T13:07:32.473993", + "startedAt": "2024-05-23T13:07:31.939616", + "docker": null, + "cuda": null, + "args": [ + "--model", + "hf", + "--model_args", + "pretrained=/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000", + "--tasks", + "hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc", + "--batch_size", + "auto", + "--wandb_args", + "project=bharatgpt,group=trial_expt_2" + ], + "state": "running", + "program": "-m lm_eval.__main__", + "codePathLocal": null, + "git": { + "remote": "https://github.com/EleutherAI/lm-evaluation-harness", + "commit": null + }, + "email": null, + "root": "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness", + "host": "peacock-evaluation-worker-0", + "username": "root", + "executable": "/usr/bin/python3", + "cpu_count": 80, + "cpu_count_logical": 160, + "cpu_freq": { + "current": 2359.009925, + "min": 800.0, + "max": 3400.0 + }, + "cpu_freq_per_core": [ + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, 
+ { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + 
"current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 
2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3399.997, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + 
"min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + 
"max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + } + ], + "disk": { + "/": { + "total": 877.6341285705566, + "used": 211.62949752807617 + } + }, + "memory": { + "total": 1007.4379539489746 + } +} diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-summary.json b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-summary.json new file mode 100644 index 
0000000000000000000000000000000000000000..8bf99d152ad35c3699ec8600ecb8b169d4e35875 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-summary.json @@ -0,0 +1 @@ +{"_wandb": {"runtime": 11}} \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug-internal.log new file mode 100644 index 0000000000000000000000000000000000000000..f92911383c5163a2ee3d2d67fdaa7b2753f2c232 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug-internal.log @@ -0,0 +1,183 @@ +2024-05-23 13:07:31,962 INFO StreamThr :2614 [internal.py:wandb_internal():85] W&B internal server running at pid: 2614, started at: 2024-05-23 13:07:31.960053 +2024-05-23 13:07:31,967 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: status +2024-05-23 13:07:31,967 INFO WriterThread:2614 [datastore.py:open_for_write():87] open: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/run-rkhtref1.wandb +2024-05-23 13:07:31,971 DEBUG SenderThread:2614 [sender.py:send():378] send: header +2024-05-23 13:07:31,972 DEBUG SenderThread:2614 [sender.py:send():378] send: run +2024-05-23 13:07:32,274 INFO SenderThread:2614 [dir_watcher.py:__init__():211] watching files in: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files +2024-05-23 13:07:32,274 INFO SenderThread:2614 [sender.py:_start_run_threads():1123] run started: rkhtref1 with start time 1716469651.960121 +2024-05-23 13:07:32,275 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: check_version +2024-05-23 13:07:32,275 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: check_version +2024-05-23 13:07:32,396 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: run_start +2024-05-23 13:07:32,398 DEBUG 
HandlerThread:2614 [system_info.py:__init__():26] System info init +2024-05-23 13:07:32,398 DEBUG HandlerThread:2614 [system_info.py:__init__():41] System info init done +2024-05-23 13:07:32,398 INFO HandlerThread:2614 [system_monitor.py:start():194] Starting system monitor +2024-05-23 13:07:32,398 INFO SystemMonitor:2614 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-23 13:07:32,398 INFO HandlerThread:2614 [system_monitor.py:probe():214] Collecting system info +2024-05-23 13:07:32,406 INFO SystemMonitor:2614 [interfaces.py:start():188] Started cpu monitoring +2024-05-23 13:07:32,406 INFO SystemMonitor:2614 [interfaces.py:start():188] Started disk monitoring +2024-05-23 13:07:32,412 INFO SystemMonitor:2614 [interfaces.py:start():188] Started memory monitoring +2024-05-23 13:07:32,412 INFO SystemMonitor:2614 [interfaces.py:start():188] Started network monitoring +2024-05-23 13:07:32,473 DEBUG HandlerThread:2614 [system_info.py:probe():150] Probing system +2024-05-23 13:07:32,477 DEBUG HandlerThread:2614 [system_info.py:_probe_git():135] Probing git +2024-05-23 13:07:32,487 ERROR HandlerThread:2614 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +2024-05-23 13:07:32,487 DEBUG HandlerThread:2614 [system_info.py:_probe_git():143] Probing git done +2024-05-23 13:07:32,487 DEBUG HandlerThread:2614 [system_info.py:probe():198] Probing system done +2024-05-23 13:07:32,487 DEBUG HandlerThread:2614 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-23T13:07:32.473993', 'startedAt': '2024-05-23T13:07:31.939616', 'docker': None, 
'cuda': None, 'args': ('--model', 'hf', '--model_args', 'pretrained=/mnt/weka/peacock/experiments/llama/checkpoint/llamav2-3b//hf_ckpt//global_step20000', '--tasks', 'hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc', '--batch_size', 'auto', '--wandb_args', 'project=bharatgpt,group=trial_expt_2'), 'state': 'running', 'program': '-m lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness', 'host': 'peacock-evaluation-worker-0', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 80, 'cpu_count_logical': 160, 'cpu_freq': {'current': 2359.009925, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 
'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 
'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3399.997, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 
2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 
2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}], 'disk': {'/': {'total': 877.6341285705566, 'used': 211.62949752807617}}, 'memory': {'total': 1007.4379539489746}} +2024-05-23 13:07:32,487 INFO HandlerThread:2614 [system_monitor.py:probe():224] Finished collecting system info +2024-05-23 13:07:32,487 INFO HandlerThread:2614 [system_monitor.py:probe():227] Publishing system info +2024-05-23 13:07:32,490 INFO HandlerThread:2614 [system_monitor.py:probe():229] Finished publishing system info +2024-05-23 13:07:32,495 DEBUG SenderThread:2614 [sender.py:send():378] send: files +2024-05-23 13:07:32,495 INFO SenderThread:2614 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-23 13:07:32,677 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: python_packages +2024-05-23 13:07:32,678 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: python_packages +2024-05-23 13:07:32,679 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: stop_status +2024-05-23 13:07:32,681 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: stop_status +2024-05-23 13:07:32,781 DEBUG SenderThread:2614 [sender.py:send():378] send: telemetry +2024-05-23 13:07:33,102 
INFO wandb-upload_0:2614 [upload_job.py:push():130] Uploaded file /tmp/tmpxrfg6vm9wandb/gq06j3hi-wandb-metadata.json +2024-05-23 13:07:33,275 INFO Thread-12 :2614 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-metadata.json +2024-05-23 13:07:33,276 INFO Thread-12 :2614 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log +2024-05-23 13:07:33,276 INFO Thread-12 :2614 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/requirements.txt +2024-05-23 13:07:35,275 INFO Thread-12 :2614 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log +2024-05-23 13:07:37,784 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 13:07:43,149 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 13:07:43,288 INFO Thread-12 :2614 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log +2024-05-23 13:07:43,451 DEBUG SenderThread:2614 [sender.py:send():378] send: exit +2024-05-23 13:07:43,451 INFO SenderThread:2614 [sender.py:send_exit():585] handling exit code: 1 +2024-05-23 13:07:43,451 INFO SenderThread:2614 [sender.py:send_exit():587] handling runtime: 11 +2024-05-23 13:07:43,453 INFO SenderThread:2614 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-23 13:07:43,453 INFO SenderThread:2614 [sender.py:send_exit():593] send defer +2024-05-23 13:07:43,453 DEBUG HandlerThread:2614 [handler.py:handle_request():158] 
handle_request: defer +2024-05-23 13:07:43,453 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 0 +2024-05-23 13:07:43,453 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:43,453 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-23 13:07:43,453 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 1 +2024-05-23 13:07:43,453 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:43,453 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-23 13:07:43,453 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:43,454 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-23 13:07:43,454 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 2 +2024-05-23 13:07:43,454 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:43,454 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-23 13:07:43,454 INFO HandlerThread:2614 [system_monitor.py:finish():203] Stopping system monitor +2024-05-23 13:07:43,454 DEBUG SystemMonitor:2614 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-23 13:07:43,454 DEBUG SystemMonitor:2614 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-23 13:07:43,454 DEBUG SystemMonitor:2614 [system_monitor.py:_start():183] Publishing last batch of metrics +2024-05-23 13:07:43,457 INFO HandlerThread:2614 [interfaces.py:finish():200] Joined cpu monitor +2024-05-23 13:07:43,457 INFO HandlerThread:2614 [interfaces.py:finish():200] Joined disk monitor +2024-05-23 13:07:43,457 INFO HandlerThread:2614 [interfaces.py:finish():200] Joined memory monitor +2024-05-23 13:07:43,457 INFO HandlerThread:2614 
[interfaces.py:finish():200] Joined network monitor +2024-05-23 13:07:43,458 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:43,458 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-23 13:07:43,458 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 3 +2024-05-23 13:07:43,458 DEBUG SenderThread:2614 [sender.py:send():378] send: stats +2024-05-23 13:07:43,459 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:43,459 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-23 13:07:43,459 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:43,459 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-23 13:07:43,459 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 4 +2024-05-23 13:07:43,459 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:43,459 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-23 13:07:43,459 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:43,459 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-23 13:07:43,459 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 5 +2024-05-23 13:07:43,459 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:43,459 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 5 +2024-05-23 13:07:43,460 DEBUG SenderThread:2614 [sender.py:send():378] send: summary +2024-05-23 13:07:43,460 INFO SenderThread:2614 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-23 13:07:43,461 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer 
+2024-05-23 13:07:43,461 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-23 13:07:43,461 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 6 +2024-05-23 13:07:43,461 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:43,461 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 6 +2024-05-23 13:07:43,461 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:43,461 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-23 13:07:43,466 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: status_report +2024-05-23 13:07:43,548 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 7 +2024-05-23 13:07:43,548 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:43,548 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-23 13:07:43,548 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:43,548 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-23 13:07:44,289 INFO Thread-12 :2614 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/config.yaml +2024-05-23 13:07:44,289 INFO Thread-12 :2614 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-summary.json +2024-05-23 13:07:44,451 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 13:07:44,807 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 8 +2024-05-23 13:07:44,807 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: poll_exit 
+2024-05-23 13:07:44,807 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:44,807 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-23 13:07:44,807 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:44,807 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-23 13:07:44,807 INFO SenderThread:2614 [job_builder.py:build():432] Attempting to build job artifact +2024-05-23 13:07:44,808 INFO SenderThread:2614 [job_builder.py:_get_source_type():576] no source found +2024-05-23 13:07:44,808 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 9 +2024-05-23 13:07:44,808 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:44,808 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-23 13:07:44,808 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:44,808 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-23 13:07:44,808 INFO SenderThread:2614 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-23 13:07:45,290 INFO SenderThread:2614 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log +2024-05-23 13:07:45,291 INFO SenderThread:2614 [dir_watcher.py:finish():388] scan: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files +2024-05-23 13:07:45,291 INFO SenderThread:2614 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-summary.json wandb-summary.json +2024-05-23 13:07:45,291 INFO SenderThread:2614 [dir_watcher.py:finish():402] scan save: 
/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-metadata.json wandb-metadata.json +2024-05-23 13:07:45,293 INFO SenderThread:2614 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log output.log +2024-05-23 13:07:45,293 INFO SenderThread:2614 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/requirements.txt requirements.txt +2024-05-23 13:07:45,294 INFO SenderThread:2614 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/config.yaml config.yaml +2024-05-23 13:07:45,294 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 10 +2024-05-23 13:07:45,294 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:45,294 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-23 13:07:45,294 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:45,294 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-23 13:07:45,294 INFO SenderThread:2614 [file_pusher.py:finish():169] shutting down file pusher +2024-05-23 13:07:45,451 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 13:07:45,452 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 13:07:45,543 INFO wandb-upload_0:2614 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/wandb-summary.json +2024-05-23 13:07:45,756 INFO wandb-upload_1:2614 [upload_job.py:push():130] Uploaded file 
/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/output.log +2024-05-23 13:07:46,076 INFO wandb-upload_3:2614 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/config.yaml +2024-05-23 13:07:46,083 INFO wandb-upload_2:2614 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/files/requirements.txt +2024-05-23 13:07:46,283 INFO Thread-11 (_thread_body):2614 [sender.py:transition_state():613] send defer: 11 +2024-05-23 13:07:46,283 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:46,283 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-23 13:07:46,283 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:46,284 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-23 13:07:46,284 INFO SenderThread:2614 [file_pusher.py:join():175] waiting for file pusher +2024-05-23 13:07:46,284 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 12 +2024-05-23 13:07:46,284 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:46,284 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-23 13:07:46,284 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:46,284 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-23 13:07:46,284 INFO SenderThread:2614 [file_stream.py:finish():601] file stream finish called +2024-05-23 13:07:46,348 INFO SenderThread:2614 [file_stream.py:finish():605] file stream finish is done +2024-05-23 13:07:46,348 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 13 +2024-05-23 13:07:46,349 DEBUG 
HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:46,349 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-23 13:07:46,349 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:46,349 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-23 13:07:46,349 INFO SenderThread:2614 [sender.py:transition_state():613] send defer: 14 +2024-05-23 13:07:46,349 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: defer +2024-05-23 13:07:46,349 INFO HandlerThread:2614 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-23 13:07:46,349 DEBUG SenderThread:2614 [sender.py:send():378] send: final +2024-05-23 13:07:46,349 DEBUG SenderThread:2614 [sender.py:send():378] send: footer +2024-05-23 13:07:46,349 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: defer +2024-05-23 13:07:46,349 INFO SenderThread:2614 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-23 13:07:46,350 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 13:07:46,350 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 13:07:46,350 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-23 13:07:46,350 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: server_info +2024-05-23 13:07:46,350 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: get_summary +2024-05-23 13:07:46,351 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-23 13:07:46,351 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-23 13:07:46,351 DEBUG SenderThread:2614 [sender.py:send_request():405] send_request: poll_exit +2024-05-23 13:07:46,351 DEBUG 
SenderThread:2614 [sender.py:send_request():405] send_request: server_info +2024-05-23 13:07:46,416 INFO MainThread:2614 [wandb_run.py:_footer_history_summary_info():3994] rendering history +2024-05-23 13:07:46,416 INFO MainThread:2614 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-23 13:07:46,417 INFO MainThread:2614 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-23 13:07:46,419 DEBUG HandlerThread:2614 [handler.py:handle_request():158] handle_request: shutdown +2024-05-23 13:07:46,419 INFO HandlerThread:2614 [handler.py:finish():882] shutting down handler +2024-05-23 13:07:47,351 INFO WriterThread:2614 [datastore.py:close():296] close: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/run-rkhtref1.wandb +2024-05-23 13:07:47,416 INFO SenderThread:2614 [sender.py:finish():1545] shutting down sender +2024-05-23 13:07:47,416 INFO SenderThread:2614 [file_pusher.py:finish():169] shutting down file pusher +2024-05-23 13:07:47,417 INFO SenderThread:2614 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug.log b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..0030836eb00e576fe267f416133c0f5c28cfa39e --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Configure stats pid to 2459 +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Loading settings from /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/settings +2024-05-23 
13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-23 13:07:31,955 WARNING MainThread:2459 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_init.py:_log_setup():520] Logging user logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug.log +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_init.py:_log_setup():521] Logging internal logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/logs/debug-internal.log +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_init.py:init():560] calling init triggers +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-23 13:07:31,955 INFO MainThread:2459 [wandb_init.py:init():610] starting backend +2024-05-23 13:07:31,956 INFO MainThread:2459 [wandb_init.py:init():614] setting up manager +2024-05-23 13:07:31,959 INFO MainThread:2459 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-23 13:07:31,959 INFO MainThread:2459 [wandb_init.py:init():622] backend started and connected +2024-05-23 13:07:31,963 INFO MainThread:2459 [wandb_init.py:init():711] updated telemetry +2024-05-23 13:07:31,971 INFO MainThread:2459 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout +2024-05-23 13:07:32,274 INFO MainThread:2459 
[wandb_run.py:_on_init():2396] communicating current version +2024-05-23 13:07:32,388 INFO MainThread:2459 [wandb_run.py:_on_init():2405] got version response +2024-05-23 13:07:32,389 INFO MainThread:2459 [wandb_init.py:init():795] starting run threads in backend +2024-05-23 13:07:32,678 INFO MainThread:2459 [wandb_run.py:_console_start():2374] atexit reg +2024-05-23 13:07:32,679 INFO MainThread:2459 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-23 13:07:32,679 INFO MainThread:2459 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-23 13:07:32,679 INFO MainThread:2459 [wandb_run.py:_redirect():2319] Redirects installed. +2024-05-23 13:07:32,683 INFO MainThread:2459 [wandb_init.py:init():838] run started, returning control to user process +2024-05-23 13:07:47,420 WARNING MsgRouterThr:2459 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/run-rkhtref1.wandb b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/run-rkhtref1.wandb new file mode 100644 index 0000000000000000000000000000000000000000..a388588b6b32ff376f0a8158f2cd44c8c2adf5f6 Binary files /dev/null and b/lm-evaluation-harness/wandb/run-20240523_130731-rkhtref1/run-rkhtref1.wandb differ diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/config.yaml b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8eddde0121f54aa57b9b106ca7f563b43f43bfaa --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/config.yaml @@ -0,0 +1,44 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.36.2 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1717056945 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 
49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 13 + - 23 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.36.2 + 8: + - 5 + 13: linux-x86_64 diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..f9655c263d36e65088599730763e88b7e1712c67 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log @@ -0,0 +1,32 @@ + +2024-05-30:08:15:46,147 INFO [__main__.py:251] Verbosity set to INFO +2024-05-30:08:15:55,645 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 'boolq', 'copa', 'mrpc', 'piqa', 'sst2', 'winogrande'] +2024-05-30:08:15:55,646 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-30:08:15:55,647 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000', 'tokenizer': '/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000'} +2024-05-30:08:15:57,942 INFO [huggingface.py:164] Using device 'cuda' +/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/torch/cuda/memory.py:36: UserWarning: No need to call empty_cache on HPU. It manages the memory internally in an effcient way. 
+ warnings.warn( +Traceback (most recent call last): + File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main + return _run_code(code, main_globals, None, + File "/usr/lib/python3.10/runpy.py", line 86, in _run_code + exec(code, run_globals) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in + cli_evaluate() + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate + results = evaluator.simple_evaluate( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper + return fn(*args, **kwargs) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate + lm = lm_eval.api.registry.get_model(model).create_from_arg_string( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string + return cls(**args, **args2) + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 237, in __init__ + self._create_tokenizer( + File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 630, in _create_tokenizer + self.tokenizer = transformers.AutoTokenizer.from_pretrained( + File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/tokenization_auto.py", line 805, in from_pretrained + return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) + File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 2012, in from_pretrained + raise EnvironmentError( +OSError: Can't load tokenizer for '/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. 
Otherwise, make sure '/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer. \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/requirements.txt b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..7391d0fb5302364497bd6017486c36f0caae0613 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/requirements.txt @@ -0,0 +1,154 @@ +DataProperty==1.0.1 +GitPython==3.1.43 +Jinja2==3.1.4 +Markdown==3.6 +MarkupSafe==2.1.5 +Pillow-SIMD==7.0.0.post3 +PyYAML==6.0 +Werkzeug==3.0.3 +absl-py==2.1.0 +accelerate==0.30.1 +aiohttp==3.9.5 +aiosignal==1.3.1 +async-timeout==4.0.3 +attrs==23.2.0 +av==9.2.0 +cachetools==5.3.3 +certifi==2024.2.2 +cffi==1.15.1 +cfgv==3.4.0 +chardet==5.2.0 +charset-normalizer==3.3.2 +click==8.1.7 +cmake==3.29.2 +colorama==0.4.6 +datasets==2.19.1 +deepspeed==0.12.4+hpu.synapse.v1.15.1 +dill==0.3.8 +distlib==0.3.8 +docker-pycreds==0.4.0 +einops==0.8.0 +evaluate==0.4.2 +exceptiongroup==1.2.1 +expecttest==0.2.1 +filelock==3.14.0 +frozenlist==1.4.1 +fsspec==2024.3.1 +gitdb==4.0.11 +google-auth-oauthlib==0.4.6 +google-auth==2.29.0 +grpcio==1.63.0 +habana-media-loader==1.15.1.15 +habana-pyhlml==1.15.1.15 +habana-torch-dataloader==1.15.1.15 +habana-torch-plugin==1.15.1.15 +habana_gpu_migration==1.15.1.15 +habana_quantization_toolkit==1.15.1.15 +hjson==3.1.0 +huggingface-hub==0.23.2 +identify==2.5.36 +idna==3.7 +iniconfig==2.0.0 +joblib==1.4.2 +jsonlines==4.0.0 +lightning-habana==1.4.0 +lightning-utilities==0.11.2 +lightning==2.2.0.post0 +lm_eval==0.4.2 +lm_eval==0.4.2 +lm_eval==0.4.2 +lxml==5.2.2 +mbstrdecoder==1.1.3 +more-itertools==10.2.0 +mpi4py==3.1.4 +mpmath==1.3.0 +multidict==6.0.5 +multiprocess==0.70.16 +networkx==3.3 +ninja==1.11.1.1 +nltk==3.8.1 
+nodeenv==1.8.0 +numexpr==2.10.0 +numpy==1.23.5 +oauthlib==3.2.2 +packaging==24.0 +pandas==2.0.1 +pathspec==0.12.1 +pathvalidate==3.2.0 +peft==0.11.1 +perfetto==0.7.0 +pip==22.0.2 +pip==23.3.1 +platformdirs==4.2.1 +pluggy==1.5.0 +portalocker==2.8.2 +pre-commit==3.3.3 +pretty-errors==1.2.25 +protobuf==3.20.3 +psutil==5.9.8 +py-cpuinfo==9.0.0 +pyarrow-hotfix==0.6 +pyarrow==16.1.0 +pyasn1==0.6.0 +pyasn1_modules==0.4.0 +pybind11==2.10.4 +pycparser==2.22 +pydantic==1.10.13 +pynvml==8.0.4 +pytablewriter==1.2.0 +pytest==8.2.0 +python-dateutil==2.9.0.post0 +pytorch-lightning==2.2.4 +pytz==2024.1 +regex==2023.5.5 +requests-oauthlib==2.0.0 +requests==2.31.0 +rouge_score==0.1.2 +rsa==4.9 +sacrebleu==2.4.2 +safetensors==0.4.3 +scikit-learn==1.5.0 +scipy==1.13.1 +sentencepiece==0.2.0 +sentry-sdk==2.3.1 +setproctitle==1.3.3 +setuptools==59.6.0 +setuptools==69.5.1 +six==1.16.0 +smmap==5.0.1 +sqlitedict==2.1.0 +symengine==0.11.0 +sympy==1.12 +tabledata==1.3.3 +tabulate==0.9.0 +tcolorpy==0.1.6 +tdqm==0.0.1 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.1 +tensorboard==2.11.2 +threadpoolctl==3.5.0 +tokenizers==0.15.2 +tomli==2.0.1 +torch==2.2.0a0+git8964477 +torch_tb_profiler==0.4.0 +torchaudio==2.2.0+08901ad +torchdata==0.7.1+5e6f7b7 +torchmetrics==1.4.0 +torchtext==0.17.0+400da5c +torchvision==0.17.0+b2383d4 +tqdm-multiprocess==0.0.11 +tqdm==4.66.4 +transformers==4.36.2 +typepy==1.3.2 +typing_extensions==4.11.0 +tzdata==2024.1 +urllib3==1.26.18 +virtualenv==20.26.1 +wandb==0.17.0 +wheel==0.37.1 +wheel==0.43.0 +word2number==1.1 +xxhash==3.4.1 +yamllint==1.35.1 +yarl==1.9.4 +zstandard==0.22.0 \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-metadata.json b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..6630fe5679652ea2e74e8361dc8190e6cefe3fe0 --- /dev/null +++ 
b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-metadata.json @@ -0,0 +1,850 @@ +{ + "os": "Linux-5.15.0-92-generic-x86_64-with-glibc2.35", + "python": "3.10.12", + "heartbeatAt": "2024-05-30T08:15:45.935097", + "startedAt": "2024-05-30T08:15:45.439346", + "docker": null, + "cuda": null, + "args": [ + "--model", + "hf", + "--model_args", + "pretrained=/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000,tokenizer=/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000", + "--tasks", + "winogrande,sst2,mrpc,arc_easy,copa,piqa,boolq", + "--batch_size", + "auto", + "--wandb_args", + "project=english-eval,group=exp2,name=global_step10000" + ], + "state": "running", + "program": "-m lm_eval.__main__", + "codePathLocal": null, + "git": { + "remote": "https://github.com/EleutherAI/lm-evaluation-harness", + "commit": null + }, + "email": null, + "root": "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness", + "host": "peacock-evaluation-debug-5-worker-0", + "username": "root", + "executable": "/usr/bin/python3", + "cpu_count": 80, + "cpu_count_logical": 160, + "cpu_freq": { + "current": 2327.18258125, + "min": 800.0, + "max": 3400.0 + }, + "cpu_freq_per_core": [ + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + 
"max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3398.526, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 
+ }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + 
"current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 3400.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 
2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + 
"min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + }, + { + "current": 2300.0, + "min": 800.0, + "max": 3400.0 + } + ], + "disk": { + "/": { + "total": 877.6341285705566, + "used": 211.9215545654297 + } + }, + "memory": { + "total": 1007.4379844665527 + } +} diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-summary.json b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-summary.json new file mode 100644 index 0000000000000000000000000000000000000000..76f5f3b20e09ce747e6d6bb389f737c4bb0be6e7 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-summary.json @@ -0,0 +1 @@ +{"_wandb": {"runtime": 40}} \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug-internal.log b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug-internal.log new 
file mode 100644 index 0000000000000000000000000000000000000000..a739f7b2f94080793664650c9399086fc9241add --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug-internal.log @@ -0,0 +1,196 @@ +2024-05-30 08:15:45,461 INFO StreamThr :901 [internal.py:wandb_internal():85] W&B internal server running at pid: 901, started at: 2024-05-30 08:15:45.458977 +2024-05-30 08:15:45,465 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status +2024-05-30 08:15:45,465 INFO WriterThread:901 [datastore.py:open_for_write():87] open: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/run-rucruhje.wandb +2024-05-30 08:15:45,468 DEBUG SenderThread:901 [sender.py:send():378] send: header +2024-05-30 08:15:45,471 DEBUG SenderThread:901 [sender.py:send():378] send: run +2024-05-30 08:15:45,736 INFO SenderThread:901 [dir_watcher.py:__init__():211] watching files in: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files +2024-05-30 08:15:45,736 INFO SenderThread:901 [sender.py:_start_run_threads():1123] run started: rucruhje with start time 1717056945.459077 +2024-05-30 08:15:45,744 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: check_version +2024-05-30 08:15:45,744 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: check_version +2024-05-30 08:15:45,861 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: run_start +2024-05-30 08:15:45,863 DEBUG HandlerThread:901 [system_info.py:__init__():26] System info init +2024-05-30 08:15:45,863 DEBUG HandlerThread:901 [system_info.py:__init__():41] System info init done +2024-05-30 08:15:45,863 INFO HandlerThread:901 [system_monitor.py:start():194] Starting system monitor +2024-05-30 08:15:45,863 INFO SystemMonitor:901 [system_monitor.py:_start():158] Starting system asset monitoring threads +2024-05-30 08:15:45,863 INFO HandlerThread:901 
[system_monitor.py:probe():214] Collecting system info +2024-05-30 08:15:45,870 INFO SystemMonitor:901 [interfaces.py:start():188] Started cpu monitoring +2024-05-30 08:15:45,870 INFO SystemMonitor:901 [interfaces.py:start():188] Started disk monitoring +2024-05-30 08:15:45,876 INFO SystemMonitor:901 [interfaces.py:start():188] Started memory monitoring +2024-05-30 08:15:45,877 INFO SystemMonitor:901 [interfaces.py:start():188] Started network monitoring +2024-05-30 08:15:45,935 DEBUG HandlerThread:901 [system_info.py:probe():150] Probing system +2024-05-30 08:15:45,938 DEBUG HandlerThread:901 [system_info.py:_probe_git():135] Probing git +2024-05-30 08:15:45,947 ERROR HandlerThread:901 [gitlib.py:root():92] git root error: Cmd('git') failed due to: exit code(128) + cmdline: git rev-parse --show-toplevel + stderr: 'fatal: detected dubious ownership in repository at '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +To add an exception for this directory, call: + + git config --global --add safe.directory /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness' +2024-05-30 08:15:45,947 DEBUG HandlerThread:901 [system_info.py:_probe_git():143] Probing git done +2024-05-30 08:15:45,947 DEBUG HandlerThread:901 [system_info.py:probe():198] Probing system done +2024-05-30 08:15:45,947 DEBUG HandlerThread:901 [system_monitor.py:probe():223] {'os': 'Linux-5.15.0-92-generic-x86_64-with-glibc2.35', 'python': '3.10.12', 'heartbeatAt': '2024-05-30T08:15:45.935097', 'startedAt': '2024-05-30T08:15:45.439346', 'docker': None, 'cuda': None, 'args': ('--model', 'hf', '--model_args', 'pretrained=/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000,tokenizer=/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step10000', '--tasks', 'winogrande,sst2,mrpc,arc_easy,copa,piqa,boolq', '--batch_size', 'auto', '--wandb_args', 'project=english-eval,group=exp2,name=global_step10000'), 'state': 'running', 'program': '-m 
lm_eval.__main__', 'codePathLocal': None, 'git': {'remote': 'https://github.com/EleutherAI/lm-evaluation-harness', 'commit': None}, 'email': None, 'root': '/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness', 'host': 'peacock-evaluation-debug-5-worker-0', 'username': 'root', 'executable': '/usr/bin/python3', 'cpu_count': 80, 'cpu_count_logical': 160, 'cpu_freq': {'current': 2327.18258125, 'min': 800.0, 'max': 3400.0}, 'cpu_freq_per_core': [{'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, 
{'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3398.526, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 
3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 3400.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 
3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 
3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}, {'current': 2300.0, 'min': 800.0, 'max': 3400.0}], 'disk': {'/': {'total': 877.6341285705566, 'used': 211.9215545654297}}, 'memory': {'total': 1007.4379844665527}} +2024-05-30 08:15:45,948 INFO HandlerThread:901 [system_monitor.py:probe():224] Finished collecting system info +2024-05-30 08:15:45,948 INFO HandlerThread:901 [system_monitor.py:probe():227] Publishing system info +2024-05-30 08:15:45,950 INFO HandlerThread:901 [system_monitor.py:probe():229] Finished publishing system info +2024-05-30 08:15:45,957 DEBUG SenderThread:901 [sender.py:send():378] send: files +2024-05-30 08:15:45,957 INFO SenderThread:901 [sender.py:_save_file():1389] saving file wandb-metadata.json with policy now +2024-05-30 08:15:46,139 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: python_packages +2024-05-30 08:15:46,139 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: python_packages +2024-05-30 08:15:46,140 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: stop_status +2024-05-30 08:15:46,143 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: stop_status +2024-05-30 08:15:46,300 DEBUG SenderThread:901 [sender.py:send():378] send: telemetry +2024-05-30 08:15:46,637 INFO wandb-upload_0:901 [upload_job.py:push():130] Uploaded file /tmp/tmpqvpycj47wandb/jokiccc8-wandb-metadata.json +2024-05-30 08:15:46,737 INFO Thread-12 :901 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-metadata.json 
+2024-05-30 08:15:46,738 INFO Thread-12 :901 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log +2024-05-30 08:15:46,738 INFO Thread-12 :901 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/requirements.txt +2024-05-30 08:15:48,741 INFO Thread-12 :901 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log +2024-05-30 08:15:51,315 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:15:56,647 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:15:56,747 INFO Thread-12 :901 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log +2024-05-30 08:15:58,750 INFO Thread-12 :901 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log +2024-05-30 08:16:01,141 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: stop_status +2024-05-30 08:16:01,141 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: stop_status +2024-05-30 08:16:02,283 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:16:07,284 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:16:12,284 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:16:16,141 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: stop_status +2024-05-30 08:16:16,142 DEBUG 
SenderThread:901 [sender.py:send_request():405] send_request: stop_status +2024-05-30 08:16:18,236 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:16:18,789 INFO Thread-12 :901 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/config.yaml +2024-05-30 08:16:23,320 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:16:24,912 INFO Thread-12 :901 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log +2024-05-30 08:16:26,775 DEBUG SenderThread:901 [sender.py:send():378] send: exit +2024-05-30 08:16:26,775 INFO SenderThread:901 [sender.py:send_exit():585] handling exit code: 1 +2024-05-30 08:16:26,775 INFO SenderThread:901 [sender.py:send_exit():587] handling runtime: 40 +2024-05-30 08:16:26,778 INFO SenderThread:901 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-30 08:16:26,778 INFO SenderThread:901 [sender.py:send_exit():593] send defer +2024-05-30 08:16:26,778 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,778 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 0 +2024-05-30 08:16:26,778 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:26,778 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 0 +2024-05-30 08:16:26,778 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 1 +2024-05-30 08:16:26,778 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,778 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 1 +2024-05-30 08:16:26,779 DEBUG SenderThread:901 
[sender.py:send_request():405] send_request: defer +2024-05-30 08:16:26,779 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 1 +2024-05-30 08:16:26,779 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 2 +2024-05-30 08:16:26,779 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,779 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 2 +2024-05-30 08:16:26,779 INFO HandlerThread:901 [system_monitor.py:finish():203] Stopping system monitor +2024-05-30 08:16:26,779 DEBUG SystemMonitor:901 [system_monitor.py:_start():172] Starting system metrics aggregation loop +2024-05-30 08:16:26,779 DEBUG SystemMonitor:901 [system_monitor.py:_start():179] Finished system metrics aggregation loop +2024-05-30 08:16:26,779 DEBUG SystemMonitor:901 [system_monitor.py:_start():183] Publishing last batch of metrics +2024-05-30 08:16:26,782 INFO HandlerThread:901 [interfaces.py:finish():200] Joined cpu monitor +2024-05-30 08:16:26,782 INFO HandlerThread:901 [interfaces.py:finish():200] Joined disk monitor +2024-05-30 08:16:26,782 INFO HandlerThread:901 [interfaces.py:finish():200] Joined memory monitor +2024-05-30 08:16:26,782 INFO HandlerThread:901 [interfaces.py:finish():200] Joined network monitor +2024-05-30 08:16:26,783 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:26,783 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 2 +2024-05-30 08:16:26,783 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 3 +2024-05-30 08:16:26,783 DEBUG SenderThread:901 [sender.py:send():378] send: stats +2024-05-30 08:16:26,784 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,784 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 3 +2024-05-30 08:16:26,784 DEBUG SenderThread:901 [sender.py:send_request():405] 
send_request: defer +2024-05-30 08:16:26,784 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 3 +2024-05-30 08:16:26,784 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 4 +2024-05-30 08:16:26,784 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,784 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 4 +2024-05-30 08:16:26,784 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:26,784 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 4 +2024-05-30 08:16:26,784 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 5 +2024-05-30 08:16:26,784 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,784 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 5 +2024-05-30 08:16:26,785 DEBUG SenderThread:901 [sender.py:send():378] send: summary +2024-05-30 08:16:26,786 INFO SenderThread:901 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end +2024-05-30 08:16:26,786 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:26,786 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 5 +2024-05-30 08:16:26,786 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 6 +2024-05-30 08:16:26,786 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,786 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 6 +2024-05-30 08:16:26,786 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:26,786 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 6 +2024-05-30 08:16:26,786 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 7 +2024-05-30 08:16:26,786 DEBUG 
HandlerThread:901 [handler.py:handle_request():158] handle_request: status_report +2024-05-30 08:16:26,786 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:26,786 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 7 +2024-05-30 08:16:26,786 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:26,787 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 7 +2024-05-30 08:16:27,178 INFO Thread-12 :901 [dir_watcher.py:_on_file_created():271] file/dir created: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-summary.json +2024-05-30 08:16:27,775 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-30 08:16:28,545 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 8 +2024-05-30 08:16:28,545 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: poll_exit +2024-05-30 08:16:28,545 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:28,546 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 8 +2024-05-30 08:16:28,546 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:28,546 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 8 +2024-05-30 08:16:28,546 INFO SenderThread:901 [job_builder.py:build():432] Attempting to build job artifact +2024-05-30 08:16:28,547 INFO SenderThread:901 [job_builder.py:_get_source_type():576] no source found +2024-05-30 08:16:28,547 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 9 +2024-05-30 08:16:28,547 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:28,547 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 9 +2024-05-30 08:16:28,547 DEBUG 
SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:28,547 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 9 +2024-05-30 08:16:28,547 INFO SenderThread:901 [dir_watcher.py:finish():358] shutting down directory watcher +2024-05-30 08:16:28,775 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-30 08:16:29,179 INFO SenderThread:901 [dir_watcher.py:_on_file_modified():288] file/dir modified: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log +2024-05-30 08:16:29,180 INFO SenderThread:901 [dir_watcher.py:finish():388] scan: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files +2024-05-30 08:16:29,180 INFO SenderThread:901 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/config.yaml config.yaml +2024-05-30 08:16:29,180 INFO SenderThread:901 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/requirements.txt requirements.txt +2024-05-30 08:16:29,183 INFO SenderThread:901 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-summary.json wandb-summary.json +2024-05-30 08:16:29,183 INFO SenderThread:901 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-metadata.json wandb-metadata.json +2024-05-30 08:16:29,183 INFO SenderThread:901 [dir_watcher.py:finish():402] scan save: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log output.log +2024-05-30 08:16:29,183 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 10 +2024-05-30 08:16:29,183 DEBUG 
SenderThread:901 [sender.py:send_request():405] send_request: poll_exit +2024-05-30 08:16:29,183 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:29,183 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 10 +2024-05-30 08:16:29,184 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:29,184 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 10 +2024-05-30 08:16:29,184 INFO SenderThread:901 [file_pusher.py:finish():169] shutting down file pusher +2024-05-30 08:16:29,591 INFO wandb-upload_0:901 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/config.yaml +2024-05-30 08:16:29,776 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-30 08:16:29,776 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: poll_exit +2024-05-30 08:16:29,811 INFO wandb-upload_1:901 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/requirements.txt +2024-05-30 08:16:29,820 INFO wandb-upload_2:901 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/wandb-summary.json +2024-05-30 08:16:29,834 INFO wandb-upload_3:901 [upload_job.py:push():130] Uploaded file /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/files/output.log +2024-05-30 08:16:30,034 INFO Thread-11 (_thread_body):901 [sender.py:transition_state():613] send defer: 11 +2024-05-30 08:16:30,034 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:30,034 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 11 +2024-05-30 08:16:30,034 DEBUG SenderThread:901 
[sender.py:send_request():405] send_request: defer +2024-05-30 08:16:30,034 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 11 +2024-05-30 08:16:30,034 INFO SenderThread:901 [file_pusher.py:join():175] waiting for file pusher +2024-05-30 08:16:30,035 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 12 +2024-05-30 08:16:30,035 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:30,035 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 12 +2024-05-30 08:16:30,035 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:30,035 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 12 +2024-05-30 08:16:30,035 INFO SenderThread:901 [file_stream.py:finish():601] file stream finish called +2024-05-30 08:16:30,113 INFO SenderThread:901 [file_stream.py:finish():605] file stream finish is done +2024-05-30 08:16:30,113 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 13 +2024-05-30 08:16:30,113 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:30,113 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 13 +2024-05-30 08:16:30,114 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:30,114 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 13 +2024-05-30 08:16:30,114 INFO SenderThread:901 [sender.py:transition_state():613] send defer: 14 +2024-05-30 08:16:30,114 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: defer +2024-05-30 08:16:30,114 INFO HandlerThread:901 [handler.py:handle_request_defer():184] handle defer: 14 +2024-05-30 08:16:30,114 DEBUG SenderThread:901 [sender.py:send():378] send: final +2024-05-30 08:16:30,114 DEBUG SenderThread:901 [sender.py:send():378] send: footer +2024-05-30 08:16:30,114 DEBUG 
SenderThread:901 [sender.py:send_request():405] send_request: defer +2024-05-30 08:16:30,114 INFO SenderThread:901 [sender.py:send_request_defer():609] handle sender defer: 14 +2024-05-30 08:16:30,114 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-30 08:16:30,115 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: poll_exit +2024-05-30 08:16:30,115 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: poll_exit +2024-05-30 08:16:30,115 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: server_info +2024-05-30 08:16:30,115 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: get_summary +2024-05-30 08:16:30,115 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: sampled_history +2024-05-30 08:16:30,115 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: internal_messages +2024-05-30 08:16:30,115 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: poll_exit +2024-05-30 08:16:30,116 DEBUG SenderThread:901 [sender.py:send_request():405] send_request: server_info +2024-05-30 08:16:30,177 INFO MainThread:901 [wandb_run.py:_footer_history_summary_info():3994] rendering history +2024-05-30 08:16:30,177 INFO MainThread:901 [wandb_run.py:_footer_history_summary_info():4026] rendering summary +2024-05-30 08:16:30,177 INFO MainThread:901 [wandb_run.py:_footer_sync_info():3953] logging synced files +2024-05-30 08:16:30,177 DEBUG HandlerThread:901 [handler.py:handle_request():158] handle_request: shutdown +2024-05-30 08:16:30,177 INFO HandlerThread:901 [handler.py:finish():882] shutting down handler +2024-05-30 08:16:31,116 INFO WriterThread:901 [datastore.py:close():296] close: /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/run-rucruhje.wandb +2024-05-30 08:16:31,177 INFO SenderThread:901 [sender.py:finish():1545] shutting down sender +2024-05-30 
08:16:31,177 INFO SenderThread:901 [file_pusher.py:finish():169] shutting down file pusher +2024-05-30 08:16:31,177 INFO SenderThread:901 [file_pusher.py:join():175] waiting for file pusher diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug.log b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug.log new file mode 100644 index 0000000000000000000000000000000000000000..c1202c9ae0166bfaa129b492ea077a9936bdef25 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug.log @@ -0,0 +1,29 @@ +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0 +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Configure stats pid to 745 +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Loading settings from /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/settings +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Loading settings from environment variables: {} +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False} +2024-05-30 08:15:45,453 WARNING MainThread:745 [wandb_setup.py:_flush():76] Could not find program at -m lm_eval.__main__ +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program_relpath': None, 'program': '-m lm_eval.__main__'} +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_setup.py:_flush():76] Applying login settings: {} +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_init.py:_log_setup():520] Logging user logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug.log +2024-05-30 08:15:45,453 INFO MainThread:745 
[wandb_init.py:_log_setup():521] Logging internal logs to /mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/logs/debug-internal.log +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_init.py:init():560] calling init triggers +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_init.py:init():567] wandb.init called with sweep_config: {} +config: {} +2024-05-30 08:15:45,453 INFO MainThread:745 [wandb_init.py:init():610] starting backend +2024-05-30 08:15:45,454 INFO MainThread:745 [wandb_init.py:init():614] setting up manager +2024-05-30 08:15:45,457 INFO MainThread:745 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn +2024-05-30 08:15:45,458 INFO MainThread:745 [wandb_init.py:init():622] backend started and connected +2024-05-30 08:15:45,462 INFO MainThread:745 [wandb_init.py:init():711] updated telemetry +2024-05-30 08:15:45,470 INFO MainThread:745 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout +2024-05-30 08:15:45,743 INFO MainThread:745 [wandb_run.py:_on_init():2396] communicating current version +2024-05-30 08:15:45,854 INFO MainThread:745 [wandb_run.py:_on_init():2405] got version response +2024-05-30 08:15:45,855 INFO MainThread:745 [wandb_init.py:init():795] starting run threads in backend +2024-05-30 08:15:46,141 INFO MainThread:745 [wandb_run.py:_console_start():2374] atexit reg +2024-05-30 08:15:46,141 INFO MainThread:745 [wandb_run.py:_redirect():2229] redirect: wrap_raw +2024-05-30 08:15:46,141 INFO MainThread:745 [wandb_run.py:_redirect():2294] Wrapping output streams. +2024-05-30 08:15:46,141 INFO MainThread:745 [wandb_run.py:_redirect():2319] Redirects installed. 
+2024-05-30 08:15:46,144 INFO MainThread:745 [wandb_init.py:init():838] run started, returning control to user process +2024-05-30 08:16:31,178 WARNING MsgRouterThr:745 [router.py:message_loop():77] message_loop has been closed diff --git a/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/run-rucruhje.wandb b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/run-rucruhje.wandb new file mode 100644 index 0000000000000000000000000000000000000000..66dbc9f9c9b78efbf1c05408e3a66e05bc17f7f6 Binary files /dev/null and b/lm-evaluation-harness/wandb/run-20240530_081545-rucruhje/run-rucruhje.wandb differ diff --git a/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/config.yaml b/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c2c936191ad18895ad1d07672b59ba1947889e37 --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/config.yaml @@ -0,0 +1,284 @@ +wandb_version: 1 + +_wandb: + desc: null + value: + python_version: 3.10.12 + cli_version: 0.17.0 + framework: huggingface + huggingface_version: 4.36.2 + is_jupyter_run: false + is_kaggle_kernel: false + start_time: 1717073926 + t: + 1: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 2: + - 1 + - 5 + - 11 + - 49 + - 51 + - 53 + - 55 + - 71 + - 98 + - 100 + 3: + - 2 + - 13 + - 23 + - 62 + 4: 3.10.12 + 5: 0.17.0 + 6: 4.36.2 + 8: + - 5 + 13: linux-x86_64 +task_configs: + desc: null + value: + arc_easy: + task: arc_easy + group: + - ai2_arc + dataset_path: allenai/ai2_arc + dataset_name: ARC-Easy + training_split: train + validation_split: validation + test_split: test + doc_to_text: 'Question: {{question}} + + Answer:' + doc_to_target: '{{choices.label.index(answerKey)}}' + doc_to_choice: '{{choices.text}}' + description: '' + target_delimiter: ' ' + fewshot_delimiter: ' + + + ' + num_fewshot: 0 + metric_list: + - metric: acc + aggregation: mean + 
higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true + output_type: multiple_choice + repeats: 1 + should_decontaminate: true + doc_to_decontamination_query: 'Question: {{question}} + + Answer:' + metadata: + version: 1.0 + boolq: + task: boolq + group: + - super-glue-lm-eval-v1 + dataset_path: super_glue + dataset_name: boolq + training_split: train + validation_split: validation + doc_to_text: '{{passage}} + + Question: {{question}}? + + Answer:' + doc_to_target: label + doc_to_choice: + - 'no' + - 'yes' + description: '' + target_delimiter: ' ' + fewshot_delimiter: ' + + + ' + num_fewshot: 0 + metric_list: + - metric: acc + output_type: multiple_choice + repeats: 1 + should_decontaminate: true + doc_to_decontamination_query: passage + metadata: + version: 2.0 + copa: + task: copa + group: + - super-glue-lm-eval-v1 + dataset_path: super_glue + dataset_name: copa + training_split: train + validation_split: validation + doc_to_text: "def doc_to_text(doc):\n # Drop the period\n connector =\ + \ {\n \"cause\": \"because\",\n \"effect\": \"therefore\",\n\ + \ }[doc[\"question\"]]\n return doc[\"premise\"].strip()[:-1] + f\"\ + \ {connector}\"\n" + doc_to_target: "def doc_to_target(doc):\n correct_choice = doc[\"choice1\"\ + ] if doc[\"label\"] == 0 else doc[\"choice2\"]\n # Connect the sentences\n\ + \ return \" \" + convert_choice(correct_choice)\n" + doc_to_choice: "def doc_to_choice(doc):\n return [\" \" + convert_choice(doc[\"\ + choice1\"]), \" \" + convert_choice(doc[\"choice2\"])]\n" + description: '' + target_delimiter: ' ' + fewshot_delimiter: ' + + + ' + num_fewshot: 0 + metric_list: + - metric: acc + output_type: multiple_choice + repeats: 1 + should_decontaminate: false + metadata: + version: 1.0 + mrpc: + task: mrpc + group: glue + dataset_path: glue + dataset_name: mrpc + training_split: train + validation_split: validation + doc_to_text: 'Sentence 1: {{sentence1}} + + Sentence 2: {{sentence2}} + + Question: Do both 
sentences mean the same thing? + + Answer:' + doc_to_target: label + doc_to_choice: + - 'no' + - 'yes' + description: '' + target_delimiter: ' ' + fewshot_delimiter: ' + + + ' + num_fewshot: 0 + metric_list: + - metric: acc + - metric: f1 + output_type: multiple_choice + repeats: 1 + should_decontaminate: false + metadata: + version: 1.0 + piqa: + task: piqa + dataset_path: piqa + training_split: train + validation_split: validation + doc_to_text: 'Question: {{goal}} + + Answer:' + doc_to_target: label + doc_to_choice: '{{[sol1, sol2]}}' + description: '' + target_delimiter: ' ' + fewshot_delimiter: ' + + + ' + num_fewshot: 0 + metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true + output_type: multiple_choice + repeats: 1 + should_decontaminate: true + doc_to_decontamination_query: goal + metadata: + version: 1.0 + sst2: + task: sst2 + group: glue + dataset_path: glue + dataset_name: sst2 + training_split: train + validation_split: validation + doc_to_text: '{{sentence}} + + Question: Is this sentence positive or negative? 
+ + Answer:' + doc_to_target: label + doc_to_choice: + - negative + - positive + description: '' + target_delimiter: ' ' + fewshot_delimiter: ' + + + ' + num_fewshot: 0 + metric_list: + - metric: acc + output_type: multiple_choice + repeats: 1 + should_decontaminate: false + metadata: + version: 1.0 + winogrande: + task: winogrande + dataset_path: winogrande + dataset_name: winogrande_xl + training_split: train + validation_split: validation + doc_to_text: "def doc_to_text(doc):\n answer_to_num = {\"1\": 0, \"2\": 1}\n\ + \ return answer_to_num[doc[\"answer\"]]\n" + doc_to_target: "def doc_to_target(doc):\n idx = doc[\"sentence\"].index(\"\ + _\") + 1\n return doc[\"sentence\"][idx:].strip()\n" + doc_to_choice: "def doc_to_choice(doc):\n idx = doc[\"sentence\"].index(\"\ + _\")\n options = [doc[\"option1\"], doc[\"option2\"]]\n return [doc[\"\ + sentence\"][:idx] + opt for opt in options]\n" + description: '' + target_delimiter: ' ' + fewshot_delimiter: ' + + + ' + num_fewshot: 0 + metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + output_type: multiple_choice + repeats: 1 + should_decontaminate: true + doc_to_decontamination_query: sentence + metadata: + version: 1.0 +cli_configs: + desc: null + value: + model: hf + model_args: pretrained=/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step30000,tokenizer=/mnt/weka/peacock/tokenization/trained-tokenizer/enhiben_50k_hf/ConvertedTokenizer + batch_size: auto + batch_sizes: + - 64 + device: null + use_cache: null + limit: null + bootstrap_iters: 100000 + gen_kwargs: null diff --git a/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/media/table/evaluation/eval_results_1_c8f467e287fa2e4c71ec.table.json b/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/media/table/evaluation/eval_results_1_c8f467e287fa2e4c71ec.table.json new file mode 100644 index 0000000000000000000000000000000000000000..2265bce8796400970ae6017d9f5a31be79554a7c --- 
/dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/media/table/evaluation/eval_results_1_c8f467e287fa2e4c71ec.table.json @@ -0,0 +1 @@ +{"columns": ["Tasks", "Version", "Filter", "num_fewshot", "Metric", "Value", "Stderr"], "data": [["winogrande", 1.0, "none", 0, "acc", "0.4877663772691397", "0.0140"], ["sst2", 1.0, "none", 0, "acc", "0.5137614678899083", "0.0169"], ["piqa", 1.0, "none", 0, "acc", "0.5228509249183896", "0.0117"], ["piqa", 1.0, "none", 0, "acc_norm", "0.4972796517954298", "0.0117"], ["mrpc", 1.0, "none", 0, "acc", "0.3161764705882353", "0.0230"], ["mrpc", 1.0, "none", 0, "f1", "0.0", "0.0000"], ["copa", 1.0, "none", 0, "acc", "0.59", "0.0494"], ["boolq", 2.0, "none", 0, "acc", "0.37706422018348623", "0.0085"], ["arc_easy", 1.0, "none", 0, "acc", "0.25547138047138046", "0.0089"], ["arc_easy", 1.0, "none", 0, "acc_norm", "0.2622053872053872", "0.0090"]]} \ No newline at end of file diff --git a/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/output.log b/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/output.log new file mode 100644 index 0000000000000000000000000000000000000000..6114d5f492bcc711cff911ac189a0c4de27e13fa --- /dev/null +++ b/lm-evaluation-harness/wandb/run-20240530_125846-7jq42oxl/files/output.log @@ -0,0 +1,565 @@ + +2024-05-30:12:58:46,694 INFO [__main__.py:251] Verbosity set to INFO +2024-05-30:12:58:55,897 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 'boolq', 'copa', 'mrpc', 'piqa', 'sst2', 'winogrande'] +2024-05-30:12:58:55,898 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 +2024-05-30:12:58:55,899 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/experiments/llama/eval/checkpoint-english/llamav2-3b/hf/global_step30000', 'tokenizer': '/mnt/weka/peacock/tokenization/trained-tokenizer/enhiben_50k_hf/ConvertedTokenizer'} +2024-05-30:12:58:58,192 INFO 
[huggingface.py:164] Using device 'cuda'
+/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/torch/cuda/memory.py:36: UserWarning: No need to call empty_cache on HPU. It manages the memory internally in an effcient way.
+ warnings.warn(
+Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+Downloading readme: 100%|██████████| 9.00k/9.00k [00:00<00:00, 16.5MB/s]
+Downloading data: 100%|██████████| 331k/331k [00:00<00:00, 2.03MB/s]
+Downloading data: 100%|██████████| 346k/346k [00:00<00:00, 4.29MB/s]
+Downloading data: 100%|██████████| 86.1k/86.1k [00:00<00:00, 1.14MB/s]
+Generating train split: 100%|██████████| 2251/2251 [00:00<00:00, 47073.44 examples/s]
+Generating test split: 100%|██████████| 2376/2376 [00:00<00:00, 328585.39 examples/s]
+Generating validation split: 100%|██████████| 570/570 [00:00<00:00, 153884.74 examples/s]
+2024-05-30:12:59:32,341 WARNING [task.py:763] [Task: boolq] metric acc is defined, but aggregation is not. using default aggregation=mean
+2024-05-30:12:59:32,341 WARNING [task.py:775] [Task: boolq] metric acc is defined, but higher_is_better is not. using default higher_is_better=True
+/usr/local/lib/python3.10/dist-packages/datasets/load.py:1486: FutureWarning: The repository for super_glue contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/super_glue
+You can avoid this message in future by passing the argument `trust_remote_code=True`.
+Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
+ warnings.warn(
+Downloading builder script: 100%|██████████| 30.7k/30.7k [00:00<00:00, 40.1MB/s]
+Downloading readme: 100%|██████████| 18.2k/18.2k [00:00<00:00, 30.9MB/s]
+Downloading data: 100%|██████████| 4.12M/4.12M [00:00<00:00, 15.9MB/s]
+Generating train split: 100%|██████████| 9427/9427 [00:00<00:00, 22282.17 examples/s]
+Generating validation split: 100%|██████████| 3270/3270 [00:00<00:00, 22704.15 examples/s]
+Generating test split: 100%|██████████| 3245/3245 [00:00<00:00, 23226.30 examples/s]
+2024-05-30:12:59:36,256 WARNING [task.py:763] [Task: copa] metric acc is defined, but aggregation is not. using default aggregation=mean
+2024-05-30:12:59:36,257 WARNING [task.py:775] [Task: copa] metric acc is defined, but higher_is_better is not. using default higher_is_better=True
+Downloading data: 100%|██████████| 44.0k/44.0k [00:00<00:00, 46.9MB/s]
+Generating train split: 100%|██████████| 400/400 [00:00<00:00, 16363.07 examples/s]
+Generating validation split: 100%|██████████| 100/100 [00:00<00:00, 12725.05 examples/s]
+Generating test split: 100%|██████████| 500/500 [00:00<00:00, 17155.60 examples/s]
+2024-05-30:12:59:38,334 WARNING [task.py:763] [Task: mrpc] metric acc is defined, but aggregation is not. using default aggregation=mean
+2024-05-30:12:59:38,334 WARNING [task.py:775] [Task: mrpc] metric acc is defined, but higher_is_better is not. using default higher_is_better=True
+2024-05-30:12:59:38,334 WARNING [task.py:763] [Task: mrpc] metric f1 is defined, but aggregation is not. using default aggregation=f1
+2024-05-30:12:59:38,335 WARNING [task.py:775] [Task: mrpc] metric f1 is defined, but higher_is_better is not. using default higher_is_better=True
+Downloading readme: 100%|██████████| 35.3k/35.3k [00:00<00:00, 44.0MB/s]
+Downloading data: 100%|██████████| 649k/649k [00:00<00:00, 4.43MB/s]
+Downloading data: 100%|██████████| 75.7k/75.7k [00:00<00:00, 529kB/s]
+Downloading data: 100%|██████████| 308k/308k [00:00<00:00, 2.17MB/s]
+Generating train split: 100%|██████████| 3668/3668 [00:00<00:00, 397938.67 examples/s]
+Generating validation split: 100%|██████████| 408/408 [00:00<00:00, 175048.69 examples/s]
+Generating test split: 100%|██████████| 1725/1725 [00:00<00:00, 383483.03 examples/s]
+/usr/local/lib/python3.10/dist-packages/datasets/load.py:1486: FutureWarning: The repository for piqa contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/piqa
+You can avoid this message in future by passing the argument `trust_remote_code=True`.
+Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
+ warnings.warn(
+Downloading builder script: 100%|██████████| 5.36k/5.36k [00:00<00:00, 11.7MB/s]
+Downloading readme: 100%|██████████| 8.41k/8.41k [00:00<00:00, 17.0MB/s]
+Downloading data: 100%|██████████| 1.82M/1.82M [00:00<00:00, 4.15MB/s]
+Downloading data: 100%|██████████| 815k/815k [00:00<00:00, 20.0MB/s]
+Generating train split: 100%|██████████| 16113/16113 [00:00<00:00, 23955.06 examples/s]
+Generating test split: 100%|██████████| 3084/3084 [00:00<00:00, 24108.92 examples/s]
+Generating validation split: 100%|██████████| 1838/1838 [00:00<00:00, 24011.94 examples/s]
+2024-05-30:12:59:48,877 WARNING [task.py:763] [Task: sst2] metric acc is defined, but aggregation is not. using default aggregation=mean
+2024-05-30:12:59:48,878 WARNING [task.py:775] [Task: sst2] metric acc is defined, but higher_is_better is not. using default higher_is_better=True
+Downloading data: 100%|██████████| 3.11M/3.11M [00:00<00:00, 20.3MB/s]
+Downloading data: 100%|██████████| 72.8k/72.8k [00:00<00:00, 484kB/s]
+Downloading data: 100%|██████████| 148k/148k [00:00<00:00, 978kB/s]
+Generating train split: 100%|██████████| 67349/67349 [00:00<00:00, 1076918.48 examples/s]
+Generating validation split: 100%|██████████| 872/872 [00:00<00:00, 301395.39 examples/s]
+Generating test split: 100%|██████████| 1821/1821 [00:00<00:00, 390103.05 examples/s]
+/usr/local/lib/python3.10/dist-packages/datasets/load.py:1486: FutureWarning: The repository for winogrande contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/winogrande
+You can avoid this message in future by passing the argument `trust_remote_code=True`.
+Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
+ warnings.warn(
+Downloading builder script: 100%|██████████| 5.65k/5.65k [00:00<00:00, 10.3MB/s]
+Downloading readme: 100%|██████████| 9.97k/9.97k [00:00<00:00, 20.4MB/s]
+Downloading data: 100%|██████████| 3.40M/3.40M [00:00<00:00, 7.03MB/s]
+Generating train split: 100%|██████████| 40398/40398 [00:01<00:00, 24685.72 examples/s]
+Generating test split: 100%|██████████| 1767/1767 [00:00<00:00, 23903.37 examples/s]
+Generating validation split: 100%|██████████| 1267/1267 [00:00<00:00, 23703.08 examples/s]
+2024-05-30:13:00:02,055 INFO [task.py:395] Building contexts for winogrande on rank 0...
+100%|██████████| 1267/1267 [00:00<00:00, 69665.88it/s]
+2024-05-30:13:00:02,141 INFO [task.py:395] Building contexts for sst2 on rank 0...
+100%|██████████| 872/872 [00:00<00:00, 2578.74it/s]
+2024-05-30:13:00:02,507 INFO [task.py:395] Building contexts for piqa on rank 0...
+100%|██████████| 1838/1838 [00:01<00:00, 1107.22it/s]
+2024-05-30:13:00:04,254 INFO [task.py:395] Building contexts for mrpc on rank 0...
+100%|██████████| 408/408 [00:00<00:00, 1885.34it/s]
+2024-05-30:13:00:04,488 INFO [task.py:395] Building contexts for copa on rank 0...
+100%|██████████| 100/100 [00:00<00:00, 61890.28it/s]
+2024-05-30:13:00:04,497 INFO [task.py:395] Building contexts for boolq on rank 0...
+100%|██████████| 3270/3270 [00:01<00:00, 2000.23it/s]
+2024-05-30:13:00:06,261 INFO [task.py:395] Building contexts for arc_easy on rank 0...
+
+100%|██████████| 2376/2376 [00:02<00:00, 1075.55it/s]
+2024-05-30:13:00:08,663 INFO [evaluator.py:379] Running loglikelihood requests
+Token indices sequence length is longer than the specified maximum sequence length for this model (1333 > 1024). Running this sequence through the model will result in indexing errors
+Running loglikelihood requests: 0%| | 0/25011 [00:00