Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- .log.txt.swp +0 -0
- 0 +0 -0
- 20 +0 -0
- 6 +0 -0
- ckpts/checkpoints_zero_stage_2/global_step120/debug.err +65 -0
- ckpts/checkpoints_zero_stage_2/global_step120/debug.out +0 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_03_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_04_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_05_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_06_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_07_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_06_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_07_optim_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/debug.err +65 -0
- ckpts/checkpoints_zero_stage_2/global_step80/debug.out +0 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_01-model_00-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_01-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_01-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_03-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_03-model_02-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_03-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_00-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_02-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_00-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_02-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_00-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_02-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_00-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_02-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_00-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_02-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_00-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_01-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_02-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_03-model_states.pt +3 -0
- ckpts/checkpoints_zero_stage_2/global_step80/layer_10-model_00-model_states.pt +3 -0
.log.txt.swp
ADDED
Binary file (4.1 kB)
0
ADDED
File without changes
20
ADDED
File without changes
6
ADDED
File without changes
ckpts/checkpoints_zero_stage_2/global_step120/debug.err
ADDED
@@ -0,0 +1,65 @@
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:481: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:338: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+The following values were not passed to `accelerate launch` and had defaults used instead:
+`--num_processes` was set to a value of `0`
+`--num_machines` was set to a value of `1`
+`--mixed_precision` was set to a value of `'no'`
+`--dynamo_backend` was set to a value of `'no'`
+To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:481: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:338: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:338: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/__init__.py:46: UserWarning: apex not installed, gpu_migration will not swap api for this package.
+warnings.warn(
+2024-05-22:14:51:48,518 INFO [__main__.py:251] Verbosity set to INFO
+2024-05-22:14:51:57,504 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 'hellaswag', 'mrpc', 'openbookqa', 'sst2', 'winogrande']
+2024-05-22:14:51:57,504 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2024-05-22:14:51:57,505 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120'}
+[W socket.cpp:464] [c10d] The server socket cannot be initialized on [::]:12345 (errno: 97 - Address family not supported by protocol).
+[W socket.cpp:697] [c10d] The client socket cannot be initialized to connect to [localhost]:12345 (errno: 97 - Address family not supported by protocol).
+[W socket.cpp:697] [c10d] The client socket cannot be initialized to connect to [localhost]:12345 (errno: 97 - Address family not supported by protocol).
+[W socket.cpp:697] [c10d] The client socket cannot be initialized to connect to [localhost]:12345 (errno: 97 - Address family not supported by protocol).
+2024-05-22:14:51:59,800 INFO [huggingface.py:164] Using device 'cuda'
+Traceback (most recent call last):
+File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
+return _run_code(code, main_globals, None,
+File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
+exec(code, run_globals)
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in <module>
+cli_evaluate()
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate
+results = evaluator.simple_evaluate(
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
+return fn(*args, **kwargs)
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate
+lm = lm_eval.api.registry.get_model(model).create_from_arg_string(
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string
+return cls(**args, **args2)
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__
+self._get_config(
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config
+self._config = transformers.AutoConfig.from_pretrained(
+File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 934, in from_pretrained
+config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
+File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 632, in get_config_dict
+config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
+File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 689, in _get_config_dict
+resolved_config_file = cached_file(
+File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 370, in cached_file
+raise EnvironmentError(
+OSError: /mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120 does not appear to have a file named config.json. Checkout 'https://huggingface.co//mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120/tree/main' for available files.
+Traceback (most recent call last):
+File "/usr/local/bin/accelerate", line 8, in <module>
+sys.exit(main())
+File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 46, in main
+args.func(args)
+File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1082, in launch_command
+simple_launcher(args)
+File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 688, in simple_launcher
+raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
+subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'lm_eval', '--model', 'hf', '--model_args', 'pretrained=/mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step120', '--tasks', 'hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc', '--batch_size', 'auto']' returned non-zero exit status 1.
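The traceback above bottoms out in `transformers.AutoConfig.from_pretrained`: lm-eval is pointed at `ckpts/hf_ckpt/global_step120`, that directory contains no `config.json`, config resolution raises `OSError`, and `accelerate launch` then reports the non-zero exit. A minimal sketch of just that failing config lookup follows; the checkpoint path is copied from the log, while the surrounding script is illustrative only and not part of this repository.

```python
# Hedged sketch: reproduce only the config lookup that fails in debug.err above.
# The checkpoint path is taken from the log; nothing else here is from the repo.
from transformers import AutoConfig

ckpt_dir = "/mnt/weka/peacock/idc/cronscript/ckpts/hf_ckpt/global_step120"

try:
    AutoConfig.from_pretrained(ckpt_dir)
except OSError as err:
    # Raised when ckpt_dir has no config.json, matching the OSError in the log.
    print(f"config.json not found: {err}")
```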
ckpts/checkpoints_zero_stage_2/global_step120/debug.out
ADDED
File without changes
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e03eec5cdd561b7923d895648bb55777e32047dd16c045cd4e2231b332b763c
+size 910989488
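The `.pt` checkpoint shards in this commit are stored through Git LFS, so each diff shows only the three-line pointer (spec version, `oid sha256:…`, byte size) rather than the tensor data. As a sanity check after fetching the real objects (e.g. with `git lfs pull`), the sketch below compares a downloaded shard against the oid and size recorded in the pointer above; the expected values are copied from this diff, and the local path assumes the repository layout shown here.

```python
# Hedged sketch: verify a fetched LFS object against the pointer shown above.
# expected_oid and expected_size are copied from this diff; the path assumes
# the repo has been cloned and `git lfs pull` has replaced pointers with data.
import hashlib
import os

path = "ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt"
expected_oid = "9e03eec5cdd561b7923d895648bb55777e32047dd16c045cd4e2231b332b763c"
expected_size = 910989488

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size differs from LFS pointer"
assert digest.hexdigest() == expected_oid, "sha256 differs from LFS pointer"
print("shard matches its LFS pointer")
```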
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e699c85dd36c60844f1bca7b18b4f6677ccba2910634a94f50892cc9bc7f2f16
+size 910989488
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_03_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6fbb23d8da6093b1d5b6d704908c284245278dec3367203ba4309632aa574e3e
+size 910989488
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_04_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51bdaf44e5b5891f9b56464bd6123cf535eb7d8d9c2165f5ff529a28f1985349
+size 911002480
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_05_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fcca01f8760f4c9b41a8a7958445c5d6fc5fc95e4eaac50d1c0348cadde05143
+size 911002480
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_06_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:508413cf9b9dab4df41f5af2f3e2224c711b798be01c1642a8397dddbe58f3d7
+size 911002480
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_07_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e68172f4ea31351b9822a494bff20f6ae833910787c1b99c4072d0c184921d0c
+size 911002480
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:afe9629390d83f64b0d2cd9f6d8bbbf1a92a1b44bff468d278fe118338be50a4
+size 910990192
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2829a5c461fe6a0a3a5c13c0fb106e6d7c3eafa0f097f36346ac45586b4c0357
+size 910990192
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_06_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3aa7730861c43f1b8fe5d6b20d270e072f39ff03f32680501d520905e87285be
+size 911001904
ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_1_mp_rank_07_optim_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7552b42ef6bd6c18b6f3ea9d051fef799e9536b0206712ebe5a3d3155d75538
+size 911001904
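The shard names above appear to follow DeepSpeed's ZeRO checkpoint convention, where `bf16_zero_pp_rank_<R>_mp_rank_<MM>_optim_states.pt` holds the bf16 ZeRO-partitioned optimizer state for one data-parallel partition `R` and model-parallel rank `MM`. A small sketch for inspecting one shard locally is given below; the path is taken from this listing, the shard must already be materialized via Git LFS, and the exact keys printed depend on the DeepSpeed version, so treat the output as illustrative.

```python
# Hedged sketch: peek at the top-level structure of one optimizer-state shard.
# Assumes the file is real binary data (after `git lfs pull`), not a pointer.
import torch

shard = "ckpts/checkpoints_zero_stage_2/global_step80/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt"

state = torch.load(shard, map_location="cpu")
if isinstance(state, dict):
    # DeepSpeed shards load as plain dicts; the key set varies by version.
    print(sorted(state.keys()))
else:
    print(type(state))
```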
ckpts/checkpoints_zero_stage_2/global_step80/debug.err
ADDED
@@ -0,0 +1,65 @@
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:481: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:338: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+The following values were not passed to `accelerate launch` and had defaults used instead:
+`--num_processes` was set to a value of `0`
+`--num_machines` was set to a value of `1`
+`--mixed_precision` was set to a value of `'no'`
+`--dynamo_backend` was set to a value of `'no'`
+To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:481: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:338: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:338: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
+_torch_pytree._register_pytree_node(
+/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/__init__.py:46: UserWarning: apex not installed, gpu_migration will not swap api for this package.
+warnings.warn(
+2024-05-22:14:57:47,840 INFO [__main__.py:251] Verbosity set to INFO
+2024-05-22:14:57:56,278 INFO [__main__.py:335] Selected Tasks: ['arc_easy', 'hellaswag', 'mrpc', 'openbookqa', 'sst2', 'winogrande']
+2024-05-22:14:57:56,279 INFO [evaluator.py:131] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2024-05-22:14:57:56,279 INFO [evaluator.py:177] Initializing hf model, with arguments: {'pretrained': '/mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step80'}
+[W socket.cpp:464] [c10d] The server socket cannot be initialized on [::]:12345 (errno: 97 - Address family not supported by protocol).
+[W socket.cpp:697] [c10d] The client socket cannot be initialized to connect to [localhost]:12345 (errno: 97 - Address family not supported by protocol).
+[W socket.cpp:697] [c10d] The client socket cannot be initialized to connect to [localhost]:12345 (errno: 97 - Address family not supported by protocol).
+[W socket.cpp:697] [c10d] The client socket cannot be initialized to connect to [localhost]:12345 (errno: 97 - Address family not supported by protocol).
+2024-05-22:14:57:58,809 INFO [huggingface.py:164] Using device 'cuda'
+Traceback (most recent call last):
+File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
+return _run_code(code, main_globals, None,
+File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
+exec(code, run_globals)
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 417, in <module>
+cli_evaluate()
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate
+results = evaluator.simple_evaluate(
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
+return fn(*args, **kwargs)
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate
+lm = lm_eval.api.registry.get_model(model).create_from_arg_string(
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string
+return cls(**args, **args2)
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 190, in __init__
+self._get_config(
+File "/mnt/weka/peacock/idc/cronscript/lm-evaluation-harness/lm_eval/models/huggingface.py", line 471, in _get_config
+self._config = transformers.AutoConfig.from_pretrained(
+File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 934, in from_pretrained
+config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
+File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 632, in get_config_dict
+config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
+File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 689, in _get_config_dict
+resolved_config_file = cached_file(
+File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 370, in cached_file
+raise EnvironmentError(
+OSError: /mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step80 does not appear to have a file named config.json. Checkout 'https://huggingface.co//mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step80/tree/main' for available files.
+Traceback (most recent call last):
+File "/usr/local/bin/accelerate", line 8, in <module>
+sys.exit(main())
+File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 46, in main
+args.func(args)
+File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1082, in launch_command
+simple_launcher(args)
+File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 688, in simple_launcher
+raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
+subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'lm_eval', '--model', 'hf', '--model_args', 'pretrained=/mnt/weka/peacock/idc/cronscript/ckpts//hf_ckpt//global_step80', '--tasks', 'hellaswag,arc_easy,openbookqa,winogrande,sst2,mrpc', '--batch_size', 'auto']' returned non-zero exit status 1.
ckpts/checkpoints_zero_stage_2/global_step80/debug.out
ADDED
File without changes
ckpts/checkpoints_zero_stage_2/global_step80/layer_01-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00c05e168931cc0351f55bbc01b605bd26cb1a84f2e3789a0677e746fb2da53c
+size 51905935
ckpts/checkpoints_zero_stage_2/global_step80/layer_01-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b90298c6641f7b52adafb4fd56979bc6d8e497a21b0166f4df6612757c773d59
+size 51905935
ckpts/checkpoints_zero_stage_2/global_step80/layer_01-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22c24e2763bf587f0e729aa2fcb55812f234b3d2f9f076a2799d749dfdf1634b
+size 51905935
ckpts/checkpoints_zero_stage_2/global_step80/layer_03-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ee72f0ec50c3b33cf51e463e821b2c5f8870569b2f1bb8cd5615bb4bc0dc6fd
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_03-model_02-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0de0ada53aad4505363ce604c322467b815564de3167c06adeea8280b420d8a4
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_03-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7a688cd078a5ccdfb6bd32a4ac0e159816a08d39a1ac85034c339a12ccf9876
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b83a193c99ce934287487c28901f2230ac990f91b9dfc2f3bebf77aff7b8d75
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d384bdc809026461398c1196e67e3fd10fd68b619d786ceb2cd6960fa0df745
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_02-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0d489e99dc6d78c2a5f6d0ec6462891f77b3ca90d791d3d98c438ede169e1bb
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_04-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9deb1b388210a7fbdeb0778465f7c93e566fe585abf088bd33fd14b29480a9b0
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33709595b98603186e534fd86ccc2ce5a7ae02f8375a40591745687abc2b0867
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:159d1f3fcaed1108f84afbe7887c621bb7b73e00bf78f9568fa134f522690b8b
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_02-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9edde2bf9dae4e3d255e8984281a3192858d27e00fa017fed286f440ca778d0d
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_05-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db0a823b29bc1d3f679b5d381ea84a951fa9383effd9606c760531e5a7006835
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc91533e49216a2faf3050f775ccab91a135b9f6ec82b7141a2bf08aabf5506d
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d14cebc24fefb4ac6aa58e8cf781bf40cac78d3d781f8ebcf1267bf958eda359
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_02-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:921ccd22b999b40d54e3a57cb2c43fc57e554ecce6664b04156323f952ba1a9b
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_06-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb789b6cd2096d31dcddd6a175d4c7147c64eadb6ae53704f740341c68101368
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59af0c075636c9d46d1251626b77cc6521d53bcbd62784e15feb213f2c4a589c
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64e3b59c80f6094d58a847f879b12e59bf9ce25e85d1c110a005b3771c31ba74
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_02-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da83697debb11aafc3229b569f81add5e75d4d87c2ebb02d7bc663c272abb2ed
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_07-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c251848ecce86c3b3374aba783c473ad22c2105da892774d003336bcb7a4bb1f
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40c25384673523d6fd9ebad91a7ad1074dba6db1dd90b30929d199dc1977fd9b
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9ba036e29617f0221a333d06540a9103c85c160dfd47c3a26eb61ff449e805f
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_02-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cff6dc45b0c749585c295db43807c141b3e7c1928dcc12058ce92894aef1079
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_08-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ccfa5f86ed8813455f03bd4cd2df5e1309abd0b62869a421d5ea6b5e0c90fc37
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cce272865ff0945ec37429df49c6e4426416a2e03a39174e6dc4d6e0a58200cd
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_01-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf03be5b64a16d32fc6634bd87ab6e42d3a6e4cc90a602eae0bcbf8e9d26922e
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_02-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:404c9566e4dd7a881435b73ba1e43f04ef575accb6452ceb7714941473fdf234
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_09-model_03-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69cbcba8ef4c2444b2f65d4a3c2efba1a4f8c588438b9d6cf25509cf12919990
+size 20983188
ckpts/checkpoints_zero_stage_2/global_step80/layer_10-model_00-model_states.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac9260b078996250637ca1512e376ced0ce76d63aefac7ecdf00deaa650bc9ad
+size 20983188