peacock-data-public-datasets-idc-bigscience/experiments/gpt2-meg-ds-3d/meg_ds_3d_gpt2_perf_n16-ds-on.out
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
using world size: 64, data-parallel-size: 1, tensor-model-parallel size: 4, pipeline-model-parallel size: 16
using torch.float16 for parameters ...
------------------------ arguments ------------------------
accumulate_allreduce_grads_in_fp32 .............. False
adam_beta1 ...................................... 0.9
adam_beta2 ...................................... 0.999
adam_eps ........................................ 1e-08
adlr_autoresume ................................. False
adlr_autoresume_interval ........................ 1000
apply_query_key_layer_scaling ................... True
apply_residual_connection_post_layernorm ........ False
attention_dropout ............................... 0.1
attention_softmax_in_fp32 ....................... False
bert_binary_head ................................ True
bert_load ....................................... None
bf16 ............................................ False
bias_dropout_fusion ............................. True
bias_gelu_fusion ................................ True
biencoder_projection_dim ........................ 0
biencoder_shared_query_context_model ............ False
block_data_path ................................. None
checkpoint_activations .......................... True
checkpoint_in_cpu ............................... False
checkpoint_num_layers ........................... 1
clip_grad ....................................... 1.0
consumed_train_samples .......................... 0
consumed_valid_samples .......................... 0
contigious_checkpointing ........................ False
cpu_optimizer ................................... False
data_impl ....................................... mmap
data_parallel_size .............................. 1
data_path ....................................... ['/gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document']
dataloader_type ................................. single
DDP_impl ........................................ local
decoder_seq_length .............................. None
deepscale ....................................... False
deepscale_config ................................ None
deepspeed ....................................... True
deepspeed_activation_checkpointing .............. True
deepspeed_config ................................ ./ds_config.json
deepspeed_mpi ................................... False
distribute_checkpointed_activations ............. False
distributed_backend ............................. nccl
embedding_path .................................. None
encoder_seq_length .............................. 1024
eod_mask_loss ................................... False
eval_interval ................................... 100
eval_iters ...................................... 10
evidence_data_path .............................. None
exit_duration_in_mins ........................... None
exit_interval ................................... None
ffn_hidden_size ................................. 32768
finetune ........................................ False
fp16 ............................................ True
fp16_lm_cross_entropy ........................... False
fp32_residual_connection ........................ False
global_batch_size ............................... 1024
hidden_dropout .................................. 0.1
hidden_size ..................................... 8192
hysteresis ...................................... 2
ict_head_size ................................... None
ict_load ........................................ None
img_dim ......................................... 224
indexer_batch_size .............................. 128
indexer_log_interval ............................ 1000
init_method_std ................................. 0.02
init_method_xavier_uniform ...................... False
initial_loss_scale .............................. 4294967296
kv_channels ..................................... 256
layernorm_epsilon ............................... 1e-05
lazy_mpu_init ................................... None
load ............................................ /gpfsscratch/rech/six/ura81os/checkpoints/gpt2-meg-ds
local_rank ...................................... 0
log_batch_size_to_tensorboard ................... False
log_interval .................................... 1
log_learning_rate_to_tensorboard ................ True
log_loss_scale_to_tensorboard ................... True
log_num_zeros_in_grad ........................... False
log_params_norm ................................. False
log_timers_to_tensorboard ....................... False
log_validation_ppl_to_tensorboard ............... False
loss_scale ...................................... 12.0
loss_scale_window ............................... 1000
lr .............................................. 0.00015
lr_decay_iters .................................. 800
lr_decay_samples ................................ None
lr_decay_style .................................. cosine
lr_warmup_fraction .............................. 0.01
lr_warmup_iters ................................. 0
lr_warmup_samples ............................... 0
make_vocab_size_divisible_by .................... 128
mask_prob ....................................... 0.15
masked_softmax_fusion ........................... True
max_position_embeddings ......................... 1024
merge_file ...................................... /gpfswork/rech/six/commun/models-custom/megatron-gpt2/megatron_lm_345m_v0.0/release/gpt2-merges.txt
micro_batch_size ................................ 4
min_loss_scale .................................. 1.0
min_lr .......................................... 1e-05
mmap_warmup ..................................... False
no_load_optim ................................... None
no_load_rng ..................................... None
no_save_optim ................................... None
no_save_rng ..................................... None
num_attention_heads ............................. 32
num_channels .................................... 3
num_classes ..................................... 1000
num_layers ...................................... 64
num_layers_per_virtual_pipeline_stage ........... None
num_workers ..................................... 2
onnx_safe ....................................... None
openai_gelu ..................................... False
optimizer ....................................... adam
override_lr_scheduler ........................... False
params_dtype .................................... torch.float16
partition_activations ........................... False
patch_dim ....................................... 16
pipeline_model_parallel_size .................... 16
profile_backward ................................ False
query_in_block_prob ............................. 0.1
rampup_batch_size ............................... None
rank ............................................ 0
remote_device ................................... none
reset_attention_mask ............................ False
reset_position_ids .............................. False
retriever_report_topk_accuracies ................ []
retriever_score_scaling ......................... False
retriever_seq_length ............................ 256
sample_rate ..................................... 1.0
save ............................................ /gpfsscratch/rech/six/ura81os/checkpoints/gpt2-meg-ds
save_interval ................................... 500
scatter_gather_tensors_in_pipeline .............. True
seed ............................................ 1234
seq_length ...................................... 1024
sgd_momentum .................................... 0.9
short_seq_prob .................................. 0.1
split ........................................... 949,50,1
synchronize_each_layer .......................... False
tensor_model_parallel_size ...................... 4
tensorboard_dir ................................. None
tensorboard_log_interval ........................ 1
tensorboard_queue_size .......................... 1000
titles_data_path ................................ None
tokenizer_type .................................. GPT2BPETokenizer
train_iters ..................................... 1000
train_samples ................................... None
use_checkpoint_lr_scheduler ..................... False
use_contiguous_buffers_in_ddp ................... False
use_cpu_initialization .......................... None
use_one_sent_docs ............................... False
virtual_pipeline_model_parallel_size ............ None
vocab_extra_ids ................................. 0
vocab_file ...................................... /gpfswork/rech/six/commun/models-custom/megatron-gpt2/megatron_lm_345m_v0.0/release/gpt2-vocab.json
weight_decay .................................... 0.01
world_size ...................................... 64
zero_stage ...................................... 0
-------------------- end of arguments ---------------------
setting number of micro-batches to constant 256
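Note: the batch geometry above is internally consistent; a quick sanity check of the arithmetic (a sketch, not Megatron's actual code):

# 64 GPUs factor into data x tensor x pipeline parallelism.
world_size = 64
dp, tp, pp = 1, 4, 16
assert dp * tp * pp == world_size          # 1 * 4 * 16 = 64

# Micro-batch count per step comes from the global batch split over DP replicas.
global_batch_size = 1024
micro_batch_size = 4
num_micro_batches = global_batch_size // (micro_batch_size * dp)
assert num_micro_batches == 256            # matches "constant 256" above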
> building GPT2BPETokenizer tokenizer ...
> padded vocab (size: 50257) with 431 dummy tokens (new size: 50688)
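Note: the padding follows Megatron's rule that the vocabulary must split evenly across tensor-parallel ranks; a sketch of the calculation:

# Vocab is padded to a multiple of make_vocab_size_divisible_by * TP size.
orig_vocab = 50257
multiple = 128 * 4                                   # 128 * tensor_model_parallel_size = 512
padded = ((orig_vocab + multiple - 1) // multiple) * multiple
assert padded == 50688 and padded - orig_vocab == 431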
> initializing torch distributed ...
> initializing tensor model parallel with size 4
> initializing pipeline model parallel with size 16
> setting random seeds to 1234 ...
[2021-06-10 20:47:37,205] [INFO] [checkpointing.py:226:model_parallel_cuda_manual_seed] > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
> compiling dataset index builder ...
make: Entering directory '/gpfsdswork/projects/rech/six/ura81os/stas/code/megatron-jeffra/megatron/data'
make: Nothing to be done for 'default'.
make: Leaving directory '/gpfsdswork/projects/rech/six/ura81os/stas/code/megatron-jeffra/megatron/data'
>>> done with dataset index builder. Compilation time: 0.106 seconds
> compiling and loading fused kernels ...
/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/torch/utils/cpp_extension.py:283: UserWarning:
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
warnings.warn(WRONG_COMPILER_WARNING.format(
Detected CUDA files, patching ldflags
Emitting ninja build file /gpfsdswork/projects/rech/six/ura81os/stas/code/megatron-jeffra/megatron/fused_kernels/build/build.ninja...
Building extension module scaled_upper_triang_masked_softmax_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module scaled_upper_triang_masked_softmax_cuda...
Detected CUDA files, patching ldflags
Emitting ninja build file /gpfsdswork/projects/rech/six/ura81os/stas/code/megatron-jeffra/megatron/fused_kernels/build/build.ninja...
Building extension module scaled_masked_softmax_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module scaled_masked_softmax_cuda...
Detected CUDA files, patching ldflags
Emitting ninja build file /gpfsdswork/projects/rech/six/ura81os/stas/code/megatron-jeffra/megatron/fused_kernels/build/build.ninja...
Building extension module fused_mix_prec_layer_norm_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_mix_prec_layer_norm_cuda...
>>> done with compiling and loading fused kernels. Compilation time: 12.900 seconds
time to initialize megatron (seconds): -41.720
[after megatron is initialized] datetime: 2021-06-10 20:47:50
building GPT model ...
[2021-06-10 20:47:50,326] [INFO] [utils.py:627:see_memory_usage] Before Building Model
/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/torch/cuda/memory.py:373: FutureWarning: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved
warnings.warn(
/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/torch/cuda/memory.py:381: FutureWarning: torch.cuda.max_memory_cached has been renamed to torch.cuda.max_memory_reserved
warnings.warn(
[2021-06-10 20:47:50,329] [INFO] [utils.py:628:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2021-06-10 20:47:50,329] [INFO] [utils.py:636:see_memory_usage] CPU Virtual Memory: used = 38.96 GB, percent = 20.8%
SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=0, model=1): 1, ProcessCoord(pipe=0, data=0, model=2): 2, ProcessCoord(pipe=0, data=0, model=3): 3, ProcessCoord(pipe=1, data=0, model=0): 4, ProcessCoord(pipe=1, data=0, model=1): 5, ProcessCoord(pipe=1, data=0, model=2): 6, ProcessCoord(pipe=1, data=0, model=3): 7, ProcessCoord(pipe=2, data=0, model=0): 8, ProcessCoord(pipe=2, data=0, model=1): 9, ProcessCoord(pipe=2, data=0, model=2): 10, ProcessCoord(pipe=2, data=0, model=3): 11, ProcessCoord(pipe=3, data=0, model=0): 12, ProcessCoord(pipe=3, data=0, model=1): 13, ProcessCoord(pipe=3, data=0, model=2): 14, ProcessCoord(pipe=3, data=0, model=3): 15, ProcessCoord(pipe=4, data=0, model=0): 16, ProcessCoord(pipe=4, data=0, model=1): 17, ProcessCoord(pipe=4, data=0, model=2): 18, ProcessCoord(pipe=4, data=0, model=3): 19, ProcessCoord(pipe=5, data=0, model=0): 20, ProcessCoord(pipe=5, data=0, model=1): 21, ProcessCoord(pipe=5, data=0, model=2): 22, ProcessCoord(pipe=5, data=0, model=3): 23, ProcessCoord(pipe=6, data=0, model=0): 24, ProcessCoord(pipe=6, data=0, model=1): 25, ProcessCoord(pipe=6, data=0, model=2): 26, ProcessCoord(pipe=6, data=0, model=3): 27, ProcessCoord(pipe=7, data=0, model=0): 28, ProcessCoord(pipe=7, data=0, model=1): 29, ProcessCoord(pipe=7, data=0, model=2): 30, ProcessCoord(pipe=7, data=0, model=3): 31, ProcessCoord(pipe=8, data=0, model=0): 32, ProcessCoord(pipe=8, data=0, model=1): 33, ProcessCoord(pipe=8, data=0, model=2): 34, ProcessCoord(pipe=8, data=0, model=3): 35, ProcessCoord(pipe=9, data=0, model=0): 36, ProcessCoord(pipe=9, data=0, model=1): 37, ProcessCoord(pipe=9, data=0, model=2): 38, ProcessCoord(pipe=9, data=0, model=3): 39, ProcessCoord(pipe=10, data=0, model=0): 40, ProcessCoord(pipe=10, data=0, model=1): 41, ProcessCoord(pipe=10, data=0, model=2): 42, ProcessCoord(pipe=10, data=0, model=3): 43, ProcessCoord(pipe=11, data=0, model=0): 44, ProcessCoord(pipe=11, data=0, model=1): 45, ProcessCoord(pipe=11, data=0, model=2): 46, ProcessCoord(pipe=11, data=0, model=3): 47, ProcessCoord(pipe=12, data=0, model=0): 48, ProcessCoord(pipe=12, data=0, model=1): 49, ProcessCoord(pipe=12, data=0, model=2): 50, ProcessCoord(pipe=12, data=0, model=3): 51, ProcessCoord(pipe=13, data=0, model=0): 52, ProcessCoord(pipe=13, data=0, model=1): 53, ProcessCoord(pipe=13, data=0, model=2): 54, ProcessCoord(pipe=13, data=0, model=3): 55, ProcessCoord(pipe=14, data=0, model=0): 56, ProcessCoord(pipe=14, data=0, model=1): 57, ProcessCoord(pipe=14, data=0, model=2): 58, ProcessCoord(pipe=14, data=0, model=3): 59, ProcessCoord(pipe=15, data=0, model=0): 60, ProcessCoord(pipe=15, data=0, model=1): 61, ProcessCoord(pipe=15, data=0, model=2): 62, ProcessCoord(pipe=15, data=0, model=3): 63}
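Note: in the mapping above the model (tensor) axis varies fastest, then data, then pipe; a minimal sketch of the coordinate-to-rank rule, assuming that axis order (which matches the printed coordinates):

# Map a (pipe, data, model) coordinate to a global rank; model varies fastest.
def coord_to_rank(pipe, data, model, dp=1, tp=4):
    return (pipe * dp + data) * tp + model

assert coord_to_rank(0, 0, 3) == 3    # ProcessCoord(pipe=0, data=0, model=3)
assert coord_to_rank(1, 0, 0) == 4    # ProcessCoord(pipe=1, data=0, model=0)
assert coord_to_rank(15, 0, 3) == 63  # last rank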
[2021-06-10 20:47:51,179] [INFO] [module.py:360:_partition_layers] Partitioning pipeline stages with method type:transformer
stage=0 layers=7
0: _to_float16
1: EmbeddingPipe
2: <lambda>
3: ParallelTransformerLayerPipe
4: ParallelTransformerLayerPipe
5: ParallelTransformerLayerPipe
6: ParallelTransformerLayerPipe
stage=1 layers=4
7: ParallelTransformerLayerPipe
8: ParallelTransformerLayerPipe
9: ParallelTransformerLayerPipe
10: ParallelTransformerLayerPipe
stage=2 layers=4
11: ParallelTransformerLayerPipe
12: ParallelTransformerLayerPipe
13: ParallelTransformerLayerPipe
14: ParallelTransformerLayerPipe
stage=3 layers=4
15: ParallelTransformerLayerPipe
16: ParallelTransformerLayerPipe
17: ParallelTransformerLayerPipe
18: ParallelTransformerLayerPipe
stage=4 layers=4
19: ParallelTransformerLayerPipe
20: ParallelTransformerLayerPipe
21: ParallelTransformerLayerPipe
22: ParallelTransformerLayerPipe
stage=5 layers=4
23: ParallelTransformerLayerPipe
24: ParallelTransformerLayerPipe
25: ParallelTransformerLayerPipe
26: ParallelTransformerLayerPipe
stage=6 layers=4
27: ParallelTransformerLayerPipe
28: ParallelTransformerLayerPipe
29: ParallelTransformerLayerPipe
30: ParallelTransformerLayerPipe
stage=7 layers=4
31: ParallelTransformerLayerPipe
32: ParallelTransformerLayerPipe
33: ParallelTransformerLayerPipe
34: ParallelTransformerLayerPipe
stage=8 layers=4
35: ParallelTransformerLayerPipe
36: ParallelTransformerLayerPipe
37: ParallelTransformerLayerPipe
38: ParallelTransformerLayerPipe
stage=9 layers=4
39: ParallelTransformerLayerPipe
40: ParallelTransformerLayerPipe
41: ParallelTransformerLayerPipe
42: ParallelTransformerLayerPipe
stage=10 layers=4
43: ParallelTransformerLayerPipe
44: ParallelTransformerLayerPipe
45: ParallelTransformerLayerPipe
46: ParallelTransformerLayerPipe
stage=11 layers=4
47: ParallelTransformerLayerPipe
48: ParallelTransformerLayerPipe
49: ParallelTransformerLayerPipe
50: ParallelTransformerLayerPipe
stage=12 layers=4
51: ParallelTransformerLayerPipe
52: ParallelTransformerLayerPipe
53: ParallelTransformerLayerPipe
54: ParallelTransformerLayerPipe
stage=13 layers=4
55: ParallelTransformerLayerPipe
56: ParallelTransformerLayerPipe
57: ParallelTransformerLayerPipe
58: ParallelTransformerLayerPipe
stage=14 layers=4
59: ParallelTransformerLayerPipe
60: ParallelTransformerLayerPipe
61: ParallelTransformerLayerPipe
62: ParallelTransformerLayerPipe
stage=15 layers=8
63: ParallelTransformerLayerPipe
64: ParallelTransformerLayerPipe
65: ParallelTransformerLayerPipe
66: ParallelTransformerLayerPipe
67: <lambda>
68: MixedFusedLayerNorm
69: EmbeddingPipe
70: float16_to_fp32
loss: CrossEntropy
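Note: the per-rank parameter counts reported below can be reproduced from the arguments (hidden_size 8192, ffn_hidden_size 32768, padded vocab 50688, 1024 positions, TP 4); a back-of-the-envelope sketch, assuming standard Megatron layer shapes:

h, ffn, vocab, pos, tp = 8192, 32768, 50688, 1024, 4

# One transformer layer, per tensor-parallel rank:
attn = (3 * h * h + 3 * h) // tp + (h * h) // tp + h   # QKV (split) + output proj (bias replicated)
mlp = (h * ffn + ffn) // tp + (ffn * h) // tp + h      # h->4h (split) + 4h->h (bias replicated)
norms = 2 * 2 * h                                      # two LayerNorms, replicated
per_layer = attn + mlp + norms                         # 201,390,080

assert 4 * per_layer == 805_560_320                    # middle stages: 4 layers each

embed = (vocab // tp) * h + pos * h                    # word-embedding shard + replicated position embeddings
assert 4 * per_layer + embed == 917_757_952            # stage 0
assert 4 * per_layer + 2 * h + embed == 917_774_336    # stage 15: + final LayerNorm + tied EmbeddingPipe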
> number of parameters on (tensor, pipeline) model parallel rank (2, 12): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 7): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 8): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 8): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 8): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 12): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 12): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 12): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 8): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 7): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 7): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 7): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 9): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 9): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 9): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 1): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 4): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 4): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 4): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 11): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 11): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 11): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 11): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 3): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 3): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 3): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 3): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 9): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 14): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 1): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 1): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 1): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 4): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 6): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 6): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 6): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 14): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 14): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 14): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 6): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 5): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 5): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 5): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 5): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 13): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 13): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 13): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 13): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 2): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 2): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 2): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 2): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (0, 10): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (3, 10): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (1, 10): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 10): 805560320
> number of parameters on (tensor, pipeline) model parallel rank (2, 15): 917774336
> number of parameters on (tensor, pipeline) model parallel rank (0, 15): 917774336
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 917757952
> number of parameters on (tensor, pipeline) model parallel rank (3, 15): 917774336
> number of parameters on (tensor, pipeline) model parallel rank (1, 15): 917774336
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 917757952
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 917757952
[2021-06-10 20:47:51,720] [INFO] [utils.py:627:see_memory_usage] After Building Model
[2021-06-10 20:47:51,721] [INFO] [utils.py:628:see_memory_usage] MA 1.73 GB Max_MA 1.73 GB CA 1.75 GB Max_CA 2 GB
[2021-06-10 20:47:51,721] [INFO] [utils.py:636:see_memory_usage] CPU Virtual Memory: used = 39.14 GB, percent = 20.9%
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 917757952
> learning rate decay style: cosine
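Note: with lr_warmup_fraction 0.01 and lr_decay_iters 800, the schedule warms up for 8 iterations and then decays as a cosine from lr 1.5e-4 toward min_lr 1e-5; a sketch of the shape (illustrative, not the exact AnnealingLR implementation):

import math

lr_max, lr_min = 1.5e-4, 1e-5
warmup, decay_iters = 8, 800          # warmup = 0.01 * 800

def lr_at(it):
    if it < warmup:                   # linear warmup to lr_max
        return lr_max * (it + 1) / warmup
    frac = min(it - warmup, decay_iters) / decay_iters
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * frac))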
DeepSpeed is enabled.
[2021-06-10 20:47:51,724] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.4.0+407ff0f, git-hash=407ff0f, git-branch=megatron2.4-3d
[2021-06-10 20:47:51,764] [INFO] [engine.py:172:__init__] DeepSpeed Flops Profiler Enabled: False
[2021-06-10 20:47:51,764] [INFO] [engine.py:682:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer
[2021-06-10 20:47:51,765] [INFO] [engine.py:687:_configure_optimizer] Using client Optimizer as basic optimizer
[2021-06-10 20:47:51,765] [INFO] [engine.py:696:_configure_optimizer] DeepSpeed Basic Optimizer = FusedAdam
[2021-06-10 20:47:51,765] [INFO] [logging.py:60:log_dist] [Rank 0] Creating fp16 unfused optimizer with dynamic loss scale
[2021-06-10 20:47:51,765] [INFO] [unfused_optimizer.py:37:__init__] Fused Lamb Legacy : False
[2021-06-10 20:47:51,885] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam
[2021-06-10 20:47:51,885] [INFO] [engine.py:509:_configure_lr_scheduler] DeepSpeed using client LR scheduler
[2021-06-10 20:47:51,885] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed LR Scheduler = <megatron.learning_rates.AnnealingLR object at 0x1533710dffd0>
[2021-06-10 20:47:51,885] [INFO] [logging.py:60:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999)]
[2021-06-10 20:47:51,885] [INFO] [config.py:900:print] DeepSpeedEngine configuration:
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] allreduce_always_fp32 ........ False
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] amp_enabled .................. False
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] amp_params ................... False
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] checkpoint_tag_validation_enabled True
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] checkpoint_tag_validation_fail False
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] disable_allgather ............ False
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] dump_state ................... False
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] dynamic_loss_scale_args ...... {'init_scale': 4096, 'scale_window': 500, 'delayed_shift': 2, 'min_scale': 1}
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] eigenvalue_enabled ........... False
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] eigenvalue_gas_boundary_resolution 1
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] eigenvalue_layer_name ........ bert.encoder.layer
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] eigenvalue_layer_num ......... 0
[2021-06-10 20:47:51,885] [INFO] [config.py:904:print] eigenvalue_max_iter .......... 100
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] optimizer_params ............. None
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] pld_enabled .................. False
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] pld_params ................... False
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] prescale_gradients ........... True
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_change_rate ......... 0.001
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_groups .............. 1
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_offset .............. 1000
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_period .............. 1000
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_rounding ............ 0
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_start_bits .......... 16
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_target_bits ......... 8
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_training_enabled .... False
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_type ................ 0
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] quantize_verbose ............. False
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] scheduler_name ............... None
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] scheduler_params ............. None
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] sparse_attention ............. None
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] sparse_gradients_enabled ..... False
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] steps_per_print .............. 2000
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] tensorboard_enabled .......... False
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-06-10 20:47:51,886] [INFO] [config.py:904:print] tensorboard_output_path ......
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] train_batch_size ............. 1024
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] train_micro_batch_size_per_gpu 4
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] use_quantizer_kernel ......... False
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] wall_clock_breakdown ......... False
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] world_size ................... 1
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] zero_allow_untested_optimizer False
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] zero_config .................. {
"stage": 0,
"contiguous_gradients": false,
"reduce_scatter": true,
"reduce_bucket_size": 5.000000e+08,
"allgather_partitions": true,
"allgather_bucket_size": 5.000000e+08,
"overlap_comm": false,
"load_from_fp32_weights": true,
"elastic_checkpoint": true,
"offload_param": null,
"offload_optimizer": null,
"sub_group_size": 1.000000e+12,
"prefetch_bucket_size": 5.000000e+07,
"param_persistence_threshold": 1.000000e+05,
"max_live_parameters": 1.000000e+09,
"max_reuse_distance": 1.000000e+09,
"gather_fp16_weights_on_model_save": false,
"ignore_unused_parameters": true,
"legacy_stage1": false
}
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] zero_enabled ................. False
[2021-06-10 20:47:51,887] [INFO] [config.py:904:print] zero_optimization_stage ...... 0
[2021-06-10 20:47:51,887] [INFO] [config.py:906:print] json = {
"train_micro_batch_size_per_gpu": 4,
"gradient_accumulation_steps": 256,
"gradient_clipping": 1.0,
"prescale_gradients": true,
"zero_optimization": {
"stage": 0
},
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 500,
"hysteresis": 2,
"min_loss_scale": 1,
"initial_scale_power": 12
},
"steps_per_print": 2.000000e+03,
"wall_clock_breakdown": false
}
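Note: because DeepSpeed is enabled, its fp16 block governs loss scaling: initial_scale_power 12 yields the init_scale of 4096 seen in dynamic_loss_scale_args above, superseding Megatron's own initial_loss_scale default shown in the argument dump. A quick check of that correspondence:

# DeepSpeed fp16 config -> dynamic loss scale args (values from this log).
initial_scale_power = 12
assert 2 ** initial_scale_power == 4096   # init_scale in dynamic_loss_scale_args
assert 2 ** 32 == 4294967296              # Megatron's initial_loss_scale default from the arg dump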
[2021-06-10 20:47:51,888] [INFO] [engine.py:76:__init__] CONFIG: micro_batches=256 micro_batch_size=4
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=0 STAGE=0 LAYERS=7 [0, 7) STAGE_PARAMS=917757952 (917.758M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=1 STAGE=0 LAYERS=7 [0, 7) STAGE_PARAMS=917757952 (917.758M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=2 STAGE=0 LAYERS=7 [0, 7) STAGE_PARAMS=917757952 (917.758M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=3 STAGE=0 LAYERS=7 [0, 7) STAGE_PARAMS=917757952 (917.758M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=32 STAGE=8 LAYERS=4 [35, 39) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=34 STAGE=8 LAYERS=4 [35, 39) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=33 STAGE=8 LAYERS=4 [35, 39) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=35 STAGE=8 LAYERS=4 [35, 39) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=17 STAGE=4 LAYERS=4 [19, 23) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=49 STAGE=12 LAYERS=4 [51, 55) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=51 STAGE=12 LAYERS=4 [51, 55) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=48 STAGE=12 LAYERS=4 [51, 55) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=50 STAGE=12 LAYERS=4 [51, 55) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=16 STAGE=4 LAYERS=4 [19, 23) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=19 STAGE=4 LAYERS=4 [19, 23) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=18 STAGE=4 LAYERS=4 [19, 23) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=26 STAGE=6 LAYERS=4 [27, 31) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=24 STAGE=6 LAYERS=4 [27, 31) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=25 STAGE=6 LAYERS=4 [27, 31) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=27 STAGE=6 LAYERS=4 [27, 31) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=43 STAGE=10 LAYERS=4 [43, 47) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=40 STAGE=10 LAYERS=4 [43, 47) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=41 STAGE=10 LAYERS=4 [43, 47) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=42 STAGE=10 LAYERS=4 [43, 47) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=10 STAGE=2 LAYERS=4 [11, 15) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=8 STAGE=2 LAYERS=4 [11, 15) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=11 STAGE=2 LAYERS=4 [11, 15) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=4 STAGE=1 LAYERS=4 [7, 11) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=6 STAGE=1 LAYERS=4 [7, 11) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=7 STAGE=1 LAYERS=4 [7, 11) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=5 STAGE=1 LAYERS=4 [7, 11) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=58 STAGE=14 LAYERS=4 [59, 63) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=57 STAGE=14 LAYERS=4 [59, 63) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=56 STAGE=14 LAYERS=4 [59, 63) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=59 STAGE=14 LAYERS=4 [59, 63) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=61 STAGE=15 LAYERS=8 [63, 71) STAGE_PARAMS=917774336 (917.774M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=60 STAGE=15 LAYERS=8 [63, 71) STAGE_PARAMS=917774336 (917.774M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=63 STAGE=15 LAYERS=8 [63, 71) STAGE_PARAMS=917774336 (917.774M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=37 STAGE=9 LAYERS=4 [39, 43) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=36 STAGE=9 LAYERS=4 [39, 43) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=39 STAGE=9 LAYERS=4 [39, 43) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=38 STAGE=9 LAYERS=4 [39, 43) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=44 STAGE=11 LAYERS=4 [47, 51) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=47 STAGE=11 LAYERS=4 [47, 51) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=46 STAGE=11 LAYERS=4 [47, 51) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=45 STAGE=11 LAYERS=4 [47, 51) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=28 STAGE=7 LAYERS=4 [31, 35) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=30 STAGE=7 LAYERS=4 [31, 35) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=31 STAGE=7 LAYERS=4 [31, 35) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=29 STAGE=7 LAYERS=4 [31, 35) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=54 STAGE=13 LAYERS=4 [55, 59) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=55 STAGE=13 LAYERS=4 [55, 59) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=52 STAGE=13 LAYERS=4 [55, 59) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=12 STAGE=3 LAYERS=4 [15, 19) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=13 STAGE=3 LAYERS=4 [15, 19) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=14 STAGE=3 LAYERS=4 [15, 19) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=9 STAGE=2 LAYERS=4 [11, 15) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=53 STAGE=13 LAYERS=4 [55, 59) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M)
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=15 STAGE=3 LAYERS=4 [15, 19) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M) | |
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=62 STAGE=15 LAYERS=8 [63, 71) STAGE_PARAMS=917774336 (917.774M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M) | |
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=20 STAGE=5 LAYERS=4 [23, 27) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M) | |
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=23 STAGE=5 LAYERS=4 [23, 27) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M) | |
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=22 STAGE=5 LAYERS=4 [23, 27) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M) | |
[2021-06-10 20:47:52,226] [INFO] [engine.py:134:__init__] RANK=21 STAGE=5 LAYERS=4 [23, 27) STAGE_PARAMS=805560320 (805.560M) TOTAL_PARAMS=52453507072 (52453.507M) UNIQUE_PARAMS=52004716544 (52004.717M) | |
WARNING: could not find the metadata file /gpfsscratch/rech/six/ura81os/checkpoints/gpt2-meg-ds/latest_checkpointed_iteration.txt | |
will not load any checkpoints and will start from random | |
time (ms) | load-checkpoint: 11.96 | |
[after model, optimizer, and learning rate scheduler are built] datetime: 2021-06-10 20:47:53 | |
> building train, validation, and test datasets ... | |
> datasets target sizes (minimum size): | |
train: 1024000 | |
validation: 112640 | |
test: 10240 | |
> building train, validation, and test datasets for GPT ... | |
> building dataset index ... | |
reading sizes... | |
reading pointers... | |
reading document index... | |
creating numpy buffer of mmap... | |
creating memory view of numpy buffer... | |
> finished creating indexed dataset in 0.032667 seconds | |
number of documents: 10000 | |
> dataset split: | |
train: | |
document indices in [0, 9490) total of 9490 documents | |
validation: | |
document indices in [9490, 9990) total of 500 documents | |
test: | |
document indices in [9990, 10000) total of 10 documents | |
> loading doc-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_train_indexmap_1024000ns_1024sl_1234s_doc_idx.npy | |
> loading sample-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_train_indexmap_1024000ns_1024sl_1234s_sample_idx.npy | |
> loading shuffle-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_train_indexmap_1024000ns_1024sl_1234s_shuffle_idx.npy | |
loaded indexed file in 0.115 seconds | |
total number of samples: 1024856 | |
total number of epochs: 99 | |
> loading doc-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_valid_indexmap_112640ns_1024sl_1234s_doc_idx.npy | |
> loading sample-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_valid_indexmap_112640ns_1024sl_1234s_sample_idx.npy | |
> loading shuffle-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_valid_indexmap_112640ns_1024sl_1234s_shuffle_idx.npy | |
loaded indexed file in 0.050 seconds | |
total number of samples: 113200 | |
total number of epochs: 182 | |
> loading doc-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_test_indexmap_10240ns_1024sl_1234s_doc_idx.npy | |
> loading sample-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_test_indexmap_10240ns_1024sl_1234s_sample_idx.npy | |
> loading shuffle-idx mapping from /gpfswork/rech/six/commun/datasets-custom/openwebtext-10k/meg-gpt2_text_document_test_indexmap_10240ns_1024sl_1234s_shuffle_idx.npy | |
loaded indexed file in 0.023 seconds | |
total number of samples: 10255 | |
total number of epochs: 672 | |
> finished creating GPT datasets ... | |
[after dataloaders are built] datetime: 2021-06-10 20:47:54 | |
time (ms) | model-and-optimizer-setup: 2744.35 | train/valid/test-data-iterators-setup: 815.33 | |
done with setup ... | |
training ... | |
[before the start of training step] datetime: 2021-06-10 20:47:54 | |
[2021-06-10 20:47:54,339] [INFO] [checkpointing.py:408:forward] Activation Checkpointing Information | |
[2021-06-10 20:47:54,339] [INFO] [checkpointing.py:409:forward] ----Partition Activations False, CPU CHECKPOINTING False | |
[2021-06-10 20:47:54,339] [INFO] [checkpointing.py:412:forward] ----contiguous Memory Checkpointing False with 64 total layers | |
[2021-06-10 20:47:54,339] [INFO] [checkpointing.py:415:forward] ----Synchronization False | |
[2021-06-10 20:47:54,339] [INFO] [checkpointing.py:416:forward] ----Profiling time in checkpointing False | |
[Rank 1] (after 1 iterations) memory (MB) | allocated: 12337.45654296875 | max allocated: 19961.072265625 | reserved: 23288.0 | max reserved: 23288.0 | |
[Rank 61] (after 1 iterations) memory (MB) | allocated: 12923.83251953125 | max allocated: 18175.37841796875 | reserved: 19286.0 | max reserved: 19286.0 | |
[Rank 5] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17461.94775390625 | reserved: 20002.0 | max reserved: 20002.0 | |
[Rank 9] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17189.947265625 | reserved: 19824.0 | max reserved: 19824.0 | |
[Rank 17] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16645.9462890625 | reserved: 19216.0 | max reserved: 19216.0 | |
[Rank 13] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16917.94677734375 | reserved: 19456.0 | max reserved: 19456.0 | |
[Rank 25] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16101.9453125 | reserved: 18640.0 | max reserved: 18640.0 | |
[Rank 29] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15829.94482421875 | reserved: 18384.0 | max reserved: 18384.0 | |
[Rank 21] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16373.94580078125 | reserved: 18882.0 | max reserved: 18882.0 | |
[Rank 33] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15557.9443359375 | reserved: 18654.0 | max reserved: 18654.0 | |
[Rank 62] (after 1 iterations) memory (MB) | allocated: 12923.83251953125 | max allocated: 18175.37841796875 | reserved: 19286.0 | max reserved: 19286.0 | |
[Rank 6] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17461.94775390625 | reserved: 20002.0 | max reserved: 20002.0 | |
[Rank 2] (after 1 iterations) memory (MB) | allocated: 12337.45654296875 | max allocated: 19961.072265625 | reserved: 23204.0 | max reserved: 23204.0 | |
[Rank 10] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17189.947265625 | reserved: 19760.0 | max reserved: 19760.0 | |
[Rank 18] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16645.9462890625 | reserved: 19218.0 | max reserved: 19218.0 | |
[Rank 14] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16917.94677734375 | reserved: 19424.0 | max reserved: 19424.0 | |
[Rank 0] (after 1 iterations) memory (MB) | allocated: 12337.45654296875 | max allocated: 19961.072265625 | reserved: 22892.0 | max reserved: 22892.0 | |
iteration 1/ 1000 | consumed samples: 1024 | elapsed time per iteration (ms): 159778.9 | learning rate: 1.875E-05 | global batch size: 1024 | lm-loss: 1.244238E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
[Rank 22] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16373.94580078125 | reserved: 18882.0 | max reserved: 18882.0 | |
[Rank 26] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16101.9453125 | reserved: 19024.0 | max reserved: 19024.0 | |
[Rank 41] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18064.0 | max reserved: 18064.0 | |
[Rank 4] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17461.94775390625 | reserved: 20018.0 | max reserved: 20018.0 | |
[Rank 8] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17189.947265625 | reserved: 19794.0 | max reserved: 19794.0 | |
[Rank 45] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18302.0 | max reserved: 18302.0 | |
[Rank 30] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15829.94482421875 | reserved: 18384.0 | max reserved: 18384.0 | |
[Rank 16] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16645.9462890625 | reserved: 19200.0 | max reserved: 19200.0 | |
[Rank 34] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15557.9443359375 | reserved: 18622.0 | max reserved: 18622.0 | |
[Rank 12] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16917.94677734375 | reserved: 19504.0 | max reserved: 19504.0 | |
[Rank 53] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17182.0 | max reserved: 17182.0 | |
[Rank 49] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17710.0 | max reserved: 17710.0 | |
[Rank 60] (after 1 iterations) memory (MB) | allocated: 12923.83251953125 | max allocated: 18175.37841796875 | reserved: 19286.0 | max reserved: 19286.0 | |
[Rank 24] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16101.9453125 | reserved: 19040.0 | max reserved: 19040.0 | |
[Rank 37] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18304.0 | max reserved: 18304.0 | |
[Rank 57] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 16926.0 | max reserved: 16926.0 | |
[Rank 28] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15829.94482421875 | reserved: 18368.0 | max reserved: 18368.0 | |
[Rank 32] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15557.9443359375 | reserved: 18606.0 | max reserved: 18606.0 | |
[Rank 3] (after 1 iterations) memory (MB) | allocated: 12337.45654296875 | max allocated: 19961.072265625 | reserved: 23270.0 | max reserved: 23270.0 | |
[Rank 7] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17461.94775390625 | reserved: 20002.0 | max reserved: 20002.0 | |
[Rank 11] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 17189.947265625 | reserved: 19744.0 | max reserved: 19744.0 | |
[Rank 63] (after 1 iterations) memory (MB) | allocated: 12923.83251953125 | max allocated: 18175.37841796875 | reserved: 19286.0 | max reserved: 19286.0 | |
[Rank 20] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16373.94580078125 | reserved: 18962.0 | max reserved: 18962.0 | |
[Rank 15] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16917.94677734375 | reserved: 19536.0 | max reserved: 19536.0 | |
[Rank 40] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18480.0 | max reserved: 18480.0 | |
[Rank 44] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17934.0 | max reserved: 17934.0 | |
[Rank 19] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16645.9462890625 | reserved: 19200.0 | max reserved: 19200.0 | |
[Rank 23] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16373.94580078125 | reserved: 18912.0 | max reserved: 18912.0 | |
time (ms) | |
[Rank 48] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17710.0 | max reserved: 17710.0 | |
[Rank 31] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15829.94482421875 | reserved: 18384.0 | max reserved: 18384.0 | |
[Rank 27] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 16101.9453125 | reserved: 18640.0 | max reserved: 18640.0 | |
[Rank 42] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18304.0 | max reserved: 18304.0 | |
[Rank 35] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15557.9443359375 | reserved: 18654.0 | max reserved: 18654.0 | |
[Rank 56] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17182.0 | max reserved: 17182.0 | |
[Rank 52] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17182.0 | max reserved: 17182.0 | |
[Rank 36] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18494.0 | max reserved: 18494.0 | |
[Rank 39] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18304.0 | max reserved: 18304.0
[Rank 38] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18304.0 | max reserved: 18304.0
[Rank 54] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17182.0 | max reserved: 17182.0 | |
[Rank 46] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18222.0 | max reserved: 18222.0 | |
[Rank 43] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18224.0 | max reserved: 18224.0 | |
[Rank 50] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17710.0 | max reserved: 17710.0 | |
[Rank 47] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 18302.0 | max reserved: 18302.0 | |
[Rank 58] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17182.0 | max reserved: 17182.0 | |
[Rank 55] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17182.0 | max reserved: 17182.0 | |
[Rank 51] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17710.0 | max reserved: 17710.0 | |
[Rank 59] (after 1 iterations) memory (MB) | allocated: 10837.39501953125 | max allocated: 15446.84716796875 | reserved: 17182.0 | max reserved: 17182.0 | |
iteration 2/ 1000 | consumed samples: 2048 | elapsed time per iteration (ms): 141096.8 | learning rate: 3.750E-05 | global batch size: 1024 | lm-loss: 1.244502E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 3/ 1000 | consumed samples: 3072 | elapsed time per iteration (ms): 137138.4 | learning rate: 5.625E-05 | global batch size: 1024 | lm-loss: 4.103157E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 4/ 1000 | consumed samples: 4096 | elapsed time per iteration (ms): 138928.9 | learning rate: 7.500E-05 | global batch size: 1024 | lm-loss: 4.305696E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 5/ 1000 | consumed samples: 5120 | elapsed time per iteration (ms): 137805.9 | learning rate: 9.375E-05 | global batch size: 1024 | lm-loss: 3.814122E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 6/ 1000 | consumed samples: 6144 | elapsed time per iteration (ms): 139183.6 | learning rate: 1.125E-04 | global batch size: 1024 | lm-loss: 3.368778E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 7/ 1000 | consumed samples: 7168 | elapsed time per iteration (ms): 138604.6 | learning rate: 1.312E-04 | global batch size: 1024 | lm-loss: 3.123441E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 8/ 1000 | consumed samples: 8192 | elapsed time per iteration (ms): 137448.5 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 2.563856E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 9/ 1000 | consumed samples: 9216 | elapsed time per iteration (ms): 134118.7 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 2.213366E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 10/ 1000 | consumed samples: 10240 | elapsed time per iteration (ms): 136533.1 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.981217E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 11/ 1000 | consumed samples: 11264 | elapsed time per iteration (ms): 139544.9 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.872394E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 12/ 1000 | consumed samples: 12288 | elapsed time per iteration (ms): 138324.6 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.740661E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 13/ 1000 | consumed samples: 13312 | elapsed time per iteration (ms): 134446.2 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.575262E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 14/ 1000 | consumed samples: 14336 | elapsed time per iteration (ms): 137764.0 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.397998E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 15/ 1000 | consumed samples: 15360 | elapsed time per iteration (ms): 137041.8 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.245603E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 16/ 1000 | consumed samples: 16384 | elapsed time per iteration (ms): 139143.0 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.082751E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 17/ 1000 | consumed samples: 17408 | elapsed time per iteration (ms): 139118.9 | learning rate: 1.500E-04 | global batch size: 1024 | lm-loss: 1.204085E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 18/ 1000 | consumed samples: 18432 | elapsed time per iteration (ms): 138928.4 | learning rate: 1.499E-04 | global batch size: 1024 | lm-loss: 1.150506E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 19/ 1000 | consumed samples: 19456 | elapsed time per iteration (ms): 139037.8 | learning rate: 1.499E-04 | global batch size: 1024 | lm-loss: 1.115988E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 20/ 1000 | consumed samples: 20480 | elapsed time per iteration (ms): 138096.1 | learning rate: 1.499E-04 | global batch size: 1024 | lm-loss: 9.714051E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 21/ 1000 | consumed samples: 21504 | elapsed time per iteration (ms): 139033.1 | learning rate: 1.499E-04 | global batch size: 1024 | lm-loss: 9.586049E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 22/ 1000 | consumed samples: 22528 | elapsed time per iteration (ms): 136872.8 | learning rate: 1.499E-04 | global batch size: 1024 | lm-loss: 9.537881E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 23/ 1000 | consumed samples: 23552 | elapsed time per iteration (ms): 137788.2 | learning rate: 1.499E-04 | global batch size: 1024 | lm-loss: 9.239707E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 24/ 1000 | consumed samples: 24576 | elapsed time per iteration (ms): 137068.7 | learning rate: 1.499E-04 | global batch size: 1024 | lm-loss: 8.807950E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 25/ 1000 | consumed samples: 25600 | elapsed time per iteration (ms): 139326.6 | learning rate: 1.498E-04 | global batch size: 1024 | lm-loss: 9.411034E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 26/ 1000 | consumed samples: 26624 | elapsed time per iteration (ms): 138753.7 | learning rate: 1.498E-04 | global batch size: 1024 | lm-loss: 1.019738E+01 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 27/ 1000 | consumed samples: 27648 | elapsed time per iteration (ms): 135832.6 | learning rate: 1.498E-04 | global batch size: 1024 | lm-loss: 8.967265E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 28/ 1000 | consumed samples: 28672 | elapsed time per iteration (ms): 137159.8 | learning rate: 1.498E-04 | global batch size: 1024 | lm-loss: 8.756670E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 29/ 1000 | consumed samples: 29696 | elapsed time per iteration (ms): 135068.0 | learning rate: 1.498E-04 | global batch size: 1024 | lm-loss: 8.835566E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 30/ 1000 | consumed samples: 30720 | elapsed time per iteration (ms): 135619.2 | learning rate: 1.497E-04 | global batch size: 1024 | lm-loss: 8.811040E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 31/ 1000 | consumed samples: 31744 | elapsed time per iteration (ms): 137837.0 | learning rate: 1.497E-04 | global batch size: 1024 | lm-loss: 8.659844E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 32/ 1000 | consumed samples: 32768 | elapsed time per iteration (ms): 135370.3 | learning rate: 1.497E-04 | global batch size: 1024 | lm-loss: 8.494865E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 33/ 1000 | consumed samples: 33792 | elapsed time per iteration (ms): 132840.5 | learning rate: 1.497E-04 | global batch size: 1024 | lm-loss: 8.415603E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 34/ 1000 | consumed samples: 34816 | elapsed time per iteration (ms): 135995.0 | learning rate: 1.496E-04 | global batch size: 1024 | lm-loss: 8.276673E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 35/ 1000 | consumed samples: 35840 | elapsed time per iteration (ms): 130121.0 | learning rate: 1.496E-04 | global batch size: 1024 | lm-loss: 8.076686E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 36/ 1000 | consumed samples: 36864 | elapsed time per iteration (ms): 134088.0 | learning rate: 1.496E-04 | global batch size: 1024 | lm-loss: 7.927558E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 37/ 1000 | consumed samples: 37888 | elapsed time per iteration (ms): 132751.8 | learning rate: 1.495E-04 | global batch size: 1024 | lm-loss: 8.049387E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 38/ 1000 | consumed samples: 38912 | elapsed time per iteration (ms): 137618.8 | learning rate: 1.495E-04 | global batch size: 1024 | lm-loss: 8.101182E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 39/ 1000 | consumed samples: 39936 | elapsed time per iteration (ms): 136129.3 | learning rate: 1.495E-04 | global batch size: 1024 | lm-loss: 8.031030E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 40/ 1000 | consumed samples: 40960 | elapsed time per iteration (ms): 125643.3 | learning rate: 1.494E-04 | global batch size: 1024 | lm-loss: 8.032815E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 41/ 1000 | consumed samples: 41984 | elapsed time per iteration (ms): 137845.6 | learning rate: 1.494E-04 | global batch size: 1024 | lm-loss: 8.030648E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 42/ 1000 | consumed samples: 43008 | elapsed time per iteration (ms): 136653.4 | learning rate: 1.494E-04 | global batch size: 1024 | lm-loss: 7.932028E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 43/ 1000 | consumed samples: 44032 | elapsed time per iteration (ms): 133720.0 | learning rate: 1.493E-04 | global batch size: 1024 | lm-loss: 7.879141E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 44/ 1000 | consumed samples: 45056 | elapsed time per iteration (ms): 134441.1 | learning rate: 1.493E-04 | global batch size: 1024 | lm-loss: 7.791877E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 45/ 1000 | consumed samples: 46080 | elapsed time per iteration (ms): 137502.0 | learning rate: 1.492E-04 | global batch size: 1024 | lm-loss: 7.738390E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 46/ 1000 | consumed samples: 47104 | elapsed time per iteration (ms): 131717.1 | learning rate: 1.492E-04 | global batch size: 1024 | lm-loss: 7.792564E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 47/ 1000 | consumed samples: 48128 | elapsed time per iteration (ms): 134668.9 | learning rate: 1.492E-04 | global batch size: 1024 | lm-loss: 7.803430E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 48/ 1000 | consumed samples: 49152 | elapsed time per iteration (ms): 134516.4 | learning rate: 1.491E-04 | global batch size: 1024 | lm-loss: 7.790527E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |
iteration 49/ 1000 | consumed samples: 50176 | elapsed time per iteration (ms): 136328.8 | learning rate: 1.491E-04 | global batch size: 1024 | lm-loss: 7.747273E+00 | loss scale: -1.0 | grad norm: 0.000 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | | |
time (ms) | |