diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/README.md b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..ebef23a168df0ed3afaa66f00507f3925139da3c
--- /dev/null
+++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/README.md
@@ -0,0 +1,164 @@
+# Habana MLPerf™ inference submission
+This directory provides instructions to reproduce Habana's results for the MLPerf™ inference submission.\
+MLPerf™ is a trademark and service mark of MLCommons Association in the United States and other countries.\
+All rights reserved. Unauthorized use is strictly prohibited.
+
+- [Habana MLPerf™ inference submission](#habana-mlperf-inference-submission)
+  - [Setup](#setup)
+    - [Prepare MLPerf Directory](#prepare-mlperf-directory)
+    - [Build and Deploy HabanaLabs Container](#build-and-deploy-habanalabs-container)
+    - [Download Checkpoint](#download-checkpoint)
+    - [Download Dataset](#download-dataset)
+  - [Reproduce Results](#reproduce-results)
+    - [99 and 99.9 Accuracy](#99-and-999-accuracy)
+    - [Get Started](#get-started)
+    - [Generate Results](#generate-results)
+  - [Performance Optimization with FP8 Flow](#performance-optimization-with-fp8-flow)
+    - [Environment Variables](#environment-variables)
+  - [Supported Configurations](#supported-configurations)
+  - [Changelog](#changelog)
+
+## Setup
+
+Please follow the instructions provided in the [Gaudi Installation Guide](https://docs.habana.ai/en/latest/Installation_Guide/index.html) to set up the environment.
+
+### Prepare MLPerf Directory
+
+Perform the following:
+
+1. Follow the instructions provided in the [Gaudi Installation Guide](https://docs.habana.ai/en/latest/Installation_Guide/index.html) to set up the environment, including the `$PYTHON` environment variable.
+The guide walks you through the process of setting up your system to run the benchmarks on Gaudi.
+
+2. Clone the Model-References repository and switch to the branch that matches your SynapseAI version. You can run the
+[`hl-smi`](https://docs.habana.ai/en/latest/Management_and_Monitoring/System_Management_Tools_Guide/System_Management_Tools.html#hl-smi-utility-options)
+utility to determine the SynapseAI version.
+
+   ```bash
+   export MLPERF_ROOT=/path/to/mlperf/root
+   cd $MLPERF_ROOT
+   git clone -b [SynapseAI version] https://github.com/HabanaAI/Model-References
+   export MLPERF_DIR=$MLPERF_ROOT/Model-References/MLPERF3.1/Inference
+   ```
+
+### Build and Deploy HabanaLabs Container
+
+To build the MLPerf inference 3.1 container, perform the following:
+
+1. Set the environment variables for the docker command.
+   * To find a docker image, go to [gaudi-docker](https://vault.habana.ai/ui/repos/tree/General/gaudi-docker).
+   * Open the gaudi-docker directory and select the folder that matches your SynapseAI version (determined by running [`hl-smi`](https://docs.habana.ai/en/latest/System_Management_Tools_Guide/System_Management_Tools.html#hl-smi-utility-options)).
+   * Navigate to the subdirectories and choose the system and framework version.
+   * Choose the docker build version; most often 'latest' is used.
+   * Navigate to the "Docker Info" tab and note the "Title" string.
+   * Set `DOCKER_IMAGE` to the "Title" string with the `vault.habana.ai/gaudi-docker/` prefix. See the example below.
+   * Example for a PyTorch container:
+     ```bash
+     # NOTE: The below is only an example value. Replace [SynapseAI version] and [PT version] to match your setup and Supported Configuration.
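+     # For illustration only (an assumed value built from the Supported Configurations table below - SynapseAI 1.14.0, PyTorch 2.1.1; adjust to your setup):
+     # export DOCKER_IMAGE=vault.habana.ai/gaudi-docker/1.14.0/ubuntu20.04/habanalabs/pytorch-installer-2.1.1:latest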
+     export DOCKER_IMAGE=vault.habana.ai/gaudi-docker/[SynapseAI version]/ubuntu20.04/habanalabs/pytorch-installer-[PT Version]:latest
+     ```
+
+
+2. Create the `mlperf-habana` container by running the following command.
+
+```bash
+docker run --privileged --security-opt seccomp=unconfined \
+  --name mlperf-habana -td \
+  -v /dev:/dev \
+  --device=/dev:/dev \
+  -v /sys/kernel/debug:/sys/kernel/debug \
+  -v /tmp:/tmp \
+  -v $MLPERF_DIR:/root/Habana/ \
+  --cap-add=sys_nice --cap-add=SYS_PTRACE \
+  --user root --workdir=/root --net=host \
+  --ulimit memlock=-1:-1 ${DOCKER_IMAGE}
+```
+
+3. Open an interactive shell in the container.
+```bash
+docker exec -it mlperf-habana bash
+```
+
+### Download Checkpoint
+```bash
+mkdir -p /mnt/weka/data/pytorch/
+pushd /mnt/weka/data/pytorch/
+wget https://cloud.mlcommons.org/index.php/s/QAZ2oM94MkFtbQx/download --output-document checkpoint.zip
+unzip -q checkpoint.zip && rm checkpoint.zip
+popd
+```
+
+### Download Dataset
+```bash
+pushd /root/Habana/code/gptj-99.9/gpt-j
+python download_cnndm.py
+cp data/cnn_eval.json /mnt/weka/data/pytorch/gpt-j/cnn_eval.json
+popd
+```
+
+## Reproduce Results
+### 99 and 99.9 Accuracy
+The same script was submitted for both the 99 and 99.9 benchmarks: no additional improvements were made for the lower accuracy target (99), and the 99.9 results were used for 99 as well.
+
+### Get Started
+Install the requirements and build the latest loadgen.
+
+```bash
+cd /root/Habana/code
+source functions.sh
+build_mlperf_inference
+```
+
+### Generate Results
+**To generate full submission results, run the following command:**
+```bash
+build_mlperf_inference --output-dir --submission gptj-99.9-fp8
+```
+The command produces results from accuracy and performance runs for both the Offline and Server scenarios.
+Logs can be found under `/output_dir/logs/model/`, e.g. `/results/logs/gptj-99.9-fp8/`.
+
+**To generate results for the Offline and Server scenarios separately, run the following commands:**
+```bash
+build_mlperf_inference --output-dir --submission gptj-99.9-fp8_Offline
+```
+
+```bash
+build_mlperf_inference --output-dir --submission gptj-99.9-fp8_Server
+```
+Logs can be found under `/output_dir/logs/model/scenario/`, e.g. `/results/logs/gptj-99.9-fp8/Offline/`.
+
+**To generate results for accuracy and performance separately, add the `--mode` flag as in one of the following commands:**
+```bash
+build_mlperf_inference --output-dir --submission gptj-99.9-fp8_Server --mode acc
+```
+```bash
+build_mlperf_inference --output-dir --submission gptj-99.9-fp8_Offline --mode perf
+```
+
+Logs can be found under `/output_dir/logs/model/scenario/mode/`, e.g. `/results/logs/gptj-99.9-fp8/Offline/accuracy/`.
+
+## Performance Optimization with FP8 Flow
+To optimize performance, we set performance-critical ops to operate in fp8-143.
+
+All fp8 ops work with a fixed fp8 exponent bias of 7, so no scaling is required.
+
+### Environment Variables
+The following table outlines the custom environment variables used in the GPT-J submission script:
+
+| Environment Variable | Effect |
+|----------------------|--------|
+| PT_USE_FP8_143=1 | Sets the PT backend fp8 flavor to fp8_143 |
+| UPDATE_MME_OUTPUT_PRECISION_FILTER="v_proj,matmul_av" | Allows the specified MME layers to output fp8 for performance optimization. |
+| SCALES_FILE_PATH=quantization/measurements/per_tensor_scales_gpt_j.json | Loads per-tensor scales required for fp8 quantization. If not provided, no scaling is applied. |
+| ENABLE_EXPERIMENTAL_FLAGS=true | Enables the above flags |
+
+## Supported Configurations
+
+| Validated on | SynapseAI Version | Framework Version(s) | Mode |
+| :----------: | :---------------: | :------------------: | :-------: |
+| Gaudi2 | 1.14.0 | PyTorch 2.1.1 | Inference |
+
+## Changelog
+### 1.13.0
+- Published MLPerf™ inference 3.1 GPT-J script
\ No newline at end of file
diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/accuracy_from_perf.config b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/accuracy_from_perf.config
new file mode 100644
index 0000000000000000000000000000000000000000..5b5fcb34a23b9dc2ab02d6dae37f53db76ae2fd2
--- /dev/null
+++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/accuracy_from_perf.config
@@ -0,0 +1,8 @@
+# The format of this config file is 'key = value'.
+# The key has the format 'model.scenario.key'. Value is mostly int64_t.
+# Model may be '*' as a wildcard. In that case the value applies to all models.
+# All times are in milliseconds
+
+# mode dictionary (0 = submission, 1 = accuracy, 2 = performance, 3 = find peak perf)
+*.*.mode = 2
+*.*.accuracy_log_sampling_target = 24576
\ No newline at end of file
diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/functions.sh b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/functions.sh
new file mode 100644
index 0000000000000000000000000000000000000000..3faaac2bf6cd60dfd087eaffbf39281fda482b95
--- /dev/null
+++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/functions.sh
@@ -0,0 +1,93 @@
+#!/bin/bash
+
+###############################################################################
+# Copyright (C) 2023 Habana Labs, Ltd. an Intel Company
+###############################################################################
+
+[[ $0 != $BASH_SOURCE ]] || echo "This script must be sourced!"
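+
+# Usage sketch (illustrative, mirroring the README): source this file from /root/Habana/code inside the container, then run, e.g.
+#   build_mlperf_inference --output-dir /path/to/results --submission gptj-99.9-fp8_Offline --mode perf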
+ +export MLPERF_INFERENCE_CODE_DIR=$(realpath $(dirname $BASH_SOURCE) ) + +function mlperf_inference_usage() +{ + echo -e "\n usage: build_mlperf_inference [options]\n" + echo -e "options:\n" + echo -e " --output-dir Path to save logs, results and summary; optional" + echo -e " --skip-reqs Skip installing requirements, downloading MLCommons Inference and building loadgen; optional" + echo -e " --compliance Create a submission package compliant with MLCommons submission checker; optional" + echo -e " --submission List of scenarios to run; optional" + echo -e " -h, --help Prints this help" + +} + +build_mlperf_inference() +{ + output_dir=$(pwd)/results + submission_args="" + compliance=false + skip_reqs=false + + while [ -n "$1" ]; + do + case $1 in + + -h | --help ) + mlperf_inference_usage + return 0 + ;; + --output-dir ) + output_dir=$2 + shift 2 + ;; + --compliance ) + compliance=true + shift 1 + ;; + --skip-reqs ) + shift + skip_reqs=true + ;; + --submission ) + shift + submission_args=$@ + break + ;; + --precommit ) + shift + submission_args="gptj-99-quick" + break + ;; + --promotion ) + shift + submission_args="gptj-99-quick" + break + ;; + esac + done + + if [ "$skip_reqs" == "false" ]; then + pip install -r $MLPERF_INFERENCE_CODE_DIR/gpt-j/requirements.txt + + BUILD_DIR=$(mktemp -d -t mlperf.XXXX) + pushd $BUILD_DIR + git clone --depth 1 --recurse-submodules https://github.com/mlcommons/inference.git mlcommons_inference + cd mlcommons_inference/loadgen + CFLAGS="-std=c++14 -O3" python setup.py bdist_wheel + cd ..; pip install --force-reinstall loadgen/dist/`ls -r loadgen/dist/ | head -n1` ; cd - + popd + fi + + if [ ! -z "$submission_args" ]; then + pushd $MLPERF_INFERENCE_CODE_DIR + if [ "$compliance" == "true" ]; then + python run_mlperf_scenarios.py $submission_args --output-dir $output_dir --mlperf-path $BUILD_DIR/mlcommons_inference + python prepare_and_check_submission.py $submission_args --output-dir $output_dir --mlperf-path $BUILD_DIR/mlcommons_inference --systems-dir-path $MLPERF_INFERENCE_CODE_DIR/../systems --measurements-dir-path $MLPERF_INFERENCE_CODE_DIR/../measurements + else + python run_mlperf_scenarios.py $submission_args --output-dir $output_dir + fi + popd + + fi + + rm -rf $BUILD_DIR +} diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/.gitignore b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..b88a2f417ce03174ec19d40d4b76989117a942e8 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/.gitignore @@ -0,0 +1,5 @@ +prof_* +.graph_dumps/* +__pycache__/* +build/* +data/* diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/DATASETS_MODELS.md b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/DATASETS_MODELS.md new file mode 100644 index 0000000000000000000000000000000000000000..c323a3b7d144125d312ffbcb9f15cdda9e70bb4c --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/DATASETS_MODELS.md @@ -0,0 +1,8 @@ +# Datasets + +This is a comprehensive list of public datasets and models used by this repository. 
+ +| Name (Link/Source) | Framework | Use Case | +|--------------------| --------- | -------- | +| [cnn_dailymail (Hugging Face)](https://huggingface.co/datasets/cnn_dailymail) | PyTorch | Text Summarization | +| [gpt-j-6b (Hugging Face)](https://huggingface.co/EleutherAI/gpt-j-6b) | PyTorch | Text Summarization | diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/LICENSE b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..25ae4110625608b553d170b6bb5c439215503afe --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/backend.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/backend.py new file mode 100644 index 0000000000000000000000000000000000000000..c0673c229eac56654ee82da174d6d2ee3f907300 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/backend.py @@ -0,0 +1,314 @@ +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. an Intel Company +############################################################################### + +import time +import math +import array +import statistics +import torch +from contextlib import contextmanager +from transformers import AutoModelForCausalLM, AutoTokenizer +import mlperf_loadgen as lg + +from dataset import Dataset +import habana_generation_utils as hgu +import modeling_gptj as hpu_modeling_gptj +import quantization.quantize as quantize +from torch.utils.tensorboard import SummaryWriter + + +gen_kwargs = { + "max_new_tokens": 128, + "min_new_tokens": 30, +} + + +def setup_pt_profiler(schedule): + activities = [torch.profiler.ProfilerActivity.CPU] + activities.extend([torch.profiler.ProfilerActivity.HPU]) + + profiler = torch.profiler.profile( + schedule=schedule, + activities=activities, + on_trace_ready=torch.profiler.tensorboard_trace_handler('.', use_gzip=True), + record_shapes=True, + with_stack=True) + return profiler + + +def setup_hltv_profiler(schedule): + import sys + import os + sys.path.append(os.environ['PYTORCH_MODULES_ROOT_PATH']) + from topologies.tools import SynapseProfilerApi, TraceType + api = SynapseProfilerApi() + + class SynapseProfiler: + def check(self): + if schedule(self.cur_step) == torch.profiler.ProfilerAction.RECORD_AND_SAVE: + api.profiler_start(TraceType.TraceAll, 0) + + def start(self): + self.cur_step = 0 + self.check() + + def step(self): + self.cur_step = self.cur_step + 1 + self.check() + + def stop(self): + api.profiler_stop(TraceType.TraceAll, 0) + api.profiler_get_trace_json(TraceType.TraceAll, 0) + + return SynapseProfiler() + + +def setup_profiler(step, profile_type): + active = 1 + warmup = 1 if step > 0 else 0 + wait = max(step - warmup, 0) + + schedule = torch.profiler.schedule(wait=wait, warmup=warmup, active=active, repeat=1) + + if 
profile_type == 'tb': + return setup_pt_profiler(schedule) + else: + return setup_hltv_profiler(schedule) + + +class SUT_base(): + def __init__(self, args, options): + print("Loading PyTorch model...") + self.dataset_path = args.dataset_path + self.model_path = args.model_path + self.batch_size = args.batch_size + self.input_length = 1919 + self.max_length = self.input_length + gen_kwargs['max_new_tokens'] + 1 + self.profile = args.profile + self.profile_type = args.profile_type + self.inference_times = [] + self.tb_writer = SummaryWriter() if args.enable_tensorboard_logging else None + self.is_eager = args.eager + + gen_kwargs["num_beams"] = options["num_beams"] + gen_kwargs["early_stopping"] = options["early_stopping"] + + if args.device == "cuda": + assert torch.cuda.is_available(), "CUDA device is not available!" + elif args.device == "hpu": + import habana_frameworks.torch.core + assert torch.hpu.is_available(), "HPU device is not available!" + self.device = torch.device(args.device) + + self.model = self.setup_model(args) + + self.hgu_opts = hgu.GenerationOptions( + max_length=self.max_length, + min_length=self.input_length+gen_kwargs['min_new_tokens'], + max_input_length=self.max_length, + **options, + ) + if self.profile: + self.hgu_opts.max_iterations = args.profile_tokens + if args.dtype == "float8": + self.hgu_opts.kv_cache_fp8 = True + + self.tokenizer = AutoTokenizer.from_pretrained( + self.model_path, + model_max_length=self.max_length, + padding_side="left", + use_fast=True,) + self.tokenizer.pad_token = self.tokenizer.eos_token + + self.data_object = Dataset( + self.model_path, self.dataset_path, total_count_override=args.max_examples) + self.qsl = lg.ConstructQSL(self.data_object.count, self.data_object.perf_count, + self.data_object.LoadSamplesToRam, self.data_object.UnloadSamplesFromRam) + + def setup_model(self, args): + if self.device.type == "hpu": + model = hpu_modeling_gptj.GPTJForCausalLM.from_pretrained( + self.model_path, + low_cpu_mem_usage=True, + torch_dtype=torch.bfloat16 + ) + else: + is_gpu = self.device.type == "cuda" + model = AutoModelForCausalLM.from_pretrained( + self.model_path, + device_map="auto" if not is_gpu else None, + low_cpu_mem_usage=True if not is_gpu else False, + torch_dtype=torch.bfloat16 + ) + + if model.config.pad_token_id is None: + model.config.pad_token_id = model.config.eos_token_id + model.to(torch.bfloat16) + model.to(self.device) + + if self.device.type == "hpu": + if not self.is_eager: + import habana_frameworks.torch.hpu.graphs as htgraphs + model = htgraphs.wrap_in_hpu_graph(model) + if args.quantization_file: + model = quantize.setup_quantization(model, args.quantization_file) + return model + + def warmup(self): + print("Warming up...") + dummy_tensor = torch.ones([self.batch_size, self.input_length], dtype=torch.int64) + input_batch = { + "input_ids": dummy_tensor, "attention_mask": dummy_tensor.detach().clone() + } + input_batch, _, _ = hgu.prepare_decoder_only_input_without_moving( + self.tokenizer.pad_token_id, self.hgu_opts, input_batch) + + t_start = time.time() + _ = self.inference_call(input_batch).cpu().numpy() + t_end = time.time() + print("Warmup took {:.2f} ms".format((t_end-t_start)*1000)) + + def issue_queries(self, query_samples): + num_samples = len(query_samples) + batches = math.ceil(num_samples / self.batch_size) + print("Number of Samples in query_samples : ", num_samples) + + profiler = None + if self.profile: + profiler = setup_profiler(batches - 1, self.profile_type) + profiler.start() + for batch_id 
in range(batches): + start_index = batch_id * self.batch_size + batch_size = min(num_samples - start_index, self.batch_size) + + input_batch = self.prepare_input_batch(query_samples, start_index, batch_size) + input_batch, _, _ = hgu.prepare_decoder_only_input_without_moving( + self.tokenizer.pad_token_id, self.hgu_opts, input_batch) + + with self.measure_and_save_time(batch_id): + output_batch = self.inference_call(input_batch).cpu().numpy() + if profiler: + profiler.step() + + self.send_responses(query_samples, start_index, batch_size, output_batch) + if profiler: + profiler.stop() + + def prepare_input_batch(self, query_samples, start_index, batch_size): + indices = [ + query_samples[start_index + j].index for j in range(batch_size) + ] + while len(indices) < self.batch_size: + indices.append(indices[0]) + + input_ids = [ + self.data_object.source_encoded_input_ids[index] for index in indices + ] + attention_masks = [ + self.data_object.source_encoded_attn_masks[index] for index in indices + ] + return { + "input_ids": torch.cat(input_ids), "attention_mask": torch.cat(attention_masks) + } + + @contextmanager + def measure_and_save_time(self, batch_id): + t_start = time.time() + yield + t_end = time.time() + time_taken = t_end - t_start + if self.tb_writer: + self.tb_writer.add_scalar('batch_time [seconds]', time_taken, batch_id) + print("Batch {} : {:.2f} ms".format(batch_id, (time_taken)*1000)) + self.inference_times.append(time_taken) + + def inference_call(self, input_batch): + with torch.inference_mode(): + input_batch_lengths = [x.shape[0] for x in input_batch["input_ids"]] + + if self.device.type == "hpu": + initial_ids, beam_trace = hgu.generate_on_prepared_input( + self.model, self.hgu_opts, input_batch, self.max_length, self.input_length) + output_batch = hgu.finalize_beams( + initial_ids, beam_trace, self.model.config, self.hgu_opts.length_penalty) + else: + output_batch = self.model.generate( + **input_batch, **gen_kwargs, pad_token_id=self.tokenizer.eos_token_id) + + output_batch_truncated = [] + for data, source_len in zip(output_batch, input_batch_lengths): + output_batch_truncated.append(data[source_len:]) + output_batch_truncated = torch.stack(output_batch_truncated) + return output_batch_truncated + + def send_responses(self, query_samples, start_index, batch_size, output_batch): + responses_array = [ + array.array("B", output_batch[i].tobytes()) for i in range(batch_size) + ] + bi = [ + response_array.buffer_info() for response_array in responses_array + ] + lg.QuerySamplesComplete([ + lg.QuerySampleResponse( + query_samples[start_index + j].id, bi[j][0], bi[j][1] + ) for j in range(batch_size) + ]) + + def flush_queries(self): + pass + + def close_log_file(self): + pass + + def __del__(self): + if self.inference_times: + mean = statistics.fmean(self.inference_times) + print(f"Average performance: {self.batch_size / mean:.3f} samples/s") + + if self.device.type == "hpu": + from habana_frameworks.torch.hpu.memory import memory_stats + GB = 1024**3 + memory_stats_dict = memory_stats(self.device) + max_in_use = memory_stats_dict['MaxInUse'] / GB + limit = memory_stats_dict['Limit'] / GB + print( + "HPU memory usage: {:.1f} GB / {:.1f} GB ({:.0f}%)".format( + max_in_use, limit, max_in_use / limit * 100.0 + ) + ) + print("Finished destroying SUT.") + + +class SUT_Offline(SUT_base): + def __init__(self, args, options): + SUT_base.__init__(self, args, options) + self.sut = lg.ConstructSUT(self.issue_queries, self.flush_queries) + self.warmup() + '''IssueQuery and inference 
methods implemented in Base class''' + + +class SUT_Server(SUT_base): + def __init__(self, args, options): + SUT_base.__init__(self, args, options) + self.batch_size = 1 # batching is not supported currently in Server mode + self.total_samples_done = 0 + self.sut = lg.ConstructSUT(self.issue_queries, self.flush_queries) + self.warmup() + + def issue_queries(self, query_samples): + input_batch = self.prepare_input_batch(query_samples, start_index=0, batch_size=1) + input_batch, _, _ = hgu.prepare_decoder_only_input_without_moving( + self.tokenizer.pad_token_id, self.hgu_opts, input_batch) + + t_start = time.time() + output_batch = self.inference_call(input_batch).cpu().numpy() + t_end = time.time() + print("Sample time : {:.2f} ms".format((t_end-t_start)*1000)) + + self.send_responses( + query_samples, start_index=0, batch_size=1, output_batch=output_batch) + + self.total_samples_done += 1 + if self.total_samples_done % 5 == 0: + print("Completed : ", self.total_samples_done) diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/backlog.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/backlog.py new file mode 100644 index 0000000000000000000000000000000000000000..17e284e34d5e9a0e22f080d6c9d024403748f115 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/backlog.py @@ -0,0 +1,56 @@ +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. an Intel Company +############################################################################### + +import bisect +import itertools +import time + + +class Backlog: + def __init__(self, buckets, key_fn): + self.buckets = buckets + self.todo = [[] for b in buckets] + self.key_fn = key_fn + + def find_bucket(self, key): + key_tuple = (key,0) + return bisect.bisect_left(self.buckets, key_tuple) + + def add(self, queries): + for q in sorted(queries, key=self.key_fn, reverse=True): + self.todo[self.find_bucket(self.key_fn(q))].append((q, time.time())) + + def next(self, max_size): + starting_bucket = self.find_bucket(max_size) + for bidx in range(starting_bucket, -1, -1): + while len(self.todo[bidx]) > 0: + yield self.todo[bidx].pop(0) + + def next_n(self, max_size, n): + return list(itertools.islice(self.next(max_size), n)) + + def __len__(self): + return sum(len(b) for b in self.todo) + + def get_load(self): + return [(b[0], len(t)) for b, t in zip(self.buckets, self.todo)] + + def get_max_wait_time_from_bucket(self, bucket_size): + bucket_idx = self.find_bucket(bucket_size) + if len(self.todo[bucket_idx]) == 0: + return 0.0 + return time.time() - self.todo[bucket_idx][0][-1] + +if __name__ == '__main__': + import random + buckets = [256, 512, 768] + queries = [(random.choice(['A', 'B', 'C']), random.randrange(buckets[-1])) for _ in range(16)] + + backlog = Backlog(buckets, lambda q: q[1]) + + backlog.add(queries) + print(backlog.todo) + print(768, backlog.next_n(768, 3)) + print(256, backlog.next_n(256, 16)) + print(backlog.todo) diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/calibration-list.txt b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/calibration-list.txt new file mode 100644 index 0000000000000000000000000000000000000000..fc5d258845a6b588caa084b215781386f42cb143 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/calibration-list.txt @@ -0,0 +1,1000 @@ +eceaa658027ad9625f832368198e11bd2fa38977 
+70052e55c12c97a9bf6796a25b6ced8f3ec4be06 +9767fdf0a53da6ee9e2f75582cac5964d80e1b5d +1f8c736647d06c42beb553b25a02e44ca15ca0fb +d3ce7d615ecc15f094d8130654812ad77cd604a3 +55086c3f69cb41b991d3db0c6b10b0aa374788b4 +2745f93afca3edf25dd9ccfd094eef06298f62cb +343644770a597a2dfa7548ba165fa9c6bdc88245 +e2cecb8734918ac6a2d9cc8afcfafb16b1781ae2 +feba32aa9aa3b51fb451bc48a54e78d02efe977f +9c2e4d2f6085ef9f237e6fe1baf83000a264cf93 +d85158494b7041560466f153c4d050362f90a7e6 +1e14852c49e84434ca249951e0fe603610eb68f6 +369d721d1102f0cad726ad3426d79c965a224b28 +b9898d6014353a7411c0cec222996431c832c35f +7cbe104b3203061bb544267879fa316436a1ab5f +f48a6b4fa0827b4c6324bd47dc2e8954141b1a6a +acb5ce76c230bc66633414678bf254387c3d6c85 +d70d5115ec3adc5fb3aee8b1e29c7f0f2db083be +ffbe89f592457d39ab9c28de4fd89fbac2150f81 +d841808ba87a4aabbfe4427c53eb0e2e8a74995c +2d4125c6162f9b4924a262a55bd8fe3faad0b3c7 +95fbe3b3a7e5fb6fa48289885df25d9a6e911d2d +f6ffa98e7d46000bee325e5284a2ac897ba4149d +31e424f7a6fe1f5ec61486eec17a85c78ca2ce8c +2165fd490b9d14ce0cd3784beb2f4a1d10028a1d +4d6b1a85d264397e956c0a03f70de123ed4fff5f +4d20111e71a06547613e161292a51cc44eb74da0 +b90b35dfde9fc08fe1651e36bc02a3f1503e3b6e +2d3b2eb21a6691c764aaa1f27030453fc44331ab +dbf02196bae40e0adbcd1790294748a560b1e45c +0ef5c715acd7a70f51a9d800c9e01bfe69657bed +f0f65f40fc56b69bbfab5d88a157dc52ad967484 +db3575fd124f65a7aeee7b4512b5e0fbebf2c8ea +1234fafb7b6ecc9224d6d979536115771467f4ae +c31f79babaf4d93609ef3ee2966e25428a4fc130 +600b619001a9d840f5bb5ee3362787ee38df26fd +5e68842017bc6f5806d962ae5ddea8490d763b82 +fa87334256af163f8f56d8035d1be4c7909147e9 +826f2414623f8e444725f1c9af593f27b440ebdc +3603896c0fbb49d6231ced092ff7d057db2c43f1 +4b8505e0219b78f9645fb12177d97b8e29821ee5 +3332226f8b4f6c46ed3c66ad0765c3720475254f +97223b7119e264a598afe67e0ba82fbb97fedd2b +87fd2fd13729ba13920591bcc96a83ddf61625e0 +2160c5d812611becf442b5079a7908c2f48f6de7 +559d3b10273acbd4b98ff5557aee92f33e50129d +273c1d2936592874fb599f39dce8d6a0813a49b3 +3e6affd8cc6ead338996574fe1d0cb03ca983ea2 +3b733db6e80a89bb4062c1e661b9f9d4690ea0c8 +4a0f219f4b67d9cda499b71e3ee54bff5736f8c1 +064396600b73dc4418ef783fc82f4fe1ff038f6d +eee1cbcef13cd16096734414da992aa1a7048bee +e190b373844d5d3a59b9abb867de8f0fdffddeda +8700aeab383d481d613759f19717ea4381df1433 +087b05a98112135d9fb1c7628fffb72ae2456e9e +f69c5a3c9ef4bfb195ad7ce2590021121a7afced +82958d258a7fe963f8c9a001010365437bf15fc2 +b6b37b9bc60519fd214d6e980fcbb16da066eb68 +a49ac163d47c1e690f5d3237433194a9d0ab558a +aa35fa6f613b29bf80806552b1bf260f04bbedc2 +c248fc3e54b451a4117f23f5adc88cb8067be3aa +f21eae7e796721088234b885bc6eae0daef05738 +b5c4d6f671adfb997abb53a5f2f54519180df7b5 +457b2ab2b4edb94c4b67c1219451be80dc369e8b +e80b0028e44685e39581ced42c1e1ed9cf44f74e +c2d90734f9228cf3163187ad72405c90bb09d13b +a999f5732f9bbe0991e6e895f9bfd42bdda75bf1 +cea6ac133923e62b186aeb17db37be6640a81200 +7facc85e37ababb8c029257f246fe0934f84a808 +21dcf444d4bec9d4b2a6ffb54112c4cbb797025f +f880779ad1b262aac438a8cf3a6df9c0ecebdada +7313410020b93dea1f223d2ffc0b997385d7886c +b0b2948eac6b4e082bbd420da8dff3de6a187377 +360b51c738cde3fa09cef18c3d7672a1d20d3379 +cdaa77a96e1d96a672548b6dc0bd83bffe6f1619 +cd2cb113b1cd90e2ad235466df3a64dfc956877a +7140dd21ed480a3f47a59c647c1f4e690939caf0 +f2c9d3d8f0622e67574f386197b92570730fa61c +010cd75a5b587285f7697cdb6db6526bcc0320b2 +86983b2cddcc91369ad7d4ff61c9e6d258c78b71 +bad5a939cc0d695a97e7f3fedac53c93f04c3253 +0b8cb37a8d54e1761b3d99b8a6e6f921f07e00ae +613111ac2a3831a7291656bee2def306453552d4 +0e29e9cf2a08c4b35ba840bde03b0537e3821a74 
+2f425306562bf94bf5f4567b8c63c5b204a2c414 +8bdc6411954d7163b137c4970f6d6431aeeb9ee6 +be9ef18a4a0a08f94a0340b2df0d0f83144299b0 +cafdb2911d9af6038659354058586d6bd5174338 +0c14f1a16ad395dd9aff4437452cf555ae8858d2 +c3db5ce828128fc91c5bbf59d144e825f49ad271 +273221df42c8dd9e0296cf3ec203c59fa205ecd2 +5abe61623d56bbb0d7bbc79f5ffa96732b3a1d97 +98d6f13e4f3ed36591298430dcb49bddf89003b5 +18686c211cc5f466c48fb8dfaf5eaa00cf3ef0ee +d85b6f625e9e57efaa1e3c21e626dd25e3414758 +69e2ec5703156e86990ba690b2a9d4dd0f5733b0 +cc665a9316a9b22400183776d396940ffeea2fde +319a27f92609641ee0f0fe1f6a88a9aab739b98a +94817c53b53b3979c3d32d197ec26dc489921e6b +c803d0b957abefb26e1fe0ad87aea5fc80180a20 +6c5b9c68e7890f4b683e148ac62dace194b45b59 +b377a0f9dac858e2bf7271bac531591140a56e33 +6604ac30dd3d9a29faf40def3e4549feab4d9d02 +b7fc4ea36690353003cd172e597c456534bb2811 +b42286dd9b577bcc261e838045e51133266f7fce +73f9e1772ab372f36898dc95ef18711434827717 +838a481841bf69bbe3ac8a3b53da54d692ffd084 +0a1c17d84f846eecce39356447e4d556ffdd07fa +98f8a09b98daa05cd412e83d724e4b914e8d921f +f859c4b5c39653753b21c575a98d525fc47c16bf +2abf786570b9f9de945c70d9678cb67dd2a2e57e +f4a4d960ecfb87fe06e38f56310a14d23caddb42 +d96bcf6baf6cf8c61d54ad9c9abd997bab870077 +c6eaf9d97b059f3e824a1ab4ffdfe45494e5f8a1 +8bbbd3d05e22fc3372abdbb471799bd5f5380a75 +09dba9542ee64697d789d47ecd3bd53bc8b0d953 +7d42400c49f9cb313345f7d9ea74c51d0147a4f8 +1f58239bc2fa91c3787bda0c2b9ca5cffaa7510d +a0e187806f8ec8f02e0d329ac114ac44fe69a4b4 +ad2e90052e15364c93011386322f6a5007314348 +adba3dadc7c6a3177346ba9119a8df8f5b81ae0c +af2d712599be471d1ba0b91fa18c347220ca595d +77c610007679f13e1f5d314ddcf4b14c7e57876b +0b1d14b9f6619ced37003ff77f22dbd122fca645 +310fe57a8497cb5215c1d70d9f9a22ab91d5c054 +7056abbadea0a2eec779e157890219171bc98938 +3863aca32c99daf6ef8a0de6f60471bd9c54b885 +b32151c4d36a4b42e9b832e14f539396627f8eae +f72c81d6c240a6dc8395c6ef33b64edc423b1fae +4409a8f066166ba9aad02611f6979b44fc91afae +75474ebca7f3413b4e814794d6ffc13663120bae +55e430f0c68f6f0c4cc996fcb87c2e53233e2738 +5dc954a346ff3dba1629e03f3e6485235d6d4742 +95a92435cdb8f2b51ede4ce6220c5613c8dbfc2c +a39a529543dcbd6c0088cc301bd82173feb5f18d +2b8179710437421d5a1f0d281515725ccebff3f3 +087f4bd441c0032caec0a1f65a139d336a09d133 +3636a522fd19a2ae4ff514319d5c1fc012c4bcb6 +c5f38edba57d815658097ba5ebe37532ca160d7f +b0f17178ff8d37e5343119bd8917e262f386203d +dedac076c1649b0edf55353e9bd374c0cd4ad956 +9a3e0978753d5354eaebdcec8550641314c71b83 +0c9881539c0c5249e911dd70b37cf7f74327b97d +80e02450839ab9c1a08082e404c6f0398ae2e92f +6e85c4357ae3dccaae1b354641d22a359a100d47 +b05f9fa99ca30d7ce2611a6deb139f2274d1ad3b +306a3efeb8ab8919079525b9aa747093bdcab563 +98c31464a9052c1ccddc9cbb71c2529f3fba6f4d +35ffb93ea7e6cb006d5019185815f03c67b94d77 +2f3f088adba0256b27e6da9efce4293106e34291 +02d287e484a9da84424d10eeb0c8f3ae52cbc70c +33dedcbe9423f6031122f8be1f7c2c69ea4ad4ad +03765604d9073697904c2dc4cf29e90b924f36f0 +d2845ecaded68fdc5b372d10c3663441ec8b358d +4cd67a6ead5211ad92e1faaaf71ef28f7da2f593 +806333db217efd7e2a4562bb73e695bad88e712c +1307f117d423d143e6083faa99255c2bf2a2f3fd +c92b96b8114ec521af30fc090cef40c07f9544d6 +04b3d6adf722b3b33524c8d288c74e7db2632a2a +ee5db06cf8ca3010774965c3674c273c680c1611 +709e0af32c2463474cb8ffb85d2dbc07960037c5 +d503ee11d8f7d43c67841ba1b6bd863a6180a223 +709555f0f163e09098b58d03898a9e0d6e7ca0b2 +010f73477cd20c14cab78ad9cef350ac8c0f55b3 +9e95f4744d105fbcbc32a12db7287cb64254325c +6e47b7f6a76cc2728e61f4bdd30bef697d6490d2 +5620139a78269335505edf23a902bb0c9c264e3e +70908608fd62696f99cb3f7a185b226fe32e475d 
+0e907ac0dd02b47f4a0726790d01f0c57037ec2f +bab2d77edbb5ffb3a7938f16fabd7ad3cc83fae9 +f5d5c855f2e708067e3532980aef101d20c40cab +c50895475b8a401824cc9f1bfaaa8fd7797e172c +98690365a6ab1e82c25ca08c26db63a834c21fb9 +3368d60307efa2661820d3240854967fbbf6fbc9 +1f7b9e38af5bfddef1649c83b119e45063bbee34 +3fc0feedd683b49702d0da9d7d3c36b7be02ca09 +2014aa562c1d05dbbba727db120d9e163fb8f43a +e22249a8da0886b4c1338dbf2e54b766b13f4db6 +8344960457778ea0a4fdbba33e7eebb69aa979bf +2c7fb7b897db7f304961e919cd5ef1a5a93877a4 +e2b409c9d97825acad579abe22e0a37b685d6ca9 +b2fadf8584ecefe2a32cc2eba6590d10dc8a9d26 +5f201c66bf26298986c3dd2aa84818a312a596e3 +8dffb657ffd1e331b99cc00cebf18645e219da12 +3236d66fb8a63916b6fd00c2f2ec417b5cde01b6 +95cf024d3f1e40344f16cf4faab052d6fb1e60cb +9faac843955464da41331af273942e38561c9a8f +d383fbcb1a69ef97a660318a2b36486e5fdd6a44 +9939959cf9cb1a14497e63aec0b88a08ad3e451c +3c0c3ce2681718b816289eeeb3ac343ddc037fc4 +259d8cccfcb9b9edc00d757ec6efecde6fc06110 +9057ca8e09723c9959f923a412e409ee793d0062 +79f5ac0831ef03c2ebb40d325758350937a55313 +3f36f6f4d3317275130051db2405459021f56b8b +e35c3d07dd54243acf4298ee0ce6ea7e4621e90f +115a59d5c4916cb14b4c408bec36bbc6116043cc +5d563efeae0cab135ec70ae4456a4e55bf598aff +2d8d73eade954a63f892414accaf2db229ff3312 +d42bd8a35e147633d3d750266939c6539aecece9 +27155662fda1f5febdbb42e6572dda8d9e31588a +a210a653a08fd0460b52c7eb68bdbde0c40ea63b +fb4be2d8538e5e4418042eb7d81491dc7e94dcc5 +53940e5d960d1b63e5ec84fae802fcd599b20f01 +213dc667b6c665a4257c4afef5e5fd39d42eb01a +08a16f7ffb9968774fe4562acfb79aa6a1a59a2b +d1caa4726d8ac1d9ad611708038db896828f06f7 +67e3d20bbadc184c57efe184ce8ccc402de23bed +ae05bddb7e816fd0e14e95cc525e06caf9392918 +b2bec4804d38db4d01520c4b65f410acb20e4d2a +78a8c13605a8eda09a0ac0f04910b414eed6b765 +ebdd1d2b3891d6f0de29ffa1eeed3f03bbef7912 +d54da603155d9e507b81d7188e1baee2f984a99a +18823ffe4e7d30056229c6b0c3b71f9c72c1d2e1 +86b8d10094b19ab1059b5dd7983f26fc2bb133ca +a16c62ef8dfc132a0a5c406e429a08e1d40b8756 +64e19e8802e2f598c5a84858a4b2c0c43b99877b +3fe31bfa86777b3f4a1bcbb46650f683fa477935 +d6e9929980eb730124e8cf6561991d43f19241e8 +20ffd27dc3be9eb895fe8a5ae3cffcd795ad100f +3960b049e19e3217968723430f3595fb1d4e1dab +846e738db5d5df03f621e5cca067016e84327f16 +2c20a17cc4846b8dc437fd00f84d08cd15d0c8d4 +16cbd9a93ee9067271748479378a31d24390e048 +77677862965b241d7b9c4ea61836ccf09b3e37a4 +3ec8db9d06345bf26aad0ccfd05408880946f4a7 +3747faa432e732538f1636c9aca56f068ec44a4d +f1959058e2074a54c0bddf7afc60131df132415a +216558f2fb3e918840acc2fca7c81f27c7a80e3f +1c7e55eaf41d1e43121755c1cd667d210e45a000 +ce5c91d45d83f9f114814c8db9a1230b2d79eb02 +e735d473af54e1ff29a66b379fff9e88ccd8a164 +7809602d8d9398f05b032bc399a922af1567c56a +e85a828dc7853ddcce5d7d919b07370236fd089d +0f81b75410062d52138ab8a67ae49d03321e991f +c9bfa4787bab601fe2e0749b4fb1e44d3f168373 +c03e79ef13869270df1be0c63ae86dddb7c21bf9 +91e1856c8de122ef09c10589afb9b3728bba9296 +0a661af9686af6c8f298c8309e8e1a96ef0cc08f +08102a4509565732289f843007d08cfa72ea5456 +9b9b7248f513f621089a6cdb956828a3fa6da09b +7e4ec1b8fa3a477f43f00075da2ed26a31db45f2 +4c5589e14718f8d6ef4027baa22b680f556d9ce9 +f63c67c039b3bfb83b3d46f4250e3509c2e9394a +8500584842b1a7abfce6a2453fb9f76c5b39d26d +5f43c3db85393d73b57174a6e3c72884cf1402e6 +1401f556e033d9f10dbdf83e9b5bfcf6a84823d1 +ff7e6d2a2c5fca5f33db717bd68228538fa09f37 +9d6addd57bfee73721c64830eb2d0fd27e8fb9bb +a55a50b6cd898fc79bf4657fef0f0ad44de6a5fc +6af08a408468481f5847013cd8b7f9c0ec7296ec +7d54beb04ea368c6386dc8174ffa1915b3414bb9 +93456d2e7f067d518838df8cd7f32ee85289f4bd 
+d603f66ebc365627756eab740140ed43f0e5f40d +e9217085cfc52f0fc47d91f2feb681a33f88fb59 +d0912f63112be8069398b3f6c926c727469f1191 +2d1edae390d9f079095606c8bed0a83f5bd5d767 +b6136dd5f245f26dece12bf294d524bf584bed69 +c4a4d6e24e7753e098e09324e903c3fc2cb45f74 +8f49dee3dcf6b505e43475e3b7c15a5e25f0d85c +edc1c91f5eb0547c18877e123cf3ec248ac734d7 +f9f269f3df343d14b11c40286b22f2c54d74d8af +99c98449dd5a99222dae7cfb14bc060852f220e4 +017d27d00eb43678c15cb4a8dd4723a035323219 +61e137b37db0b3157c04fab0a5f4314fe4b03931 +43f54e39221310d45fed028b202e0e26490846be +ff5b1552320e183941d8d58f726f589324035284 +f140814244c9e54cf4ff2085d7d52b2dd87d2737 +0264d85da73237f1967bcab20b2f99313a00250e +7d2215881b5399038a625726794c523be20e567a +77810c807b3c7452a00968927dc8b3b76c2aaf63 +361864a24f139d975cb02736e81d106b6b50de37 +5b08a44dcfaa7da30b066b62e688177ae4c27bc6 +f0bb651f7498ac35c750d4216b3fbdc1c6e83508 +0865448cb045a8b9568e679dbdb5b752ba0e38fc +afbe85965b4aae74bc86d5c56c75fe55e782c7a0 +c4e68babe61c2389be350f11dfc8e2c5ddc9f032 +7de82ac3cca30893284f93cc133d87276f39f8df +0e983ee75b47509844fddf43d06a989b3448376f +645ec5713498f91b494d39bbe8ac6619a20d45e8 +2a853caa0177515501abb206103e15fed7bf2315 +6b675d840afe29591d304e7b52a1edb442decf2e +aa4246332705bc11ed706555620cf99aecace692 +461aa6c463d8ed8a3485519f8347d3e8fd30d5f7 +0963c147bc9d5370ae2062863e776853744c64a7 +b6bc7591f950b6647f2d5cbf11bcfaccd8da0ec8 +aa3cbdb196eb266ebeb48c1be941df20ecc1bb90 +945cbe99df1af1b5db99d8dfcec142e5d0452065 +9c2b9de4b8928f63bfbaecc97bddee210e2cd38a +6ae4c366fec9f8ffb28f74e03fee29f300e4b0c8 +e9797953e895ec7596bb0c80d6c3e13a6170ba32 +4d63952d88ef8b61c631d92744b8b88d5900ba82 +6c668fb743f9af4bb080654040e6416f7e9b5605 +31d2a88aa62215e0046d4db0c0cfcb7390e16762 +941f9ba5091a41a41338a0b5c06ef998ab76bf92 +4f31114f7ead2ec76449bdfba502b576c8cbdc51 +636ed1de3d915dd13e94ea6f83ed418139898672 +e1b8a490189840089a0e42f357d7e18aa04d695f +e92c4914629728b8c18cf61320cf4a34baa77300 +9afcefd2944149fff4d5b74f5b26a39288b7cd59 +f802525632b1c8fa85b43911f07d8129694621c4 +c7b0320fd85f3ef25cee88621de6eb541b399c36 +ee5ee7b755e26ac0eabf5191e7747f6d72ddc84b +65a03a7e863b3a5b97576bb3fdda2d8c4380c706 +cefa54e79f57eec0b1273f69ff7149dcd90c7ee0 +c2add6ebc7d17385f7e0d0d9fca5fc98115c68fc +169f5f6ab3818fc14b9f2471ee0d8dbd61d5e566 +9862b8aab2db9c82fd1012792783a90ec79f7269 +0e1e33051f7d782d2643d645eff67157c37370a7 +8ab2735a3a614a5e95b2f53fabcc04cc482a0abd +1d400e7242d8570c79f9f34c392ce02e217e01b8 +c56d3923764328f6767dec2e5617f562cc88e791 +f9689fb9656132e1c6d186851563f2b968643791 +08d845c78055627eb898cb74bc38274794351b17 +ed3449c7f2b4a2f4f1548af509dc9ab1960e9fa9 +0892fc2908f83d76b147c3ba1847af0056a47e9f +eeb9ec2b66bfce439d6ad3f25e364d3b1d826bc4 +55dc9832dc56cae9f0bf180d2103a1d20c1b1ee8 +b05f9fa99ca30d7ce2611a6deb139f2274d1ad3b +3152602658285f9edadaa1d9cb7cc4948ab8fa54 +ba620c801834cdcd41547b08712734e30e84ae52 +c02f067640c67b1aa5013207c2c7782ac6b97399 +3349e092bb3bf21585d52e72e2c782692932b139 +26e816229351dfe7578c758ba07c4d2d2a891b2b +064f086f49fa410b664d59a0494367c421ed2f8a +1241b04b4380b1a796390d32183e3e738d7b82ff +79879bbee2c8f0b46fe44c80949e24b3c11ff7fb +2f4e7d5a0130b48ba687536a3bd5623fa906f9a8 +a72c848bc3bffda7aed21ece2b07327153fc11f4 +b10cfb970a746327ce47764050473ea27b15f649 +b30975204e2d948c1ca8d33a9f6e755f86d8e200 +e54abc8237ffb5e2172f192200fbde85a100cdcf +0eb8e5740eecc2098cd862cb5d1ff41f9aa97eb5 +0807b672dd1a7ee6f8038649f70a66cfa3ba4fed +e22b4d2a35411b0b2270871f83c19e9f6efbfa67 +6bb9d73ac47b68b90872d97b9ac1e1aa34ae72fa +c83faf99c08fd4d44d9ee38d1c3ef84c273909f2 
+cf91d2b46870970ec013ea2ef0567f695ca80261 +151ea6f1dc4a40cf854a8b2d9fed22ea457d2afa +cd29f730499023601901dc9ca801c279637c5a81 +4fbaf01100e4d6ee1823f1b25ba309fe73ffb6d9 +49654512a36b27837b069fef447ebcc460b0c911 +09df70a379653872798f1284efe95240944f6af6 +43e9d988417d90e85868aa09b5c53e2ddf0364bf +5730ccc0f1a125be76253006f14a6d3a39fec5ae +65012f2f3ec9d16629eb8577d149de30257127dd +ae05bddb7e816fd0e14e95cc525e06caf9392918 +a65f3d75c5cbf99deccb00c9b94f91b5ad52a050 +bf6d04b98e0af89f073f4b71c5125017c9aa079b +6f1f25365bd131c0caf19acf0f4fd02a3535f538 +23a87dcd1007f73c4a6278d230aacb6411c71266 +a6d88d33454805c4c3b9f3c50b1b2482048c32a2 +d6392fc14b8c5e61bb6342dfada3b5085dfa691a +ad61eb84269497ea2e8d9e6f3b1a504d9bf82d7f +b90e7d7139f69d50d53d5bab66a560785596bcb4 +237f4b3fafb5bca89627701cabdb01a61ddef306 +8df29408cf5fbf40bcdb5a73d9eff3e30b928638 +02f0cb84d4e8f2c78189f3008c327db6a7dde4b6 +d7449a49a1e808c3d2f2c87f6b6b26dc8cbfe638 +096fb4a6df33a35a8a4c28cf6707d6093b8fb483 +54ce333c923bc3d8107ed1b803575c249d92a7ca +390a82f6f49cad470b3278465d07a9320c163fd3 +b44d920cfa42cb0ea8e279c4401f565577217323 +3c2a82b4460be3eb08988c038156f24e690ce149 +fbfdc61792dcee3d0102859ed2681489b037339b +5d788ec362a874cc113c2204b06fae82d1d70ea7 +e4a186c4590ba156eb3e45862c2a5b4181e2fab3 +5f7c94ff1e4c755c47343046fa0ed6823124b85f +aae35772a13f84876be5fadd919f1265159acae6 +22680dc843e4692474815b0c9ca78b9f4f1a116a +1c23f12e590b2cb4a89314c0b933f12b7193a37b +674cc0ecfb854619f3e50df0e4baecc67c73724a +fe41f09089b134bad7f40be0ea4a6fa7a691655b +12327d1afb02007b3b736570856176234edfa8bf +26fe548ac5f3ff1e700b2cc6890c2d5b152234b4 +861bbb99177d314267023bf3699ec069f3bda6d6 +cdaeefc0fc597b0b591b76d20f979c1207e98880 +2a370c32d2c1464da03bc2440bc96ad23059e428 +6bba3c06659ebeaeca823bb7517baac4425faae5 +e7ddfb8e15144c1a1e48d8b98ce1a44b666c18ae +62dd1f31cd4e2c8250c587b557f4c2be67d5e495 +3ab9f2c8f9512bf98dcd467971b3ffc6d612d308 +00120f91cfcab17bac165f7a4719019a628a9db3 +3ee94d218979b459196743ab0a3d2957f72422c3 +6970a0ff24ea28a7500763ec1b72a671bae412aa +ab356240d60a6e7d6efce1a9638415f13bcf6591 +39d0bcbfe75fb7bbbee7d4bb72a77cebf03e39cf +ae202f1474cd1ea41a5172230fb083e1bf932d17 +7dc9c7399c2e313bbc6264072f6a592b6915b082 +fa79ac193bddc262fb35a468c8bdfaae536bbd7d +412630c97420afb50e5278d3406e0cf4b08d0b77 +eb1cbbbc594b324145c3155bae5614a2553a17c9 +c948920cd3dde6dda8767bafc8173c0c37127430 +43ecf30e43686b2a11f6b329f5046a68082b7272 +49b2330923275c10d5faf66681fea724f9938893 +ea6c2b9b8479325e3c081252b59c61047988736c +d28da9e2bb92814351486125aa35b16d112f3a76 +393a89e4d5b67ce3e29678656d73a0dbc2ee930f +daed716140202e583ad4cdc98fcd8b4b3aa5ae35 +51ce09bfb256ad9ac38a8e071f36b0097f6fa68c +36beb23a74208850b1bec50c966e985aef7e4075 +52894d78609a1022f6dbc4ec4fe32fcc31fa9366 +3988b5a02009b7589973eeb2cb929f2d37c4f409 +cbf788987b75c11aa5c49518c5b3cb45e2c177df +ba1d997f0df5b17a0899bb643a467e95332cf0d8 +4a0453760d334ccbc84d00ff67a87865fabae97b +d39cf4acbfe4fbc26e001266243644ae35beb712 +d05b5e0e3619b87fc46e731ced5111f47ccda50d +bb98ef0258299046aecb9fe0070ec309d1cac401 +080b5e433c62201fde1221066f4d723e3ef427dd +92a6b2e9d9e7da09ab27eee906a8c38c0219f390 +04d7b42001e7249bef064a175aa1236be8211e4c +75855c9acc21043af85c9f161fdb0f68af165771 +2fc69a299b1a7ce0a9dd2552a910f319773f3eb5 +98ebcb90dfca1bfcc7bd2cecf53cc12f7dde6970 +db8025d7b55de50ac56606023c838bb9975dceee +f1530622d659a31a36b4cc5b79d3ad9302ecb384 +a9e946a8ee4153ae7e45143941da7f61fd04321a +c6f17e5ec10ffaed02b111f02af4afa86d347d3e +a7ce2524995e668268028e9f7237dbfbae3cadd1 +407f1d56cdeccb0e313c15ddaac53b186acdbf0c 
+a8c0086c0ce76f960aedff7a7d28d9009751cc34 +e28316232274fd9444562d8f7b5b6949072d0a2c +ce5ba5110238d225394df833987db22b197dc93d +32f66b696f66911ca0c73e36ee32708d59124f32 +242c50a749fe607d1a652db0e06f453d5d7f80f4 +4375cd51b7ac544735b9c89df29db30369b0eb9b +1458f8a2a10e49686cbc0b5e14a97acc3aa78a30 +c6ce5c4febbfb715453373d447c4b5572f5fcada +0c145a6bf87af0340fe06024ad8ccea391c9134f +b5d3ce4d0549d7802002f2d5e90f05ac35f5db5a +7944055eb6cd49f12b5d42ca0b971eaa6dd51e07 +63c0d33115ed9eedd51c7f34177a113f6b40156b +f2ab65bb852cc93aaeb521f0f65fc2ffd14e996c +c7147ba3242871a59b80860245ab60c3d04c5ecb +d9accb542e2321181468e8f7e490114b30c1cf53 +8ae25fe05b21fc819243746ad7caf4555e11df0f +a3ddfaae625902b0394f854d6b341b21684638c4 +1a71c481191a57d4ae387450f040d1da83c10eb1 +7045e190e940ad597893d85b3336afa77cabe20c +714db9c2b78eb2e4b26fb94c3927bf372a993900 +85ea5ac1300194927b58530756575dbc84dd46af +18de5b9eb32fade90cb550ca65052bc1e0095a99 +667ae97a088538b0b321579c5b5bd12fa101e04a +da9609278b099c165aa343793bc2e03c2ed17752 +27bb197b70f0475abd00cc0db2ffa53de84c9e75 +debde12bfd41f1960cacadf1239f1b50db2624d9 +c7ed3ba86b0e3978955714855a42b4a7d8c67233 +e5572a346b97cbaadbe68f0ec35a09d923a66383 +a729d22dafc85162347b87dd530c05caf64ae2f3 +c12d47630071fbbaa5b10507aa97f02c58aa37c7 +6d9afc0ab84073e890da12d0332a5987ab659d68 +240b251022182eb14ad96aa9f558150f8cb4c543 +000e009f6b1d954d827c9a550f3f24a5474ee82b +a8fd170d0ff3f6178900977ce422ddcded7c6c43 +8c8d04a5274fd92716fcf0926aea0c06e83e7987 +61e06a233a45c987979139488084ccd0012d466c +6d5f7fa2062d3e5ca89760dc09b13a16199d1359 +7e84933971ea1853295b9d73e4b75f3478498c72 +e10c7f72bc3de187cb7adfa31a1f098d0f47bbf9 +f450b642e3e32641ef9878aed22f732d314a8c4a +ef17977a93067d945566356f538640febe56157b +c2d40cafa9bdfbf9d04d096d09a6aab9584c3ef0 +87b1981c3d51bf560e628fec4e65e4bc8f54566a +f9f7fc8e9da723e776abddb6f7e836fe72136eaf +c2e3a01f8e6f8a021a1551f72529f92c9a7703e4 +d7dcb4acee7cca3e54b11ff196c0c26528e665ae +3c2452d9487b0b6b5426fc7c502b4d8115236051 +e430e7bb4e1316f3652bcf9ac93ecda4aae3729b +e21cec4d51eaca6fef39f717a12355853c8e25a8 +0a5c494f13f21e009f531c2a56543d274a8c5932 +3f68dbe78481050ae64297153361f374956140ce +a54ee0a7552c6d6a5fd5ff6e0b67ecf511a8777a +a1d08c47cbfe06de1206493bee12f301386725e2 +a91d9dc6a9ad7fa6aca2c9ca4d9c7aefd1503585 +991ee479aee8194c495fc11e06f91f7b33809161 +39405d7bb73434ad12a0106c15cc194689eb4de5 +89275be2a434addb83a29a275b63113f4500e328 +24a1a847d4d5f74b57a02c5898af9364aa83debf +6ef88240d12dc57a102450dd26ee7a0510a848df +3299e38fb5351c11d9beba7400722773f3b74e6c +99f61cef6386573c8cce688a30fa2cc82a1dc05d +a49f8c20eea4af67ec54408cca737bff98628769 +7cdffadd7c11226ad6973a1707e404fef96dd541 +d853339b5945d07de64c5d8738e89259dbd40401 +137797fd34ea672ca506d1c8848d0da355baf7d9 +9b00d0ce1008b4c3765bf57817dd5241bdeb5c8b +81984aa7d079c46380e4baaf49d078c86466edc7 +78e6ca8bc83669866fdf9fc5ecc19797f4011261 +70172cb5f244c48d2cb41621c35858ccfdf31997 +a509b26bb05b83f9fbdff3465a2acfef5b35ae30 +694c620e98472213a53e932214054137e278a073 +67518afdc981945e4cbf620ff05c773934607a44 +2d3dd578113df1602cf753d2b11a4e802f616990 +4abddb8c6e0df6689eee21bfe27aa231d0ae8dc9 +ee3ed04d53a5c7d1f60a5fc4d7c6832a7f32d3bf +f76269f3e3c431fccf5d9991a8a5da27977646fd +ba30d4f9339aca62283d1df7756b42158f637931 +31dc8c2da848a7eb21c8287e23990cb3ee8b6307 +68ac8fb1f847b3307be2a6a9a0fe66235a5e8c4f +8ce0548b386ac1e48150a945cf36dbb6a0bd0ae7 +2b4439bdb73d8cdb6637d275f426f13135d415fd +3b0b095afa3ef1b73a2bae29a5a131bd02c0e714 +9170bc3cbca5d8f82b02ae1e33128c62fa2a00a1 +4ecac34a02791711bc456edca64c086b9aef357b 
+d70512d3069e6532b7069fd0c8fda28d75324293 +9e39cf719ab85cc10326ed1d9df2273e75b67b89 +65c44ce096871da2588e1c140ac91ef771fbae97 +a8949852e1a6258f3e7146d5a0b073861d12dd56 +4baf74306ea4d6d60d89c2575484dbb111cbac83 +4a410aafb82e10f1cfee2062b5cf2e038a3d12af +7c7cf4f235cd2c455b2826e96803b1a6a47ba4f8 +d42801a7b4d67e49ed3d417db4efc7dc6d4b5ce0 +9964dc1ba45079060e594be6429829042854b4ff +db275395dd0a2455ae378265850a90a3025fac09 +65f5e35193414c2998a1b5de2f959cc785f1fd6c +19c0f1cd0bf5780a7c2a8abd5d5d8dcdbcf2fb86 +279e575b7c82e95beab30d37836e1e56176d7ee3 +3945e22c503109659c8d463d4674d153a6f5e8ec +e553763df924e731b9aecc68342af73ccf47ad2e +128c6f4780cc59cfb76bde414ce42bbd544efacf +ab3ae67419bec16ebce20cdbcf76f2e8508b35fd +42986dc0132012b6150eb6066fdc1047d57fbf29 +7ca2a2cf7fcf4afe8dc3ebb4e7f8f9a599d5748e +7c478b9c7f099c8149ecaf11b917d41b5cb36011 +6cdb121b56dcc9cced06a26d1b11bdc907e4aaa6 +2d60a7c9131bb6044c9636d42f6888295a519dcc +883b0ca02ebd243bb393bfc6144974539735d64a +d4c710c356bbc78529b427336e4bf7163a904239 +3952896b9156a43e6e2193cb3ae8a71a0cff6923 +79b92f4b2e09bb0bab7af1b036c03bcf075e2682 +6e0f11b530638be7f478b43a715dd3bcd6b17d04 +e7d4a4d0b37b35569ce85c261d6c9ce9b57558d2 +bad9a32c0b3f74e4f54de56f37f24a265f45fce9 +4ec3467fa91e3889a3ba2d695c863b5207ebe9c4 +d8a1ee24cafd2ad6a648b4b62be7b06f446b1a89 +d61dcadd8afce804e85d4b40ec5eacdf37f04fed +1a15d2dc834fbfb276c67a2cd73d7e8cf650bfb2 +08d5f0581bd3f7196c90036e40f615c43c97eec5 +020a823e09ab8f8c2f13f78aba48cd5549848cb7 +fd091455b9ed6ec71dc4eabf4b59ffb5650ea2fc +c0d25282a77168ec25c503cdba87a0b16f73e759 +19094aa75ff7ab5a9331eeefd36c15a201b0ab62 +2e74bfb1f3a9e72a00f6727067bb42cc6d8c4db1 +e3cfe5af31c4c3e4cb9bc30ddae635241b476b19 +d030d0a5f16a4087cc56137190fa0e7ddf19dacf +fb15d9bff157666a98e09d0e75cb0f05d9998e51 +64cbcdde35d49cb5220009a855561f6a440c91c9 +faf5e36c67de12654252e4890b40297de6f0f18e +f7ae25006ab8f8e2beeb218acbc5d273376d54ec +0b95bd0ff7b9d14421fba10a50634c26f3bb0692 +9db0f5d741239f0adf441c69f7037f1143c99fa1 +1a528a69a27510d5b3036ecef3f8ce416cc8a9b0 +3f0ba68515f730c5edcd6b5f7a2487672238b381 +28c9b5d17fd3e52e27b5b8d6d5338f823f8abe96 +52a4638c81b3feb5cbd2b66987b1c7fc1ca7ae59 +10af2962663aabf4b56357038b430adb7b2d0986 +65961456d11269a4191a41b0f0a0f2d92fcb6907 +46e7cc2ba0bf218cb004f58ebc249e5e72b8c29f +b616399a316a7816941a498c09de81c3ecdf0f03 +878cbf5db93ed95a2ddac0927543addd0d6105f6 +6d48e2be404813f7d346516d519369ede95d7226 +c968e9268088153bcf51f3555b80f69e7f162db3 +405f21274bd606e89a0366cd8aa82e6dbaf8050b +2b9872f1248cda295127c4374dbe49850b81d95b +c94ea8bfd1a74b0d93a2a207a1234b0ef1f73d0a +b0f4c1bd78e59b33cd73b510dac2b45e3cd735ef +158c5fc595eb5ebdd337f44438d98d5581a87756 +a210729626a48d3c75bf2adef15d856d0a9e5918 +248a68920a184395f2fb66fe69f7a2b1276e0f95 +8ceae55b2d091350328e94bb7e3ad1b2048efd6d +26c43e258d65949742057d164454efac73bbb63b +cd76aa45608dd3370639d3ae4d2e774ea7c3e5a9 +b645b284600692840ecb34473db3394bc354472f +9caea797113b583b5ab74990ea22db63d14c2f99 +9e9394a307c29b74289f20464554131438b34216 +90519a813017e881d6d95e4df8952a393d1a7726 +7dc1bba5f4ccf529d19ed517880a10491df307a0 +653abdecd41eb6b1cc3315bd4a6e5819d1831df8 +a644add72093f735a99ce94b304e91703f250b94 +ca1385936bd95b3005b923bff4ff0077816e3d68 +e85217770edae3f88d5114fec35166bf7a80e4d4 +47bbe920c329ac749fa3dcfb10570fdeacc6fc3f +dc36563ebbfbccc065d91fa24fe84f9b0402ed68 +1274f8b5947b2a5f87801d40503ef5b8c883771e +694ec8ba0a9dc85fe62b5cb5041b71198936be89 +c2fa24208e4bf7d91592ed094f88713be35fa708 +1ee394eff8bbe8488411ecb68712b0a6f08280f5 +2849071dfbc3e18241f7a5243d4ca06e4418174e 
+c0880199e5c76be3640005137c2c383f0c84b57c +f0814435ad279f8e908c65049775a8676ce15f94 +8669e9660c67b2489c0e4308eefe20b8fb3d2cf7 +1c61acd00fa431d425fc79b0c90ecefafecb3ace +999e5dd9f1857d3f65650882fa2cf6d19ae3b9ee +04e19a432042f4044bf0d51e3657f890f10cfaca +91a2a3da8d7ab82cb4034056381a44c4848ff19e +1c0a7f35b6eef0226ed7af5ad8ac87ce07fff38e +3458f834c56fbafded76527b01578e5ef34b9b42 +80bb4f72ab3dc526c23a3dff758e7777cf1b3c09 +6c4bccb7aeb5aac67d498b8da720199c63e277c1 +9061d2012210b95c86401af9dacb0e63ac871657 +fbeb39fdb55d24b827bdd578cd6a471a0a1063b4 +748b0badc59cdacb0717ac7a55a490f7e0ab4d71 +70952d29c9b7db955fbabde8800a629665a0d24e +a576c35a831b1e889631b757ed86916341fc7202 +a5858355506446cb36f949d98bfa811e7d37e76b +a5f97dba2ae0b7949bea49a0e7068a1c6ac42ae5 +1c16e278550ac208d9aa1a65d0a9795f4132bd5e +5a2cf867e368a77f135a855cd1de59ba5fde99a1 +132d7c4172cc25eb59c7745e6d74cc4a4dd88dd6 +061dc0ca6fa3ee55aa7e688910169e4e6c74257b +e2ee40eb4145cb1450572a7837ef544802b99866 +b2b0c6a0f14f3b76df69046861fd04972ac9f3ee +dfc83bc2dc59d24775e3e8228beddf9e654167d1 +8ea9a921345fa2ea894bd9b953081f15713224cf +c6b2ae26499e736ac081af57a7b41c39a7b97fda +abac6c071e35ac30cfe3317089061124ac301495 +5e0f057dcb8b6c21806b379cd349d85598f5bf39 +b4623e58d28e1d790a508d26b754a752f70c288b +621f194999cbead9449bbd7222c6e8852c5043c0 +5d952c025f36694c06917bb1a5395fa13ccb84d1 +a0fffbb1fe7c929f520de855ce045b840272cca6 +e2e8fe4405767b62766efa00f95dc7b501e9eff3 +d24495608f98c48e6f3030d4af691b009d09cf41 +826c9569ab9e52eb031dd692baf84337eb217cd6 +470ec40f14d3d077afc6702a1c1c0bb4baaeec57 +281237526c3d4125250aa204bd6798e16cca4bc3 +2fe0fa9e25453b1797f4cf786c40eddd64483d3f +d231ee5ec82309024acd028a83ae876d9ffcce94 +71732a91bd25ea50aec127f95b7f8b8609db3da3 +793ec3ff30c242c570c9a9e8c95d78b05c7489ed +4b5cb898edd34436e4065c5d3de05c2ec7d95153 +23d0930474aee4957dac9571e06d40757b5535aa +992a3b15640c2613b5481fbe2cf022178e5f3ff8 +e8c0ce0985596758a82b71bdb6759c72af43d06d +846bb87419fc959197879e04dc9c15f3723555d7 +8c6907fb70ecd74ffba960283bf596155a7fb273 +1cc3cd345edba8eedbcf183afd6e746b5b29a422 +f3e48a4d193edb98933989cf54dfb46310ffdd9f +30a4a422254a4026dfb77d2660467994b18b1eb3 +5f5ea4d8846ad79c33bb149e6acb853ac78b4247 +c6d0ae6864fc9ac5307e23d283b1ca4b291b21e7 +2de834aa1cf63d1e6b7098c5528e4d021f131f00 +55e79f31165cd20502922ceda572d3b7db9cb41c +6ee19b994fea7c9447b05e9dab49350e2f8c1377 +de44cbbe8fd64ea13caaddab77560a48806c2180 +a2619695901d714b44c3941aad3689a40abcf363 +485385e26c8fae0a7efd34ee11ba645662074a13 +8035d023b91b92978788fcdcaa6062c38883f4ae +38a7dee7cd042726d64a95d3c5c3d341d656d68e +0a280b46be0fde5d87ed47fc7e970e3ed494cad6 +aafdf3a5bd71126c9ad07d93285966dc04d40c85 +93cb2f4e32053398d3602e0cbdaa12e8bca062ef +4ab039bae14499bc4f432f9f20a2509fe9310fb0 +29c411d098ce2a631503bc168ebb0ee6f65df497 +88ce7a6fa029b6e8f51c92f1666b02a404b827e6 +4f98a1294468c67a563b48d1ab6b4766a6d899ec +ef0e221c7edde75caf0d5bedc0d93745890a854f +94a1929f8845d841bb1f47667c1e489ab21bbc56 +4ab4b43b33178d53a5a445283d6be39ac57106de +5ae9d4c70fa9dd4cc70037f4920ef15d4fc63d2f +08af6504291f39451465dd1f1df6466e61c4595b +23be51cd2399b9825facbac2a88475450c5927b9 +750af4d960982c1655e9edc08470aada3c72a9a8 +47f51c462114ca3590df18c3c96da04a217b79a1 +e9e46f551280ffcc98b45c3c9b18085ed14f38ba +cb0ce99d40bb9d942aeaf08cbb83b075927a96d0 +62045a56ba933fab5e2bc61be05eb5bfb81a8527 +9112ffd4afcd36702f2e6ff7aafd653edf2557f6 +b1fadf90fed9739d72e98b56727e471e070b85b3 +75fba14591fb8de7567bd2378b5c5c114bad77b1 +7e81e309cfbc2385f04c4d377c4562efac6ca238 +0a38f333c2a4adc64c5f8d074508632418074755 
+f4aa472201d2337eef2115dae23439e0a6dd9663 +d1adc7acf92d644b3e1a821668a05b024974c350 +699fc0441179a3cf82b303cbe25bd5a3be551ea0 +697a6cba6628b2f233f2a1cf317fe8127e4d05eb +397e0ec274130aed3bc1bdc461bad41c485f629c +02aba4aeba128defbfa587e3f07efdef724666b7 +5e31470b18e9dd499b9f8787056cf0e68d52e055 +9507c23d0741682f71cde608ae517c0c1ad2a4f4 +16c115ba95f7b71292bf5c00a1d425a8586c551c +42ef162afc768b86b881bd4c59beb8839149d76a +a7714cc78a381995633711d95465e883b613ddd9 +91519a0367b9eaa66b5ffe27964ca4c913093aed +e90d736f7f5e32ff845a898036c529518cce0c6b +82b0860745881e030c57a3d1bbfea46bc404bce2 +26c3e814a69b6335ab65aeb2c4a1e97015595206 +7990c00e8ae3117587f54a880d7d20d0578d4646 +260d91a2ece614587559ea3bf37f76e4d5a48beb +bc19bf14032da3bd5d3e6b86fdddd47f80152747 +1badd9612877ba84a92b025096fca1e0a36f07e6 +4f339c24142d0442f20301c1992d523946d1c6d3 +fc89b29738b18fe4c0ece96bf00f2cbe687e45db +ae3e7378f86cac99783c3de50d0c073e79a92759 +a1e6044b7e31b86d42b6dfb7ddaa1eb6bac2070a +948ccfbc7690989a96170839cc5d622e12e0b044 +9ec16c5ec0fd561efdf57572fe22e3e768ecfdc4 +c1849b79963362d71d09ff4cea2c46f9b3a03d89 +e82a29a4c2fcb1ecaed942c6fb550a14b916345a +b8412b898cf77763bdb3da689bb1bc9d10447116 +147a073799722bed54c3606c8833cdd58b1aa1dd +79f4216bdb44dc618f168d2a5061481350c9a38b +c4352c74759634af80f1f6acc69c55261dd12acb +b6a68adc1d771af97938d64d3c21ef4fcc99cfb8 +b5f8a569e73948a0930d18622740f52ab91c1a42 +fb784ea50559ccf087521510e7760473038cef2e +b7aadd7612b6a1970dedfdc175ec4780a8732703 +0146da64eabaab5d7f53e1ce58aeb9e74dea18f8 +7465d748cb38e50921b446953ef27b0c0fb6abe1 +4cc4e5eb162c622f786e1c98a9e00237f5687ee6 +a95f85c1672b2ff74f860a5980d83b440715deae +afb99fceb6c0532a769b61a81e8dfeb7cb70a86a +8a7b29224fce56d21e0b4d8b83cb42c32a4a2e29 +2a4c040c4c53d763d1263d8cf797e0b672c154fc +a81140222a3c6e2323cb290f353d595686473491 +642db49c7519de4227b0dca5b23144945bbf54ad +40df1682f34e463ba031f077e211a1e8eb1b7e0c +cb778e62e3d6b15a836e50d65a18a269a8a82577 +3aed3c2855ec616b87c4cf79a69298ba45c427a0 +f18f929339d5ab26ce8e26e716da4ad095474768 +8e03a13f5223f71d02f875b4fe4e48cdc1ea3738 +4443f53a766617ae7c30c48d8cb55d6fdc3ceb30 +f38b2e2284f48100513689571ce9d41cff63bd4c +a5182de1c12d3ff131a5dedc6130e02b43c3b267 +9054ed7b6b3eca10003d19098e5d3c51a8dc071e +2f0833d2eed57049454b3e0f41dc02eb7587bcb9 +a9883044091acb92e2edc709d5136af372d06ebf +1360b846750bd2c7e31cfe015c77c5968a9b541c +156253c33dee7a50df8e9e5e78adcef72705f3c9 +42580982bcfa232a30d39a26fbcd605cb041e092 +5ea3d3524c6c824950289855e33037576e741d30 +87c21fd886502206ada74a652082ec8dbb0fe7c7 +958b77776ac602ff78ddee5ecc758ee170cb5fda +0baa5e5fe65401be934349bd1f067b31a4a0f0d0 +0f18a06bd539d1de1e3abf38469e1d14030ed41b +cc06d014fb7fc7f5d0fae8c3576c134a281ca14c +cae072ed5710b9cea48c8cb0b011dc3a9cdceacf +cc7928b6593cb03aa125a5865684da7fc0405d74 +9f71dfac529fa72a12235f016cd481b02192c3bc +2f77ee6c43941f768b9771bca5a02332d89bf80b +cb6bd932f7a9e1c4845bd0d974f8983f2d5d6968 +a3b808778e2f90b6605b09b68cd7f3eb4659477c +7e7f7e823c05955c57123af1b61dabeaa5221825 +784321dfdb51fa207c790a3f670f0022fd575775 +69891ae41f6320ec437455913c5dd6d76a0241d0 +a243d306ec4c022a4199b6e160bdeba677415fc8 +1990eae2d51f66f9b5dd3b2d2beaa17f2b95599f +4d55ea7163e490d3f37218482269961898c62a87 +4d33676bdf0c6738fa3088d5e972ceeadd3730e9 +54aefea5f3c14105ae08f09aa60ba5f6917b1b88 +d0fb72f2df7282c349193c0ea47af281034a2c32 +284f81ec4d1297d3949ed95a114f4c10011abf40 +83fec7b4d265b21ae38e07c6e1046416b7758993 +f78bc049eba41b15d9e2ea28bff38e508b0e71bc +bb940fa349ab09d69edcb5f3a8fe96e55cdb69b8 +4dce40768f628700555244e91a69c5775d6caf6c 
+d176f57c12f30fae319ccd5b50b3096837767ed5 +1bd87d9bd116cca4f00aa031cab25897d35418bf +684d8d97b04fdbce1a08fffb59e1e280318cdfb7 +99b1b03fc906723790db2ebd04ecc51b8ed52052 +ce7ff18c9588042aaf62c8c71c69f769a16c4a7a +ee14d42f2b34f4bb5bf90d8c813934aa5d6b5e01 +d7f730f7658fb4af7d492e848fb759d031726e34 +caefbcb40174cb97c8361dfaa7899beb20202509 +fbc427e1bc2cf82ac3756c8c7de4249b52e56505 +a2e4de47a027a36757d181f61e2d3fa6dde7274a +85b17ae766f1da36b8ed0556a932d63bec08c785 +36ebbab9aeba7a8a04ceb800b2e445a85e4b2c0a +5826b9a1cce4a960cbf4516004b194c988312730 +a06ecf2bf25af0a6b32be1d6a82ba618d9ecbb33 +8f03971de78085457c1440e3ca545ae5cbb5230a +06588a8ab74f068ec61b89de9ca03a28f5ebd6f4 +72bd7e434c944937912039c7cf79c07bd40241f4 +14b5d1ee3b508505b96a3f403f1b6685e110c3f5 +6ce0eec1ba71291ba928d4a825e582c919a2457a +c25eaa87d7ff1d1fd503bfb7049a41bbf282e916 +0ca9829040ed3d37f3df6341e28becc8df839409 +4e516c3549d4aa6a057dadc9f9f6f9aeabfe35db +8a63d4ed82617bb5f3da2ab351138b4690c9e03b +0956df18b019953eac5eeaf6eca49674af37e52a +a8dedb9efe2e9bfe658503702a0602fbefcc3316 +c0787c279f755fe76464ca4fbc94e24add71e3ac +b11cae312129d1e47a4102f87ad8e1f0781d34c7 +248e4202dbe0d45e76e930b614578206b3dbc383 +a0dc9ceccd24357326241c97c07df17c93e77420 +5468413e75a18f8d7acb2d26c2b80bddfc9adb99 +7c81d09ca7a80c686ba8530986cb53e555eb60a9 +8df13df883dbd7e8944d8098b74ebf3aeb4b735f +e2ee2a92b8f493b2960c4e1ba2abf4f2a54c6758 +c470e5a7568645a10488f402443f3701f69403a4 +23e3897002ff686867b2372767d5d8f121cc9b4a +f1f670d0fe617fb374b15bcc20110b89b6082aa4 +b17028b6a57a1301be1bb2021cf51d6fe4bdd354 +d3c8813d44913745f4ec4253e048af17d4cb159f +b1b3a8940587229a063dc836cb0422065ac0d292 +ff7672f15b344e93c02d0d3b9676b8070a735e93 +331073170c761735eab3c9a516903016c2aad8dd +07605b39c50fbc320453c583ee749ae4f97126d3 +e710845e4cc7eb6a1d99073dfbf6f9278c24bfa8 +0cce9dd80952ff900e8704e6115f9c1bacae894a +2ffdb3f488210d4ebbe41759618bd8c6d15878bc +16d1cc466220c90c009bad3f09c2a085bdd47d5a +1a71e84c0c599408ec18a189dbd779d5e20d4e21 +51de15d048a6f3b0330e8da198b2d17260ce8c85 +36ac969d234f196366b404c9c714c3b8d30ddf6a +244d6af0cf929f993a2ed2de0ede4f57d501eade +a834e49430e3b3b1cd596dc1338a028e7166643e +2e489b53225f71cc5b73f9aaef5c692737c0f6bf +1d52cef8af071ed110d3ef8feb3e4b275dfddd01 +ea0e27967a6c62875355c5f423e4962835c5921a +02535aacbfed4c3ab00a0945d59933dab54f6fd1 +0cf6cbdeca67c729a260b7c1f5710b7a1e0aefa5 +ca42191efe091ea06d25dceef9ebd84df8ce75e4 +c5001aeae4fe17f8b7ccc1d6c604727ae63c35e7 +730b9668fda289b194a3b66a53fd3745ef42ca32 +3207c07bc7ace3a01ad233641f1df91ab37a505e +de657634cef20a388d43127a184619105d110a27 +dfe850a4b3c6c002dbee134a112f16f8e1b974c5 +1b0e7715e01a62130ac573c38834b09274a7a866 +99c6160a2f6e22b5040bb47a279f81b4224fb222 +ceadf8419c256716569dce2c60d98dd703bf2cb1 +d8fb13c8444f71e7f309d7ccdd7ef329a47a4df3 +75805d5f1f22bc6fcdff850c88a4fcce7dc3e17f +e6bb6b913b34b30af0e19a93bca4f55b39579f88 +c29d27094de54106cc903c2e0dfeb89cdcf9ae02 +41ae19f40a339b6b47fceee00f512d849df292be +04090c2dadbd1d446a8364df894344687131f841 +ba47d6e2e838b11290d702d1fc03261d27ba59d5 +ebd463e2b4b89a626e16b43071b06f3145cfb661 +78141478f00ac19912fa2b283e8c91e30eb3a7c6 +d8a4f3fba1b67bb6848489e45a92e9c1229ff7d2 +929b032a966f563e8401285e4d96850b17f640da +82a3799090db99bdf611599094170b85bd4eee4e +620fdb835eb7e095e9a34f8a165843f81fe50328 +66f89413b6f050fb903d58b36ec961461145af82 +fb824fa4ce932e860604ac21db4b555c6ad1114e +d8c925f283216521073497659088f4ba707311c9 +a28c2815223f89026b6a198415a1291cd67eca0a +8201b77f669191dd01caacbea1e3b5ffbab92962 +dbfd44e667bdeeb17295ab40d123ddae70d3daff 
+f324f1736d24f14c7685df0f2a2cc4bb20999fa4 +296f977687e8ce959a2e38129ce1c0d31e755d8d +e4888dafd50eaf43e1476701bd26bf940865d973 +17ca1470986faac5115d246d3f9b78244b7215eb +0904469f246fecf43062b2863bc81f730a96b20e +09b7e506802fe6fa4a12154e322dddbc34553f9c +b526d3ce8d4649e96446e1e8947b674001fe16a5 +46665024a071b4916afcae4b9ed3cec0aaeabc7b +36fee230f41e1fc89a26b1b7bc7e884862dbf56f +702806939cff2095b2ff97a08d84bc14d1dfc5ae +30ea107c7831a846dfe6828947249489468f3ef7 +8d51b266df630345c667bdbc07f172b906e627af +3691b1bffd90518b4017ccefd8c15ffaa8d87d6d +6c81ff344b4285b42f2733cadf42536addd736d2 +6c15f6261e0d7d09ca59071955ce30d09bbe97ae +aa3acca1a17c375731214851c56020878929a068 +c4cf20cc2e3665ba0b7d948683bfa1e82aa9b7e2 +9ef570f878a8c2d9460a99ca523b835535de67d5 +7d76f893313ec0b855d1dd6ce9b8fc9bc77723cb +e7ad68df97b2c9bcdf6e56cc017301f84a7f9b4a +8f5a3860948e5dc213ed825fb4715f0ffa013ce3 +27e795d99164a2372106c9e1f118cc19258e41a2 +fb438b2cad9b7583f4eda4fbe6fe9e9cd1f59f10 +6c6e755b03472223c69700bb166d81d9adf080b1 +8cfbb990201cb91fff3db779885041d2b5c52c1e +eef5130bd17ede5cacf8be5881eab0c09a538bda +f20d8a304a9009a79a54867664bce33473947272 +aef5dcb164dba680b436bbb37faeeccbbc4fe2b4 +b412608a7f30af28fb8615e4b522b7dcecabe212 +799415d8ea5094bd6cca8c178d6d8531827da191 +ef5ba9f7f4c954dc6208e9a47fdaa730602fa27c +6204780ab854a5443a52c343534637fc227dd70b +58733bf4d2489d1823a432b2f515f22fa835a88b +a8f30d02868c8ffc924271d9da99e0c180477a1c +c8baff658f6506e04d7f530d9b266ba2d4b632b7 +87a31d871a336bc60987492515a20ef25d18d0d4 +e2929ddb475b033444f85c3cf7e5ca38e84ed7e6 +c7fb0295ad6226798e65332c841f6a1508eb9efe +db80674c14610f0b964fb574ac32c6984cd226f5 +ebed652b9c7ae1784ab032b2023445e8b8cbaa41 +08ac1c4c2c7589f889b2bce3687a724d0c636c40 +dae2a0e1c908135eebc98a0db33ca435ebe7ad5a +3d415472346209c9e90706dfd313728a0ea15003 +df08670661d8887644542806a8d69046e3ba87ab +32a96ae444a08d6ae828b34539aec76a835a95e4 +4a0465f87d082b8e9a22608da161f232e8d6f464 +c7f47a4f22bcd6f11e6ee97e9687b5e917d9e495 +9b4dec196b29bcc98a377d6f433638a85177e0c9 +c0f1425ba0cdac23bc342587ce6ea6cb53515c55 +ca6f373a6c76d4a4284240fe5e88c130bd56d27c +785966c05fb5fe10addeca3a86f1857329957fb0 +fd71a64340425384294a115d3a42bc8069ac9f67 +08784cd3a744ca0750c746910124a6056d46f608 +ee9284abb97ecdc3ed78a4807303124652924051 +c9cfae108e2aaea3b12010dfd0404f7bbffa5c2a +d7a63c81f8bed7df99b942d88a380c100e74accd +23fc6eff1bbf238513e2f9c76e40762f01b1737d +485c7afff53fcb4f694a5b3cfdc09c372cf73e18 +8656b25529d3e5aabde19eb42d10aec5d8af2088 +fe954e108708531e155eadf4945fff5e432c57b3 +0f5fe6ee00187bde832acb092e42431f9fa8430c +8827ce43536f7246e21f643fdcc1b1ad44c05a12 +869e1a290cb6ce44eada26c00d5abee0e5c2ecd5 +a2215dc789a33e1ab3be1dfcc03e8f7f02d046d5 +62a233d2e55b159001ed622fb96b9444fce9c11d +26fff6559df5149b98c3366e7c01236daaf2b1d1 +115e024f021871b307a7a315aef720bbffe1d54c +19719df575d3ae0d8c93c037f7f1972b9e10f1ba +3628f33e8ef1350912bab8d4ae467c7e1f3056fd +e123e08e23278c95e399b3b11da411325135da21 +77b3598df08e6f3a2b4ae157904e30d5aa2ad49a +e071ff877d67787d0a6582ac3dcbcb627dec9ac4 +722a05a34115832ebdfa990a99bd999d097a0ce5 +3dcebec3361c047d19cf639879437ed5b769e7f2 +8f37fd4e1147e623fe6f8cc6d190c304467d999e +5339c690ad044e082f8a31bfe929099d7e75531d +82fd9658604cefb93728b198e73889872ce7d70a +804983224e3f5cccbd52b26bebc53b88369c448c +562f9fdd5811793c11970c856d21c7f0c32118b8 +5914ca61115649643f88ae110eaf3da4b112e6e4 +f44b0a2d303b725a7f5c82048d7423858e78e490 +de5e6430a7c1166ca82500ab7fb82cb95cc643c1 +c174eab1c3615c3ba5dbc0c6c30ac67ab6b47024 +aae0341ca8ab04c9c169f4dde3e2e943d758422a 
+517dc966b1379d84a9ef741ff9ca43e281868c60 +436fd9441cf9517b6e8b5162db78031228b18d9a +cd7e1db2eb4709309b43cc400d6619aff480484b +5c100f2e25d49a90b25685b9d3bb17a35e325374 +362892feaf8dbf44a0429d3676f9b5e4ea6a46a2 +ca8c5f96adddb61025107907704ec344143b0088 +8c97077f3dc6794837f887a8d57bc8d3c05e8b4b +04ba3c53b4068a8bcf31bbfc674d520ab2843a2d +c91c677ceb1093b393d46dd21252147c3ddecd1f +d85824d0d1dbc389c30ba584837d82e85c5bcd37 +c0f7e29dfb195770d68e6ee608c7129e72a89e23 +e55510a4c7ea27d0e47137479fcb16562f8d380f +8845f1c8a8b45987a6fe69bcd89060ba38475d2d +8ebf95f844971fbacb819e2e05fea4e27402a34c +5be6327602aabcb3fafaf439f69ebc621601d30a +698560f44a2c58c87988498dcbe51e30ea62c989 +29f215bf015e848c5af9a9c70e1e3e052016704f +6211582a40d5a1d67e930e337ea11f1b3538ef5e +80cb64b8ae5710be8044127b678bbc0e010e79a2 +7e613e66f3b7da299b8f4689cfa31da7bb381e31 +128131f6fd6e7bb018806ed5f150a207ae8c7e69 +f686c1c3a2fce19f177aafc281d6c724977a6dfe +56f58b9bd5e4a5c6aec7f2c5a4a04a702fc3f2dc +e63e6ef318e5cc205518f7fc052da7020742f55a +c19c8562df56700121a61f5cdbc8525a46197e1f +7ffa78b92966e11b0142829ae17c871b9f6b5c15 +426952f1145f112142141f26556313924ce7465a +f975e857f57f0f6d96ed50006a7b4e126edf1f1a +8a6220895e1d634d0aa0f41ce6882c98d7b495d0 +12da0f4b955b911a893158bd3beb9b24f1a0043e +ff8441521f15f11db3c60850a1ee551b81661fef +0b88599d7b1e25e59f2da8338520ec3325de9337 +0fda61a11326021d7ff0071b6bd8b2b3517100c8 +acdaf288f8a96f77e2c34104fadf26c04307f5fc +16d04e701ed59f32ea3c4226b553b6f0f50c7426 +7b759405d39047b5aa0f0c22d91c3d254fbaeba1 +facb5a7732d083c66484c9b3dbb274ff1d6a1ee1 +f959116e0606392633e8d8eaeb710664e4532c6c +febbd51aa5181f74d56f3d0e01d38e264444f825 +90ffbc94fcd43cdbd2e54f5cad75d2a7d659bdd8 +61cfcbd1b8ef945165acef5e7145762bb510453d +5477b6eb53ccc404db0ac820d3d858052bdebbe2 +4c6156e3830087141b0014005bf955f1a87e1edc +12dc55dc446574144eb863292c3565736ce0bfc3 +a761ce0dc6d89ad3170a3b69e3d2c71bfd014b8e +8cf55dd9b1bd7a4c8350c81e469d92ec956af62a +8671360c5d830f38316ccc4f63362ded7a2d20a6 +97f1a15d8196c514517e76f1d80571fa769e28b3 +85b2c0a31be506ef27e0ca124be5a89c628de120 +935dfa6867b975280d63f75cdef228372adc40ef +63367984bfa6dcb0ae560d7bab812c622481920c +ec10a4353082865abdb5697560390653c3902065 +b7974f532d25aa2eda5e16e5dc58d3f726373c03 +f804d65a6009874a0c4d555b6e9d8d14cbf935ef +cf251f22dbe2c976d340eaa8469e1ea21ff88a42 +6998dad2b81635da9c328ef829a8b1125393c38b +2a073e0f2e510318e83c16ad9141f5c1a31cf6a2 +eb6f4e52212ccb96b2aa08e0af5949dc8c67a024 +09b9f830b520e68635c45a604641874e0f2bfeb0 +17a33c4bc2856e52acf16f3f86dd7053e340ffc5 +81f4e9eee7d046992f4091cd2d82a6a82981b354 +5a6746c9041d494e8f794e4ecfb6a7c941f5ccce +5249fba450a5865325c2b47ce5fac5a585b2ca23 +e35df1cddab6e311e0b4f0b732c555c51e8a739d +8f95ac3d57280ec506907f000e60b9bcb065b4bf +2750ae3dac18bcf9eecdf9127e5aedaeac19a67e +dc4d88520f9221eea943cdc54bd89e21e52677ca +bdfc42f3dce77e9e964ba2922c19faba2ca563ee +c3b349b83e4fa2389ee59ea9ca036001b358ca02 +3c992e03d64ea763d4b6db96e3371143294172b8 +f40f581bb9a644dc31feeea1bdc3dd6bbc42ccca +d59c8256b9451b83457299244fa9f81d0369081f +b015c20c7868a98a3cee9878553502c708fd96a0 +b6e30268a7f110d767dac9144454d2c6fe49eb34 +dbfc2a5e7753d96913593c41db73a32dac062ff8 diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/bf16.conf b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/bf16.conf new file mode 100644 index 0000000000000000000000000000000000000000..00531a9921295dcffd3b6faddb5060163a9b04aa --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/bf16.conf @@ -0,0 +1,9 
@@ +# The format of this config file is 'key = value'. +# The key has the format 'model.scenario.key'. Value is mostly int64_t. +# Model maybe '*' as wildcard. In that case the value applies to all models. +# All times are in milli seconds + +*.Server.target_qps = 28 +*.Server.target_latency = 20000 +*.Server.min_query_count = 49152 +*.Offline.min_query_count = 98304 diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/fp8-99.9.conf b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/fp8-99.9.conf new file mode 100644 index 0000000000000000000000000000000000000000..3d314e89630200e2708242d6f0ca7033e78c4c90 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/fp8-99.9.conf @@ -0,0 +1,9 @@ +# The format of this config file is 'key = value'. +# The key has the format 'model.scenario.key'. Value is mostly int64_t. +# Model maybe '*' as wildcard. In that case the value applies to all models. +# All times are in milli seconds + +*.Server.target_qps = 77.7 +*.Server.target_latency = 20000 +*.Server.min_query_count = 98304 +*.Offline.min_query_count = 786432 diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/fp8-99.conf b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/fp8-99.conf new file mode 100644 index 0000000000000000000000000000000000000000..3d314e89630200e2708242d6f0ca7033e78c4c90 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/configs/fp8-99.conf @@ -0,0 +1,9 @@ +# The format of this config file is 'key = value'. +# The key has the format 'model.scenario.key'. Value is mostly int64_t. +# Model maybe '*' as wildcard. In that case the value applies to all models. +# All times are in milli seconds + +*.Server.target_qps = 77.7 +*.Server.target_latency = 20000 +*.Server.min_query_count = 98304 +*.Offline.min_query_count = 786432 diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/dataset.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/dataset.py new file mode 100644 index 0000000000000000000000000000000000000000..4164798eb147d47ef5a0ae7796a63b0d5dba2fa7 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/dataset.py @@ -0,0 +1,90 @@ +from transformers import AutoTokenizer, BatchEncoding +from torch.nn.functional import pad + +import utils +import torch + +PROMPT_DICT = { + "prompt_input": ( + "Below is an instruction that describes a task, paired with an input that provides further context. " + "Write a response that appropriately completes the request.\n\n" + "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" + ), + "prompt_no_input": ( + "Below is an instruction that describes a task. 
" + "Write a response that appropriately completes the request.\n\n" + "### Instruction:\n{instruction}\n\n### Response:" + ), +} + + +class Dataset(): + def __init__(self, model_path, dataset_path, total_count_override=None, perf_count_override=None, add_padding=True, fake_data=False): + print("Constructing QSL") + + self.model_path = model_path + self.dataset_path = dataset_path + self.add_padding = add_padding + self.fake_data = fake_data + + self.tokenizer = AutoTokenizer.from_pretrained( + self.model_path, + model_max_length=2048, + padding_side="left", + use_fast=True,) + self.tokenizer.pad_token = self.tokenizer.eos_token + + self.list_data_dict = utils.jload(self.dataset_path) + + prompt_input, prompt_no_input = PROMPT_DICT["prompt_input"], PROMPT_DICT["prompt_no_input"] + self.sources = [prompt_input.format_map( + example) for example in self.list_data_dict] + self.targets = [ + f"{example['output']}" for example in self.list_data_dict] + + self.source_encoded_input_ids, self.source_encoded_attn_masks = self.encode_samples() + + self.count = total_count_override or len(self.sources) + self.perf_count = perf_count_override or self.count + + def encode_samples(self): + def pad_tensor(tensor, value=0): + max_length = 1919 + return pad(tensor, (max_length - tensor.shape[-1], 0), value=value) + + print("Encoding Samples") + + max_length = 1919 + min_length = 30 + total_samples = len(self.sources) + + source_encoded_input_ids = [] + source_encoded_attn_masks = [] + + for i in range(total_samples): + if not self.fake_data: + source_encoded = self.tokenizer(self.sources[i], return_tensors="pt", + padding=True, truncation=True, + max_length=max_length) + else: + # Hack to generate a deterministic semi-random sequence without using random.* + length = min_length + len(self.sources[i]) % (max_length - min_length) + source_encoded = BatchEncoding({ + 'input_ids': torch.ones((1, length), dtype=torch.int64), + 'attention_mask': torch.ones((1, length), dtype=torch.int64)}) + if self.add_padding: + source_encoded.input_ids = pad_tensor(source_encoded.input_ids, self.tokenizer.pad_token_id) + source_encoded.attention_mask = pad_tensor(source_encoded.attention_mask) + source_encoded_input_ids.append(source_encoded.input_ids) + source_encoded_attn_masks.append(source_encoded.attention_mask) + + return source_encoded_input_ids, source_encoded_attn_masks + + def LoadSamplesToRam(self, sample_list): + pass + + def UnloadSamplesFromRam(self, sample_list): + pass + + def __del__(self): + print("Finished destroying QSL.") diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/habana_generation_utils.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/habana_generation_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..f9a33d58e9de3ddcfb0a2ca3725054e01f506dd5 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/habana_generation_utils.py @@ -0,0 +1,543 @@ +#!/usr/bin/env python3 +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. 
an Intel Company +############################################################################### +import time +import os +import glob +import torch +import torch.nn.functional as F +from enum import Enum +import habana_frameworks.torch.core as htcore + +from collections import UserDict + + +def boolean(string): + char = string.lower()[0] + assert char == 't' or char == 'f', f"Invalid value: {string} - it should start with either 't' or 'f'" + return char == 't' + + +def flip(dictionary): + return {v: k for k, v in dictionary.items()} + + +def unwrap_ds(model): + if hasattr(model, 'module'): + return model.module + return model + + +def defined(v): + return v is not None + + +class Option: + def __init__(self, opt_type, default=None, help=None, is_custom=False): + self.opt_type = opt_type + self.default = default + self.is_custom = is_custom + self.help = help + + def describe(self, name): + type_str = FLIPPED_SUPPORTED_TYPES[self.opt_type] + default_str = f'={self.default}' if defined(self.default) else '' + custom_str = ' [custom]' if self.is_custom else '' + help_str = f'\n\t{self.help}' if self.help else '' + return f'{name}:{type_str}{default_str}{custom_str}{help_str}' + + +class CustomOption(Option): + def __init__(self, opt_type, **kwargs): + super().__init__(opt_type, **kwargs, is_custom=True) + + +SUPPORTED_TYPES = { + 'int': int, + 'bool': boolean, + 'float': float, +} +FLIPPED_SUPPORTED_TYPES = flip(SUPPORTED_TYPES) + +OPTIONS = { + # HF options + 'max_length': Option(int, default=128, help='Maximum input + output length. Overriden by max_new_tokens.'), + 'max_new_tokens': Option(int, help='Maximum number of tokens to generate.'), + 'min_length': Option(int, help='Minimum input + output length. Overriden by min_new_tokens.'), + 'min_new_tokens': Option(int, help='Minimum number of tokens to generate.'), + + 'num_beams': Option(int, default=1, help='Number of beams. When num_beams=1 greedy_search is used, otherwise beam_search.'), + 'early_stopping': Option(boolean, default=False, help='Exit beam-search when N hypothesis are found'), + 'early_stopping_delay': Option(int, default=1, help='Determines how many iterations to schedule before checking for early exit condition'), + 'do_sample': Option(boolean, default=False, help='Enable sampling. Affects both greedy_search and beam_search.'), + 'temperature': Option(float, help='Value > 1.0 increase sampling randomness. Value < 1.0 makes tokens with best score more likely to be selected.'), + 'top_k': Option(int, help='Limit sampling to top_k best tokens at each step.'), + 'top_p': Option(float, help='Limit sampling to a minimal set of tokens S such as P(S) >= top_p.'), + 'repetition_penalty': Option(float, help='Penalize repeating tokens. Value > 1 makes tokens that have already appeared less likely.'), + 'no_repeat_ngram_size': Option(int, help='Forbid ngrams that have already appeared from reappearing.'), + 'length_penalty': Option(float, default=1.0, help='Applied as exponent to beam length. Value > 1.0 encourages longer sequences (because of log used in scoring). Value < 0.0 encourages shorter sequences. Beam-search only.'), + 'use_cache': Option(boolean, default=True, help='Run with KV-cache enabled.'), + + # Generic HPU options + 'use_graphs': CustomOption(boolean, default=True, help='Use HPU graphs if possible.'), + 'ignore_eos': CustomOption(boolean, default=True, help='Run greedy_search for full max_length to avoid device<>CPU synchronization.'), + 'max_iterations': CustomOption(int, help='Limit number of iterations. 
Useful for profiling and debugging.'), + + # Model specific HPU options + 'static_shapes': CustomOption(boolean, help='Run with static shapes to avoid graph recompilations.'), + 'bucket_width': CustomOption(int, help='Pad shapes to a multiple of bucket width when static_shapes are used.'), + 'max_input_length': CustomOption(int, help='Maximum length of input when static_shapes are used.'), + 'trim_logits': CustomOption(boolean, help='Calculate logits only for the last token in the initial run of the model.'), + 'limit_graphs': CustomOption(boolean, help='Use hpu graphs only for iterations > 0.'), + 'reuse_cache': CustomOption(boolean, help='Reuse kv-cache memory between prompts.'), + 'kv_cache_fp8': CustomOption(boolean, default=False, help='store kv-cache in float8 when kv-cache is used'), + + 'use_position_ids': CustomOption(boolean, default=True, help='Use position ids in GPT-J'), + 'kv_cache_margin': CustomOption(int, help='Update only last K entries in KV-cache. Requires reuse_cache.'), +} + +MIN_INF = float('-inf') + + +def custom_options(): + return [k for k, v in OPTIONS.items() if v.is_custom] + + +def generate_option_help(): + result = 'Options need to be specified in the form of KV1,KV2,[...] where each KV is either KEY_N=VALUE_N or KEY_N:TYPE_N=VALUE_N. ' + result += '\nKnown options:' + for name, op in OPTIONS.items(): + result = result + '\n ' + op.describe(name) + result += '\nOptions that are not listed above but are supported by HF API can be passed by explicitly specifing their type. For example: penalty_alpha:float=0.5 . Note: this is only supported in "vanilla" and "compatibility" generation modes.' + result += '\nOptions marked as "custom" are only used when running in "optimized" generation mode.' + return result + + +def parse_key_type_value(ktv): + if '=' in ktv: + # Full key/type/value + # key[:type]=value + kt, value = ktv.split('=') + kt = kt.split(':') + name = kt[0] + if len(kt) > 1: + opt_type = kt[1] + assert opt_type in SUPPORTED_TYPES, f'Unsupported type: {opt_type}. Supported types: {list(SUPPORTED_TYPES.keys())}' + opt_type = SUPPORTED_TYPES[opt_type] + else: + assert name in OPTIONS, f'Cannot deduce type! Unknown option:{name}! 
Please specify type or use one of the following options: {list(OPTIONS.keys())}' + opt_type = OPTIONS[name].opt_type + return (name, opt_type(value)) + else: + # Boolean shorthand + # [!]key + if ktv.startswith('!'): + return (ktv[1:], False) + else: + return (ktv, True) + + +def parse_options(string, default_values={}): + if string is None: + return GenerationOptions(default_values) + kvs = [parse_key_type_value(ktv) for ktv in string.split(',')] + return GenerationOptions(default_values=default_values, **dict(kvs)) + + +class GenerationOptions(dict): + def __init__(self, default_values={}, **args): + super().__init__(self, **args) + self.set_defaults(default_values) + + def filter(self, *keywords): + result = GenerationOptions(**self) + for k in keywords: + result.pop(k, None) + return result + + def set_defaults(self, default_values): + for k, v in default_values.items(): + if k not in self: + self[k] = v + for k, v in OPTIONS.items(): + if defined(v.default) and k not in self: + self[k] = v.default + + def __getattr__(self, key): + if key in self.keys(): + return self[key] + return None + + def set(self, key, value): + self[key] = value + + def print(self): + print("Generation options:") + for k, v in sorted(self.items()): + print(' ', f'{k}={v}') + + +def fast_topk(tensor, k, dim): + min_inf = torch.tensor(MIN_INF, dtype=tensor.dtype, device=tensor.device) + best = [] + for i in range(k): + value, index = torch.max(tensor, dim=dim) + best.append((value.unsqueeze(-1), index.unsqueeze(-1))) + if (i + 1 < k): + tensor.scatter_(dim, index.unsqueeze(-1), min_inf.unsqueeze(0).expand(tensor.size(0), 1)) + best_value, best_index = zip(*best) + best_value = torch.cat([b for b in best_value], dim=-1) + best_index = torch.cat([b for b in best_index], dim=-1) + return best_value, best_index + + +if os.environ.get('TOPK_ALGORITHM', 'FAST') == 'NATIVE': + TOPK_IMPL = torch.topk +else: + TOPK_IMPL = fast_topk + + +class SelectionBeam(): + def __init__(self, batch_size, beam_size): + self.batch_size = batch_size + self.beam_size = beam_size + + def __call__(self, logits, eos_token_id): + eos_logits = logits[:, eos_token_id].clone() + logits[:, eos_token_id] = MIN_INF + logits = logits.view(self.batch_size, -1) + topk = TOPK_IMPL(logits, k=self.beam_size, dim=-1) + return (*topk, eos_logits) + + +def get_device(model): + if hasattr(model, 'device'): + return model.device + if hasattr(model, 'module'): + return model.module.device + assert False, 'Cannot extract device!' 
+ return None + + +def is_on_hpu(obj): + return str(get_device(obj)).startswith('hpu') + + +@torch.no_grad() +def generate_on_prepared_input(model, + options, + model_inputs, + max_length, + input_length): + if options.use_cache and options.reuse_cache: + model_inputs['reuse_cache'] = True + bs, _ = model_inputs['input_ids'].shape + unwrap_ds(model).allocate_kv_cache(bs * options.num_beams, max_length, options.kv_cache_fp8) + + device = get_device(model) + model_inputs = move(model_inputs, device) + + initial_ids = model_inputs['input_ids'] + bs = initial_ids.shape[0] + selection_algorithm = SelectionBeam(bs, options.num_beams) + beam_trace = beam_search(model, options, selection_algorithm, max_length, input_length, model_inputs) + return initial_ids.cpu(), move(beam_trace, 'cpu') + + +def calculate_input_padding(input_length, options): + if not options.static_shapes: + return 0 + if defined(options.bucket_width): + return round_up(input_length, options.bucket_width) - input_length + if defined(options.max_input_length): + return options.max_input_length - input_length + assert False, "Running with static_shapes requires setting either 'bucket_width' or 'max_input_length'" + + +def calculate_max_length(input_length, options): + if defined(options.max_new_tokens) and defined(options.bucket_width): + return round_up(input_length + options.max_new_tokens, options.bucket_width) + if defined(options.max_new_tokens) and defined(options.max_input_length): + return options.max_input_length + options.max_new_tokens + if defined(options.max_input_length): + assert options.max_length >= options.max_input_length, \ + f"max_input_length={options.max_input_length} is bigger then max_length={options.max_length}! Either increase max_length or specify max_new_tokens." 
+ return options.max_length + + +def prepare_decoder_only_input_without_moving(pad_token_id, options, model_args): + input_ids = model_args['input_ids'] + attention_mask = model_args['attention_mask'] + + input_ids = input_ids.to(torch.int32) + attention_mask = attention_mask.to(torch.bfloat16) + + input_length = input_ids.shape[-1] + input_padding = calculate_input_padding(input_length, options) + max_length = calculate_max_length(input_length, options) + + if options.static_shapes: + model_args['token_idx'] = torch.tensor(input_length) + if input_padding > 0: + input_ids = F.pad(input_ids, (0, input_padding), value=pad_token_id) + attention_mask = F.pad(attention_mask, (0, input_padding), value=0) + + position_ids = attention_mask.int().cumsum(-1) - 1 + start_end = torch.full((input_ids.shape[0], 2), input_length, dtype=torch.int32) + start_end[:, 0] -= position_ids[:, -1].to(torch.int32) + + attention_mask = (1.0 - attention_mask) * torch.finfo(attention_mask.dtype).min + attention_mask = attention_mask.unsqueeze(1) + + model_args['input_ids'] = input_ids + model_args['attention_mask'] = attention_mask + model_args['position_ids'] = position_ids + model_args['start_end'] = start_end + model_args['use_cache'] = options.use_cache + if options.trim_logits: + model_args['trim_logits'] = True + + return model_args, max_length, input_length + + +def round_up(n, multiple): + return (n + multiple - 1) // multiple * multiple + + +def calc_iterations(input_length, max_length, options): + if defined(options.max_new_tokens): + iterations = options.max_new_tokens + else: + iterations = max_length - input_length + if defined(options.max_iterations): + iterations = min(iterations, options.max_iterations) + return range(max(iterations, 0)) + + +@torch.no_grad() +def beam_search(model, + options, + selection_algorithm, + max_length, + input_length, + model_input): + + if model.config.is_encoder_decoder: + input_ids_key = 'decoder_input_ids' + attention_mask_key = 'decoder_attention_mask' + else: + input_ids_key = 'input_ids' + attention_mask_key = 'attention_mask' + past_key = 'past_key_values' + + input_ids = model_input[input_ids_key] + attention_mask = model_input[attention_mask_key] + + token_idx = model_input.get('token_idx', None) + position_ids = model_input.pop('position_ids') + + MIN_LENGTH = 30 + MAX_LENGTH = 128 + bs = input_ids.shape[0] + beam_scores = torch.zeros((bs,), device=input_ids.device, dtype=torch.float32) + beam_trace_scores = torch.zeros((MAX_LENGTH, bs * options.num_beams), device=input_ids.device, dtype=torch.float32) + beam_trace_indices = torch.zeros((MAX_LENGTH, bs * options.num_beams), device=input_ids.device, dtype=torch.int32) + beam_trace_tokens = torch.zeros((MAX_LENGTH, bs * options.num_beams), device=input_ids.device, dtype=torch.int32) + beam_trace_eos = torch.zeros((MAX_LENGTH, bs * options.num_beams), device=input_ids.device, dtype=torch.float32) + beam_trace_idx = torch.tensor(0, device=input_ids.device) + + total_eos_tokens = torch.zeros((1), device=input_ids.device, dtype=torch.int32).repeat(bs) + max_eos_tokens = torch.tensor(options.num_beams, device=input_ids.device, dtype=torch.int32).repeat(bs) + + model_input['kv_cache_shape'] = (bs * options.num_beams, input_ids.shape[-1]) + + if options.early_stopping: + checks = [None] * options.early_stopping_delay + + start = torch.full([bs], input_length, dtype=torch.int32, device=input_ids.device) + end = torch.full([bs], input_length, dtype=torch.int32, device=input_ids.device) + mul = torch.tensor([[64, 16, 4, 
1]], dtype=torch.int32, device=input_ids.device) + + htcore.mark_step() + + for i in calc_iterations(input_length, max_length, options): + first_step = (i == 0) + + embed_positions = model.transformer.embed_positions.repeat(position_ids.shape[0], 1, 1) + repeated_position_ids = position_ids.unsqueeze(-1).repeat(1, 1, embed_positions.shape[-1]) + sincos = torch.gather(embed_positions, 1, repeated_position_ids) + sin, cos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1) + output_size = 2 * sin.shape[2] + + model_input['sin'] = torch.repeat_interleave(sin, 2, dim=2, output_size=output_size).unsqueeze(2) + model_input['cos'] = torch.repeat_interleave(cos, 2, dim=2, output_size=output_size).unsqueeze(2) + + model_output = model(**model_input) + + logits = model_output['logits'] + if token_idx is None or logits.shape[-2] == 1: + next_token_logits = logits[:, -1, :].unsqueeze(-2) + else: + next_token_logits = logits.index_select(-2, token_idx - 1) + + next_token_logits = next_token_logits.squeeze(-2) + vocab_size = next_token_logits.shape[-1] + + if i < MIN_LENGTH: + next_token_logits[:, model.config.eos_token_id] = MIN_INF + + next_token_logits = F.log_softmax(next_token_logits, dim=-1, dtype=torch.float32) + beam_scores.unsqueeze(-1) + next_token_values, next_token_indices, eos_scores = selection_algorithm(next_token_logits, model.config.eos_token_id) + beam_scores = next_token_values.flatten() + beam_indices = next_token_indices.div(vocab_size, rounding_mode='floor').flatten().to(torch.int32) + beam_tokens = next_token_indices.remainder(vocab_size).flatten().to(torch.int32) + + if first_step: + model_input[past_key] = unwrap_ds(model).reorder_kv_cache_first_token(model_input['kv_cache_shape']) + else: + indices = beam_indices.view(bs, options.num_beams) + indices = torch.sum(indices * mul, axis=-1).to(torch.uint8) + end.add_(1) + model_input[past_key] = unwrap_ds(model).reorder_kv_cache_next_token(start, end, indices, model_input['kv_cache_shape']) + + if options.early_stopping and i >= MIN_LENGTH: + bs_beam_scores = beam_scores.reshape((bs, -1)) + bs_eos_scores = eos_scores.reshape((bs, -1)) + scores = torch.cat([bs_beam_scores, bs_eos_scores], dim=-1) + best_indices = torch.topk(scores, options.num_beams)[1] + eos_tokens = (best_indices >= options.num_beams).sum(dim=-1, dtype=torch.int32) + total_eos_tokens.add_(eos_tokens) + is_finished = (total_eos_tokens >= max_eos_tokens) + end = torch.logical_not(is_finished).to(torch.int32) * end + cur_check_idx = i % options.early_stopping_delay + checks[cur_check_idx] = is_finished.all() + + if first_step: + eos_scores = eos_scores.repeat_interleave(options.num_beams, dim=0, output_size=options.num_beams * bs) + beam_trace_scores.index_copy_(0, beam_trace_idx, beam_scores.unsqueeze(0)) + beam_trace_indices.index_copy_(0, beam_trace_idx, beam_indices.unsqueeze(0)) + beam_trace_tokens.index_copy_(0, beam_trace_idx, beam_tokens.unsqueeze(0)) + beam_trace_eos.index_copy_(0, beam_trace_idx, eos_scores.unsqueeze(0)) + beam_trace_idx.add_(1) + + if first_step: + attention_mask = torch.repeat_interleave( + attention_mask, options.num_beams, dim=0, output_size=options.num_beams * bs + ) + attention_mask.index_fill_(2, token_idx, 0) + + next_tokens = beam_tokens.unsqueeze(-1) + + token_idx.add_(1) + + model_input[input_ids_key] = next_tokens + model_input[attention_mask_key] = attention_mask + + if first_step: + model_input["start_end"] = None + + if first_step: + position_ids = position_ids[:, -1].unsqueeze(-1) + position_ids = 
position_ids.repeat_interleave(options.num_beams, dim=0, output_size=options.num_beams * bs) + else: + position_ids.add_(1) + + if options.early_stopping and i >= MIN_LENGTH: + next_check_idx = (i + 1) % options.early_stopping_delay + all_done = checks[next_check_idx] + if all_done is not None and all_done.cpu().item(): + break + + return (beam_trace_idx, beam_trace_scores, beam_trace_indices, beam_trace_tokens, beam_trace_eos) + + +def finalize_beams(initial_ids, beam_trace, model_config, length_penalty): + beam_trace_idx, beam_trace_scores, beam_trace_indices, beam_trace_tokens, beam_trace_eos = beam_trace + + bs = initial_ids.shape[0] + num_beams = beam_trace_scores.shape[1] // bs + + beam_trace_idx = beam_trace_idx.item() + beam_trace_scores = beam_trace_scores[:beam_trace_idx, :].reshape(beam_trace_idx, bs, -1) + beam_trace_indices = beam_trace_indices[:beam_trace_idx, :].reshape(beam_trace_idx, bs, -1) + beam_trace_tokens = beam_trace_tokens[:beam_trace_idx, :].reshape(beam_trace_idx, bs, -1) + beam_trace_eos = beam_trace_eos[:beam_trace_idx, :].reshape(beam_trace_idx, bs, -1) + + input_lengths = torch.tensor(initial_ids.size(-1)) - torch.eq(initial_ids, model_config.eos_token_id).sum(-1) + + results = [] + for batch in range(bs): + best_score = (False, MIN_INF) + best_beam = 0 + best_step = 0 + total_finished = 0 + for step in range(beam_trace_idx): + #b_len = initial_ids.shape[-1] + step + b_len = input_lengths[batch] + step + p_scores = torch.cat([beam_trace_scores[step, batch], beam_trace_eos[step, batch]]) + scores = p_scores / (b_len ** length_penalty) + top_scores, top_beams = torch.sort(scores, dim=-1, descending=True, stable=True) + # print(batch, step, top_scores.numpy().tolist(), top_beams.numpy().tolist()) + for beam in top_beams[:num_beams]: + beam = beam.item() + finished = beam >= num_beams + score = (finished, scores[beam]) + total_finished += finished + # print("\t", beam, score) + if score > best_score or (not best_score[0] and beam == 0): + best_beam = beam + best_score = score + best_step = step + # print('new best', score, 'vs', best_score) + if total_finished >= num_beams: + break + + idx = best_beam + tokens = [] + for step in range(best_step, -1, -1): + if idx >= num_beams: + tokens.append(model_config.eos_token_id) + idx = idx - num_beams + else: + tokens.append(beam_trace_tokens[step, batch, idx].item()) + idx = beam_trace_indices[step, batch, idx].item() + tokens.reverse() + results.append(tokens) + + max_length = max(len(r) for r in results) + results = [torch.tensor(r) for r in results] + results = torch.cat([expand_if_needed(r, max_length, model_config.pad_token_id).unsqueeze(0) for r in results], dim=0) + results = torch.cat([initial_ids, results], dim=-1) + + return results + + +def map_tensors(obj, fn): + constructor = type(obj) + if isinstance(obj, tuple): + return constructor(map_tensors(v, fn) for v in obj) + if isinstance(obj, list): + return constructor([map_tensors(v, fn) for v in obj]) + if isinstance(obj, dict) or isinstance(obj, UserDict): + return constructor({k: map_tensors(v, fn) for k, v in obj.items()}) + if isinstance(obj, torch.Tensor): + return fn(obj) + return obj + + +def move(obj, device): + return map_tensors(obj, lambda t: t.to(device)) + + +def expand_if_needed(tensor, new_size, value, dim=-1): + orig_len = tensor.shape[dim] + padding_len = new_size - orig_len + if padding_len > 0: + if dim == -1: + return F.pad(tensor, (0, padding_len), value=value) + elif dim == -2: + return F.pad(tensor, (0, 0, 0, padding_len), value=value) + 
else: + assert False, f'Unsupported dim value: {dim}' + return tensor diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/hgu_options.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/hgu_options.py new file mode 100644 index 0000000000000000000000000000000000000000..375c8e47afe3e7a462c866ca0b225d923ea58acd --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/hgu_options.py @@ -0,0 +1,31 @@ +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. an Intel Company +############################################################################### + +import habana_generation_utils as hgu + + +default_options = { + "early_stopping": True, + "early_stopping_delay": 2, # schedule an extra step before checking early_stopping, i.e. schedule-0, skip-check-1, schedule-1, check-0, schedule-0, check-1 + "max_iterations": 128, + "num_beams": 4, + "static_shapes": True, + "use_cache": True, + "use_graphs": True, + "limit_graphs": False, + "use_rolling_position_ids": True, + "reuse_cache": True, + "kv_cache_fp8": False, + "trim_logits": True, + "kv_cache_margin": 129, +} + + +def get_options_dict(options_str: str = None) -> dict: + options = {} + if options_str is not None: + options = dict( + [hgu.parse_key_type_value(ktv) for ktv in options_str.split(',')] + ) + return {**default_options, **options} diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/main.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/main.py new file mode 100644 index 0000000000000000000000000000000000000000..32e4409a038d8180ee0f5779be0a9fe5e23c4ea3 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/main.py @@ -0,0 +1,117 @@ +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. 
an Intel Company +############################################################################### + +import os +os.environ.setdefault('PT_HPU_INFERENCE_MODE', '1') + +import argparse +import mlperf_loadgen as lg +import sys + +sys.path.insert(0, os.getcwd()) + +scenario_map = { + "Offline": lg.TestScenario.Offline, + "Server": lg.TestScenario.Server +} + +def get_args(): + parser = argparse.ArgumentParser() + parser.add_argument("--scenario", choices=["Offline", "Server"], default="Offline", help="Scenario") + parser.add_argument("--model-path", default="/mnt/weka/data/pytorch/gpt-j", help="") + parser.add_argument("--dataset-path", default="/mnt/weka/data/pytorch/gpt-j/cnn_eval.json", help="") + parser.add_argument("--accuracy", action="store_true", help="enable accuracy pass") + parser.add_argument("--dtype", choices=["bfloat16", "float32", "float8"], default="bfloat16", + help="data type of the model, choose from bfloat16, float32 and float8") + parser.add_argument("--device", type=str, choices=["cpu", "cuda", "hpu", "socket"], + default="hpu", help="device to run the inference on") + parser.add_argument("--mlperf_conf", default="mlperf.conf", help="mlperf rules config") + parser.add_argument("--user_conf", default="user.conf", + help="user config for user LoadGen settings such as target QPS") + parser.add_argument("--max_examples", type=int, default=13368, + help="Maximum number of examples to consider (not limited by default)") + parser.add_argument("--num_workers", type=int, default=1) + parser.add_argument("--batch_size", type=int, default=12) + parser.add_argument("--quantization_file", "-qf", type=str, + help="Read quantization configuration from a file") + parser.add_argument("--log_path", default="build/logs") + parser.add_argument("--options", type=str, default='', + help="Coma-seperated list of options used in generation") + parser.add_argument("--profile", action='store_true', help="Enable profiling") + parser.add_argument("--profile_type", type=str, choices=["tb", "hltv"], default='tb', help="Profiling format") + parser.add_argument("--profile_tokens", type=int, default=5, help="Number of tokens to profile") + parser.add_argument("--help_options", action="store_true", help="Show detailed option help") + parser.add_argument("--fake_device", action='store_true', help="Enable dummy device with estimated delay") + parser.add_argument("--fake_dataset", action='store_true', help="Enable dummy dataset") + parser.add_argument("--stdout", action="store_true", help="Print logs to stdout instead of a file") + parser.add_argument('--enable-tensorboard-logging', action='store_true') + parser.add_argument('--eager', action='store_true') + args = parser.parse_args() + return args + + +def main(): + args = get_args() + if args.eager: + os.environ['PT_HPU_LAZY_MODE'] = '0' + + # These imports need to be placed after setting PT_HPU_LAZY_MODE=0 when we're running eager mode + from hgu_options import get_options_dict + import habana_generation_utils as hgu + + if args.num_workers != 1: + assert args.device != 'hpu', "In order to run more than 1 worker, you need to set device to 'socket'" + if args.help_options is True: + print(hgu.generate_option_help()) + sys.exit(0) + + if args.scenario == "Offline": + if args.device == "socket": + from socket_backend import SUT_Offline + sut = SUT_Offline(args) + else: + from backend import SUT_Offline + options = get_options_dict(args.options) + sut = SUT_Offline(args, options) + else: + if args.device == "socket": + from socket_backend import SUT_Server + 
sut = SUT_Server(args) + else: + from backend import SUT_Server + options = get_options_dict(args.options) + sut = SUT_Server(args, options) + + settings = lg.TestSettings() + settings.scenario = scenario_map[args.scenario] + # Need to update the conf + settings.FromConfig(args.mlperf_conf, "gptj", args.scenario) + settings.FromConfig(args.user_conf, "gptj", args.scenario) + + if args.accuracy: + settings.mode = lg.TestMode.AccuracyOnly + else: + settings.mode = lg.TestMode.PerformanceOnly + os.makedirs(args.log_path, exist_ok=True) + log_output_settings = lg.LogOutputSettings() + log_output_settings.outdir = args.log_path + log_output_settings.copy_summary_to_stdout = True + log_settings = lg.LogSettings() + log_settings.log_output = log_output_settings + log_settings.enable_trace = True + + lg.StartTestWithLogSettings(sut.sut, sut.qsl, settings, log_settings) + + print("Test Done!") + + print("Destroying SUT...") + sut.close_log_file() + lg.DestroySUT(sut.sut) + + print("Destroying QSL...") + lg.DestroyQSL(sut.qsl) + + +if __name__ == "__main__": + main() diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/mlperf.conf b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/mlperf.conf new file mode 100644 index 0000000000000000000000000000000000000000..82b896fb9dd432d6b2d701a10f3c5c245257c532 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/mlperf.conf @@ -0,0 +1,64 @@ +# The format of this config file is 'key = value'. +# The key has the format 'model.scenario.key'. Value is mostly int64_t. +# Model maybe '*' as wildcard. In that case the value applies to all models. +# All times are in milli seconds + +# Set performance_sample_count for each model. +# User can optionally set this to higher values in user.conf. +resnet50.*.performance_sample_count_override = 1024 +ssd-mobilenet.*.performance_sample_count_override = 256 +retinanet.*.performance_sample_count_override = 64 +bert.*.performance_sample_count_override = 10833 +dlrm.*.performance_sample_count_override = 204800 +rnnt.*.performance_sample_count_override = 2513 +# set to 0 to let entire sample set to be performance sample +3d-unet.*.performance_sample_count_override = 0 + +# Set seeds. The seeds will be distributed two weeks before the submission. +*.*.qsl_rng_seed = 148687905518835231 +*.*.sample_index_rng_seed = 520418551913322573 +*.*.schedule_rng_seed = 811580660758947900 +# Set seeds for TEST_05. The seeds will be distributed two weeks before the submission. 
+*.*.test05_qsl_rng_seed = 793197339507417767 +*.*.test05_sample_index_rng_seed = 255610748586851044 +*.*.test05_schedule_rng_seed = 352213341366340113 + + +*.SingleStream.target_latency_percentile = 90 +*.SingleStream.min_duration = 600000 +#*.SingleStream.min_query_count = 1024 + +*.MultiStream.target_latency_percentile = 99 +*.MultiStream.samples_per_query = 8 +*.MultiStream.min_duration = 600000 +#*.MultiStream.min_query_count = 270336 +*.MultiStream.min_query_count = 662 +retinanet.MultiStream.target_latency = 528 + +# 3D-UNet uses equal issue mode +3d-unet.*.sample_concatenate_permutation = 1 + +*.Server.target_latency = 10 +*.Server.target_latency_percentile = 99 +*.Server.target_duration = 0 +*.Server.min_duration = 600000 +#*.Server.min_query_count = 270336 +resnet50.Server.target_latency = 15 +retinanet.Server.target_latency = 100 +bert.Server.target_latency = 130 +dlrm.Server.target_latency = 60 +rnnt.Server.target_latency = 1000 +gptj.Server.target_latency = 20000 + +*.Offline.target_latency_percentile = 90 +*.Offline.min_duration = 600000 +# In Offline scenario, we always have one query. But LoadGen maps this to +# min_sample_count internally in Offline scenario, so set this to 24576 since +# the rule requires that Offline scenario run for at least 24576 samples. +*.Offline.min_query_count = 24576 + +# These fields should be defined and overridden by user.conf. +*.SingleStream.target_latency = 10 +*.MultiStream.target_latency = 80 +*.Server.target_qps = 1.0 +*.Offline.target_qps = 1.0 \ No newline at end of file diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/modeling_gptj.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/modeling_gptj.py new file mode 100644 index 0000000000000000000000000000000000000000..18ac34b6c8d2fb07e5438213743efda0b3fef63e --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/modeling_gptj.py @@ -0,0 +1,782 @@ +# coding=utf-8 +# Copyright 2021 The EleutherAI and HuggingFace Teams. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. 
an Intel Company +############################################################################### +# Changes: +# - remove dead code (functions and tensors unused in MLPerf GPT-J benchmark) +# - remove training support +# - remove float16 support +# - remove device-parallelism support +# - use apply_rotary_pos_emb kernel on HPU +# - remove duplicated operations (for example: calculate sin and cos only once) +# - reshape tensors from 4D to 3D for better performance +# - use optimized softmax +# - adjust the code to HPU graphs +# - use optimized kernels for KV cache reorder +# - introduce support for fp8 KV cache +# - remove unnecessary int64 usage (use int32 or bfloat16) + +from typing import Optional, Tuple, Union +import numpy as np + +import torch +import torch.fx +import torch.utils.checkpoint +from torch import nn + +try: + from habana_frameworks.torch.hpex.kernels import apply_rotary_pos_emb as apply_rotary_pos_emb_hpu + from habana_frameworks.torch.hpex.kernels import RotaryPosEmbeddingMode +except ImportError: + print("Not using HPU kernel for apply_rotary_pos_emb") + apply_rotary_pos_emb_hpu = None + +from habana_frameworks.torch.hpex.kernels import CustomSoftmax as FastSoftmax + +try: + in_place_interleave_hpu = torch.ops.hpu.in_place_interleave_ +except AttributeError: + print(f"Not using HPU kernel for in_place_interleave_") + in_place_interleave_hpu = None + +__package__ = 'transformers.models.gptj' + +from ...activations import ACT2FN +from ...modeling_outputs import ( + BaseModelOutputWithPast, + CausalLMOutputWithPast, +) +from ...modeling_utils import PreTrainedModel +from .configuration_gptj import GPTJConfig + + + +def create_sinusoidal_positions(num_pos: int, dim: int) -> torch.Tensor: + inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim)) + sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(num_pos, dtype=torch.float), inv_freq).float() + return torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1) + + +def rotate_every_two(x: torch.Tensor) -> torch.Tensor: + x1 = x[:, :, :, ::2] + x2 = x[:, :, :, 1::2] + x = torch.stack((-x2, x1), dim=-1) + return x.flatten(-2) # in einsum notation: rearrange(x, '... d j -> ... (d j)') + + +def apply_rotary_pos_emb(tensor: torch.Tensor, sin: torch.Tensor, cos: torch.Tensor) -> torch.Tensor: + if apply_rotary_pos_emb_hpu is None: + return (tensor * cos) + (rotate_every_two(tensor) * sin) + else: + return apply_rotary_pos_emb_hpu(tensor, cos, sin, None, 0, RotaryPosEmbeddingMode.PAIRWISE) + +class Matmul(nn.Module): + def __init__(self): + super().__init__() + + def forward(self, x, y): + return torch.matmul(x, y) + +class BatchMatmul(nn.Module): + def __init__(self): + super().__init__() + + def forward(self, x, y): + return torch.bmm(x,y) + +class CacheUpdateFp8(nn.Module): + def __init__(self): + super().__init__() + + def forward(self, prev, cur, dim, idx): + orig_cur = cur + cur_fp8, amax = torch.ops.hpu.cast_to_fp8_v2(cur,None,False, False) + if prev.shape[0] != cur_fp8.shape[0]: + assert prev.shape[0] % cur_fp8.shape[0] == 0, f'Cannot update kv-cache. BatchSize changed! {prev.shape[0]} vs {cur_fp8.shape[0]}' + # Repeat to accomodate bs/beam changes + repeats = (prev.shape[0] // cur_fp8.shape[0], 1, 1, 1) + cur_fp8 = torch.ops.hpu.fp8_repeat_v2(cur_fp8, repeats) + assert prev.shape == cur_fp8.shape, f'Cannot update kv-cache. BatchSize changed! 
{prev.shape[0]} vs {cur_fp8.shape[0]}' + # Initialize + torch.ops.hpu.fp8_copy_(prev, cur_fp8) + return orig_cur + else: + assert cur_fp8.shape[2] == 1, f'Cannot update kv-cache. Unsupported shapes. prev:{prev.shape} cur:{cur_fp8.shape}' + torch.ops.hpu.fp8_index_copy_(prev, dim, idx - 1, cur_fp8) + prev_bf16 = torch.ops.hpu.cast_from_fp8(prev, None, cur.dtype) + return prev_bf16 + +class CacheUpdate(nn.Module): + def __init__(self): + super().__init__() + + def forward(self, prev, cur, dim, idx): + orig_cur = cur + if prev.shape[0] != cur.shape[0]: + assert prev.shape[0] % cur.shape[0] == 0, f'Cannot update kv-cache. BatchSize changed! {prev.shape[0]} vs {cur.shape[0]}' + # Repeat to accomodate bs/beam changes + cur = cur.repeat(prev.shape[0] // cur.shape[0], 1, 1, 1) + assert prev.shape == cur.shape, f'Cannot update kv-cache. BatchSize changed! {prev.shape[0]} vs {cur.shape[0]}' + # Initialize + prev.copy_(cur) + return orig_cur + else: + assert cur.shape[2] == 1, f'Cannot update kv-cache. Unsupported shapes. prev:{prev.shape} cur:{cur.shape}' + return prev.index_copy_(dim, idx - 1, cur) + + +class GPTJAttention(nn.Module): + def __init__(self, config): + super().__init__() + self.matmul_qk = BatchMatmul() + self.matmul_av = Matmul() + + self.attn_dropout = nn.Dropout(config.attn_pdrop) + self.resid_dropout = nn.Dropout(config.resid_pdrop) + + self.past_key = {} + self.past_value = {} + self.kv_cache_fp8 = False + self.v_update = CacheUpdate() + self.k_update = CacheUpdate() + + self.embed_dim = config.hidden_size + self.num_attention_heads = config.num_attention_heads + self.head_dim = self.embed_dim // self.num_attention_heads + if self.head_dim * self.num_attention_heads != self.embed_dim: + raise ValueError( + f"embed_dim must be divisible by num_attention_heads (got `embed_dim`: {self.embed_dim} and" + f" `num_attention_heads`: {self.num_attention_heads})." 
+ ) + self.register_buffer("inv_scale_attn", + torch.rsqrt(torch.tensor(self.head_dim, dtype=torch.float32)).to(torch.get_default_dtype()), + persistent=False) + self.inv_scale_attn_scalar = 1.0 / np.sqrt(self.head_dim) + + self.k_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) + self.v_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) + self.q_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) + self.out_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) + self.rotary_dim = config.rotary_dim + + def _split_heads(self, tensor, num_attention_heads, attn_head_size, rotary): + """ + Splits hidden dim into attn_head_size and num_attention_heads + """ + new_shape = tensor.size()[:-1] + (num_attention_heads, attn_head_size) + tensor = tensor.view(new_shape) + if rotary: + return tensor + if len(tensor.shape) == 5: + return tensor.permute(0, 1, 3, 2, 4) # (batch, blocks, head, block_length, head_features) + elif len(tensor.shape) == 4: + return tensor.permute(0, 2, 1, 3) # (batch, head, seq_length, head_features) + else: + raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(tensor.shape)}") + + def _merge_heads(self, tensor, num_attention_heads, attn_head_size): + """ + Merges attn_head_size dim and num_attn_heads dim into hidden dim + """ + if len(tensor.shape) == 5: + tensor = tensor.permute(0, 1, 3, 2, 4).contiguous() + elif len(tensor.shape) == 4: + tensor = tensor.permute(0, 2, 1, 3).contiguous() + else: + raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(tensor.shape)}") + new_shape = tensor.size()[:-2] + (num_attention_heads * attn_head_size,) + return tensor.view(new_shape) + + def _attn( + self, + query, + key, + value, + attention_mask=None, + start_end=None, + ): + batch_size, query_len, key_len = query.shape[0], query.shape[-2], key.shape[-2] + + # Reshape to 3D tensors + query = query.reshape((batch_size * self.num_attention_heads, query_len, self.head_dim)) + key = key.reshape((batch_size * self.num_attention_heads, key_len, self.head_dim)) + value = value.reshape((batch_size * self.num_attention_heads, key_len, self.head_dim)) + + attn_weights = self.matmul_qk(query, key.transpose(-1, -2)) + + if query_len == 1: + # next token + attn_weights = attn_weights * self.inv_scale_attn + attn_weights = attn_weights + attention_mask + + attn_weights = FastSoftmax.apply(attn_weights, 2) # optimized softmax (no LUTs) + + else: + # first token + attn_weights = torch.ops.hpu.scaled_masked_triangular_softmax( + attn_weights, + start_end, + self.inv_scale_attn_scalar, + self.num_attention_heads, + False, # don't use max + 1 # optimized softmax (no LUTs) + ) + + attn_output = self.matmul_av(attn_weights, value) + + # Reshape back to 4D tensors + attn_output = attn_output.reshape((batch_size, self.num_attention_heads) + attn_output.shape[1:]) + attn_weights = attn_weights.reshape((batch_size, self.num_attention_heads) + attn_weights.shape[1:]) + + return attn_output, attn_weights + + + def allocate_kv_cache(self, batch_size, seq_len, kv_cache_fp8): + if (batch_size, seq_len) not in self.past_key.keys(): + device = self.k_proj.weight.device + dtype = self.k_proj.weight.dtype + shape = (batch_size, self.num_attention_heads, seq_len, self.head_dim) + past_key = torch.empty(shape, dtype=dtype, device=device) + past_value = torch.empty(shape, dtype=dtype, device=device) + if kv_cache_fp8: + self.kv_cache_fp8 = True + self.past_value[(batch_size, seq_len)], amax = torch.ops.hpu.cast_to_fp8_v2(past_value, None, 
False, False) + self.past_key[(batch_size, seq_len)], amax = torch.ops.hpu.cast_to_fp8_v2(past_key, None, False, False) + self.v_update = CacheUpdateFp8() + self.k_update = CacheUpdateFp8() + + import habana_frameworks.torch.core as htcore + htcore.mark_step() + else: + self.past_key[(batch_size, seq_len)] = past_key + self.past_value[(batch_size, seq_len)] = past_value + + def reorder_first_token(self, tensor): + if in_place_interleave_hpu is not None: + in_place_interleave_hpu(tensor) + else: + shape = tensor.shape + l = [] + NUM_BEAMS=4 + for i in range(shape[0] // NUM_BEAMS): + val = tensor[i, :, :, :].clone() + for i in range(NUM_BEAMS): + l.append(val) + updated = torch.cat(l, 0) + updated = torch.reshape(updated, shape) + tensor.copy_(updated) + + def reorder_kv_cache_first_token(self, kv_cache_shape): + if self.past_key is None or kv_cache_shape not in self.past_key.keys(): + return (None, None) + + self.reorder_first_token(self.past_key[kv_cache_shape]) + self.reorder_first_token(self.past_value[kv_cache_shape]) + + return (self.past_key[kv_cache_shape].shape, self.past_value[kv_cache_shape].shape) + + def reorder_kv_cache_next_token(self, start, end, beam_idx, kv_cache_shape): + if self.past_key is None or kv_cache_shape not in self.past_key.keys(): + return (None, None) + + if self.kv_cache_fp8: + torch.ops.hpu.fp8_kv_reorder_(self.past_key[kv_cache_shape], start, end, beam_idx) + torch.ops.hpu.fp8_kv_reorder_(self.past_value[kv_cache_shape], start, end, beam_idx) + else: + torch.ops.hpu.kv_reorder_(self.past_key[kv_cache_shape], start, end, beam_idx) + torch.ops.hpu.kv_reorder_(self.past_value[kv_cache_shape], start, end, beam_idx) + + return (self.past_key[kv_cache_shape].shape, self.past_value[kv_cache_shape].shape) + + def forward( + self, + hidden_states: torch.FloatTensor, + layer_past: Optional[Tuple[torch.Tensor]] = None, + attention_mask: Optional[torch.FloatTensor] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + token_idx: Optional[torch.Tensor] = None, + reuse_cache: Optional[bool] = False, + kv_cache_shape: Tuple[int, int] = None, + sin: Optional[torch.Tensor] = None, + cos: Optional[torch.Tensor] = None, + start_end: Optional[torch.Tensor] = None, + ) -> Union[ + Tuple[torch.Tensor, Tuple[torch.Tensor]], + Optional[Tuple[torch.Tensor, Tuple[torch.Tensor], Tuple[torch.Tensor, ...]]], + ]: + query = self.q_proj(hidden_states) + key = self.k_proj(hidden_states) + value = self.v_proj(hidden_states) + + query = self._split_heads(query, self.num_attention_heads, self.head_dim, True) + key = self._split_heads(key, self.num_attention_heads, self.head_dim, True) + value = self._split_heads(value, self.num_attention_heads, self.head_dim, False) + + k_rot = key[:, :, :, : self.rotary_dim] + k_pass = key[:, :, :, self.rotary_dim :] + + q_rot = query[:, :, :, : self.rotary_dim] + q_pass = query[:, :, :, self.rotary_dim :] + + k_rot = apply_rotary_pos_emb(k_rot, sin, cos) + q_rot = apply_rotary_pos_emb(q_rot, sin, cos) + + key = torch.cat([k_rot, k_pass], dim=-1) + query = torch.cat([q_rot, q_pass], dim=-1) + + key = key.permute(0, 2, 1, 3) + query = query.permute(0, 2, 1, 3) + + if layer_past is not None or reuse_cache: + if reuse_cache: + past_key, past_value = self.past_key[kv_cache_shape], self.past_value[kv_cache_shape] + else: + past_key, past_value = layer_past + + key = self.k_update(past_key, key, -2, token_idx) + value = self.v_update(past_value, value, -2, token_idx) + + if use_cache is True: + if reuse_cache: + present = 
(key.shape, value.shape) + else: + present = (key, value) + else: + present = None + + # compute self-attention: V x Softmax(QK^T) + attn_output, attn_weights = self._attn(query, key, value, attention_mask, start_end) + + attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_dim) + attn_output = self.out_proj(attn_output) + attn_output = self.resid_dropout(attn_output) + + outputs = (attn_output, present) + if output_attentions: + outputs += (attn_weights,) + + return outputs # a, present, (attentions) + + +class GPTJMLP(nn.Module): + def __init__(self, intermediate_size, config): # in MLP: intermediate_size= 4 * embed_dim + super().__init__() + embed_dim = config.n_embd + + self.fc_in = nn.Linear(embed_dim, intermediate_size) + self.fc_out = nn.Linear(intermediate_size, embed_dim) + + self.act = ACT2FN["quick_gelu"] + self.dropout = nn.Dropout(config.resid_pdrop) + + def forward(self, hidden_states: Optional[torch.FloatTensor]) -> torch.FloatTensor: + hidden_states = self.fc_in(hidden_states) + hidden_states = self.act(hidden_states) + hidden_states = self.fc_out(hidden_states) + hidden_states = self.dropout(hidden_states) + return hidden_states + + +class GPTJBlock(nn.Module): + def __init__(self, config): + super().__init__() + inner_dim = config.n_inner if config.n_inner is not None else 4 * config.n_embd + self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon) + self.attn = GPTJAttention(config) + self.mlp = GPTJMLP(inner_dim, config) + + def allocate_kv_cache(self, batch_size, seq_len, kv_cache_fp8): + self.attn.allocate_kv_cache(batch_size, seq_len, kv_cache_fp8) + + def reorder_kv_cache_first_token(self, kv_cache_shape): + return self.attn.reorder_kv_cache_first_token(kv_cache_shape) + + def reorder_kv_cache_next_token(self, start, end, beam_idx, kv_cache_shape): + return self.attn.reorder_kv_cache_next_token(start, end, beam_idx, kv_cache_shape) + + def forward( + self, + hidden_states: Optional[torch.FloatTensor], + layer_past: Optional[Tuple[torch.Tensor]] = None, + attention_mask: Optional[torch.FloatTensor] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + token_idx: Optional[torch.Tensor] = None, + reuse_cache: Optional[bool] = None, + kv_cache_shape: Tuple[int, int] = None, + sin: Optional[torch.Tensor] = None, + cos: Optional[torch.Tensor] = None, + start_end: Optional[torch.Tensor] = None, + ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]: + residual = hidden_states + hidden_states = self.ln_1(hidden_states) + attn_outputs = self.attn( + hidden_states=hidden_states, + layer_past=layer_past, + attention_mask=attention_mask, + use_cache=use_cache, + output_attentions=output_attentions, + token_idx=token_idx, + reuse_cache=reuse_cache, + kv_cache_shape=kv_cache_shape, + sin=sin, + cos=cos, + start_end=start_end, + ) + attn_output = attn_outputs[0] # output_attn: a, present, (attentions) + outputs = attn_outputs[1:] + + feed_forward_hidden_states = self.mlp(hidden_states) + hidden_states = attn_output + feed_forward_hidden_states + residual + + if use_cache: + outputs = (hidden_states,) + outputs + else: + outputs = (hidden_states,) + outputs[1:] + + return outputs # hidden_states, present, (attentions) + + +class GPTJPreTrainedModel(PreTrainedModel): + """ + An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained + models. 
+ """ + + config_class = GPTJConfig + base_model_prefix = "transformer" + is_parallelizable = True + _no_split_modules = ["GPTJBlock"] + _skip_keys_device_placement = "past_key_values" + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights(self, module): + """Initialize the weights.""" + if isinstance(module, (nn.Linear,)): + # Slightly different from Mesh Transformer JAX which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + module.bias.data.zero_() + elif isinstance(module, nn.Embedding): + module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + elif isinstance(module, nn.LayerNorm): + module.bias.data.zero_() + module.weight.data.fill_(1.0) + + +class GPTJModel(GPTJPreTrainedModel): + config_class = GPTJConfig + base_model_prefix = "transformer" + is_parallelizable = True + _no_split_modules = ["GPTJBlock"] + _skip_keys_device_placement = "past_key_values" + + def __init__(self, config): + super().__init__(config) + + self.embed_dim = config.n_embd + self.vocab_size = config.vocab_size + self.wte = nn.Embedding(config.vocab_size, self.embed_dim, dtype=torch.bfloat16) + self.drop = nn.Dropout(config.embd_pdrop) + self.h = nn.ModuleList([GPTJBlock(config) for _ in range(config.n_layer)]) + self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) + self.register_buffer("embed_positions", + create_sinusoidal_positions(self.config.max_position_embeddings, self.config.rotary_dim), + persistent=False) + # Initialize weights and apply final processing + self.post_init() + + def _init_weights(self, module): + """Initialize the weights.""" + if isinstance(module, (nn.Linear,)): + # Slightly different from Mesh Transformer JAX which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + module.bias.data.zero_() + elif isinstance(module, nn.Embedding): + module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + elif isinstance(module, nn.LayerNorm): + module.bias.data.zero_() + module.weight.data.fill_(1.0) + + def get_input_embeddings(self): + return self.wte + + def set_input_embeddings(self, new_embeddings): + self.wte = new_embeddings + + def allocate_kv_cache(self, batch_size, seq_len, kv_cache_fp8): + for layer in self.h: + layer.allocate_kv_cache(batch_size, seq_len, kv_cache_fp8) + + def reorder_kv_cache_first_token(self, kv_cache_shape): + return tuple(layer.reorder_kv_cache_first_token(kv_cache_shape) for layer in self.h) + + def reorder_kv_cache_next_token(self, start, end, beam_idx, kv_cache_shape): + return tuple(layer.reorder_kv_cache_next_token(start, end, beam_idx, kv_cache_shape) for layer in self.h) + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, + attention_mask: Optional[torch.FloatTensor] = None, + token_type_ids: Optional[torch.LongTensor] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + 
return_dict: Optional[bool] = None, + token_idx: Optional[torch.Tensor] = None, + reuse_cache: Optional[bool] = None, + kv_cache_shape: Tuple[int, int] = None, + sin: Optional[torch.Tensor] = None, + cos: Optional[torch.Tensor] = None, + start_end: Optional[torch.Tensor] = None, + ) -> Union[Tuple, BaseModelOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + use_cache = use_cache if use_cache is not None else self.config.use_cache + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + input_shape = input_ids.size() + input_ids = input_ids.view(-1, input_shape[-1]) + batch_size = input_ids.shape[0] + elif inputs_embeds is not None: + input_shape = inputs_embeds.size()[:-1] + batch_size = inputs_embeds.shape[0] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if token_type_ids is not None: + token_type_ids = token_type_ids.view(-1, input_shape[-1]) + + if past_key_values is None: + past_key_values = tuple([None] * len(self.h)) + + # Attention mask. + if attention_mask is not None: + # TODO: try get value from GPTJAttention + num_attention_heads = 16 + attention_mask = torch.repeat_interleave( + attention_mask, num_attention_heads, 0, output_size=num_attention_heads*batch_size) + + if inputs_embeds is None: + inputs_embeds = self.wte(input_ids) + + hidden_states = inputs_embeds + + if token_type_ids is not None: + token_type_embeds = self.wte(token_type_ids) + hidden_states = hidden_states + token_type_embeds + + hidden_states = self.drop(hidden_states) + + output_shape = input_shape + (hidden_states.size(-1),) + + presents = () if use_cache else None + all_self_attentions = () if output_attentions else None + all_hidden_states = () if output_hidden_states else None + for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): + if output_hidden_states: + all_hidden_states = all_hidden_states + (hidden_states,) + + outputs = block( + hidden_states=hidden_states, + layer_past=layer_past, + attention_mask=attention_mask, + use_cache=use_cache, + output_attentions=output_attentions, + token_idx=token_idx, + reuse_cache=reuse_cache, + kv_cache_shape=kv_cache_shape, + sin=sin, + cos=cos, + start_end=start_end, + ) + + hidden_states = outputs[0] + if use_cache is True: + presents = presents + (outputs[1],) + + if output_attentions: + all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],) + + hidden_states = self.ln_f(hidden_states) + + hidden_states = hidden_states.view(output_shape) + # Add last hidden state + if output_hidden_states: + all_hidden_states = all_hidden_states + (hidden_states,) + + if not return_dict: + return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None) + + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=presents, + hidden_states=all_hidden_states, + attentions=all_self_attentions, + ) + + +class GPTJForCausalLM(GPTJPreTrainedModel): + _keys_to_ignore_on_load_unexpected = [r"h\.\d+\.attn\.masked_bias", r"h\.\d+\.attn\.bias"] + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + 
super().__init__(config) + self.transformer = GPTJModel(config) + self.lm_head = nn.Linear(config.n_embd, config.vocab_size) + + # Initialize weights and apply final processing + self.post_init() + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs): + token_type_ids = kwargs.get("token_type_ids", None) + # only last token for inputs_ids if past is defined in kwargs + if past_key_values: + input_ids = input_ids[:, -1].unsqueeze(-1) + if token_type_ids is not None: + token_type_ids = token_type_ids[:, -1].unsqueeze(-1) + + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {"inputs_embeds": inputs_embeds} + else: + model_inputs = {"input_ids": input_ids} + + model_inputs.update( + { + "past_key_values": past_key_values, + "use_cache": kwargs.get("use_cache"), + "token_type_ids": token_type_ids, + } + ) + + return model_inputs + + def allocate_kv_cache(self, batch_size, seq_len, kv_cache_fp8): + self.transformer.allocate_kv_cache(batch_size, seq_len, kv_cache_fp8) + + def reorder_kv_cache_first_token(self, kv_cache_shape): + return self.transformer.reorder_kv_cache_first_token(kv_cache_shape) + + def reorder_kv_cache_next_token(self, start, end, beam_idx, kv_cache_shape): + return self.transformer.reorder_kv_cache_next_token(start, end, beam_idx, kv_cache_shape) + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, + attention_mask: Optional[torch.FloatTensor] = None, + token_type_ids: Optional[torch.LongTensor] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + token_idx: Optional[torch.Tensor] = None, + reuse_cache: Optional[bool] = None, + trim_logits: Optional[bool] = None, + kv_cache_shape: Tuple[int, int] = None, + sin: Optional[torch.Tensor] = None, + cos: Optional[torch.Tensor] = None, + start_end: Optional[torch.Tensor] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + r""" + labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): + Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set + `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` + are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` + """ + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + transformer_outputs = self.transformer( + input_ids, + past_key_values=past_key_values, + attention_mask=attention_mask, + token_type_ids=token_type_ids, + inputs_embeds=inputs_embeds, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict, + token_idx=token_idx, + reuse_cache=reuse_cache, + kv_cache_shape=kv_cache_shape, + sin=sin, + cos=cos, + start_end=start_end, + ) + hidden_states = transformer_outputs[0] + _, seq_len, _ = hidden_states.shape + if seq_len > 1 and trim_logits: + if token_idx is not None: + hidden_states = hidden_states.index_select(1, token_idx - 1) + else: + hidden_states = hidden_states[:, -1, :] + + # make sure sampling in fp16 works correctly and + # compute loss in fp32 to match with mesh-tf version + # https://github.com/EleutherAI/gpt-neo/blob/89ce74164da2fb16179106f54e2269b5da8db333/models/gpt2/gpt2.py#L179 + lm_logits = self.lm_head(hidden_states) + + if not return_dict: + output = (lm_logits,) + transformer_outputs[1:] + return output + + return CausalLMOutputWithPast( + logits=lm_logits, + past_key_values=transformer_outputs.past_key_values, + hidden_states=transformer_outputs.hidden_states, + attentions=transformer_outputs.attentions, + ) + + @staticmethod + def _reorder_cache( + past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor + ) -> Tuple[Tuple[torch.Tensor]]: + """ + This function is used to re-order the `past_key_values` cache if [`~PretrainedModel.beam_search`] or + [`~PretrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct + beam_idx at every generation step. 
+ """ + return tuple( + tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) + for layer_past in past_key_values + ) + diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/prepare-calibration.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/prepare-calibration.py new file mode 100644 index 0000000000000000000000000000000000000000..846e492d380e72d6aa778fd5db36c1cf7ec2a6d6 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/prepare-calibration.py @@ -0,0 +1,59 @@ +import os +import sys +import json +from argparse import ArgumentParser +from datasets import load_dataset + +def get_args(): + parser = ArgumentParser() + parser.add_argument("--calibration-list-file", required=True, help="Path to calibration list") + parser.add_argument("--output-dir", help="Output directory", default="calibration-data") + + return parser.parse_args() + +dataset_id='cnn_dailymail' +version='3.0.0' +split='train' + +instruction_template="Summarize the following news article:" + +def check_path(path): + return os.path.exists(path) + +def prepare_calibration_data(calibration_list_file, output_dir): + if not check_path(calibration_list_file): + print("Calibration list file not found: {}".format(calibration_list_file)) + sys.exit(1) + + dataset = load_dataset("cnn_dailymail", name="3.0.0", split='train') + train = dict((x['id'], x) for x in dataset) + + + with open(calibration_list_file, 'r') as fid: + calibration_ids = fid.read().splitlines() + + inputs = [] + for id in calibration_ids: + calibration_sample = train[id] + x = dict() + x["instruction"] = instruction_template + x["input"] = calibration_sample["article"] + x["output"] = calibration_sample["highlights"] + inputs.append(x) + + if not os.path.isdir(output_dir): + os.makedirs(output_dir) + + output_path = os.path.join(output_dir,"cnn_dailymail_calibration.json") + with open(output_path, 'w') as write_f: + json.dump(inputs, write_f, indent=4, ensure_ascii=False) + + print("Calibration data saved at {}".format(output_path)) + +def main(): + + args = get_args() + prepare_calibration_data(args.calibration_list_file, args.output_dir) + +if __name__=="__main__": + main() diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/quantization/configuration/config.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/quantization/configuration/config.py new file mode 100644 index 0000000000000000000000000000000000000000..ea3e653b1dce9a94ed58fbe0c85b461e9be5901e --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/quantization/configuration/config.py @@ -0,0 +1,69 @@ +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. 
an Intel Company +############################################################################### + +import json +from os import path + + +# Configuration Aux strings +class CFGS: + ON = "on" + OFF = "off" + QUANTIZATION = "quantization" + MEASUREMENTS_PATH = "measurements_path" + BACKOFF_FACTOR = "backoff_factor" + +# QuantConfig class +class QuantConfig: + def __init__(self): + self._quantization_enabled = False + self._measurements_path = "" + self._backoff_factor = 1.0 + + + @property + def quantization_enabled(self): + return self._quantization_enabled + + @quantization_enabled.setter + def quantization_enabled(self, val): + self._quantization_enabled = val + + @property + def measurements_path(self): + return self._measurements_path + + @measurements_path.setter + def measurements_path(self, path): + self._measurements_path = path + + @property + def backoff_factor(self): + return self._backoff_factor + + @backoff_factor.setter + def backoff_factor(self, bo_factor): + self._backoff_factor = bo_factor + + +def parse_quant_config(json_file_path : str) -> QuantConfig: + quant_config = QuantConfig() + if not path.isfile(json_file_path): + print("Quantization configuration file not found. Path - {}".format( + json_file_path)) + else: + with open(json_file_path, 'r') as f: + quant_cfg_json = json.load(f) + if CFGS.QUANTIZATION in quant_cfg_json and quant_cfg_json[CFGS.QUANTIZATION] == CFGS.ON: + quant_config.quantization_enabled = True + if CFGS.BACKOFF_FACTOR in quant_cfg_json: + quant_config.backoff_factor = quant_cfg_json[CFGS.BACKOFF_FACTOR] + if CFGS.MEASUREMENTS_PATH in quant_cfg_json: + measurements_path = quant_cfg_json[CFGS.MEASUREMENTS_PATH] + if '$' in measurements_path : + print("Env var detected in path, expanding it") + measurements_path = path.expandvars(measurements_path) + quant_config.measurements_path = measurements_path + + return quant_config \ No newline at end of file diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/socket_worker.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/socket_worker.py new file mode 100644 index 0000000000000000000000000000000000000000..96317b7895e2255c9ffe7deed727aa9b4a74cebb --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/socket_worker.py @@ -0,0 +1,268 @@ +#!/usr/bin/env python3 + +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. 
an Intel Company +############################################################################### + +import argparse +import os +import torch +import time +import random +import threading +import queue +from contextlib import contextmanager, nullcontext +from torch.utils.tensorboard import SummaryWriter + +import habana_generation_utils as hgu +import modeling_gptj as hpu_modeling_gptj +import quantization.quantize as quantize +from hgu_options import get_options_dict + +import socket_utils +from dataset import Dataset + + +MIN_NEW_TOKENS = 30 +MAX_NEW_TOKENS = 128 + + +def fatal(e): + import traceback + traceback.print_exc() + print("EXCEPTION:", e, flush=True) + os._exit(1) + + +def get_fake_delay(dtype: str) -> dict: + class FakeDelayDict(dict): + def __getitem__(self, length: int) -> int: + key = min([key for key in self.keys() if key >= length - MAX_NEW_TOKENS - 1]) + return dict.__getitem__(self, key) + + # dict { + # input_length: average processing time on real device [us] + # } + if dtype == 'float8': + return FakeDelayDict({ + 1919: 207946, + 1663: 177573, + 1407: 162134, + 1151: 141677, + 1023: 144127, + 895: 105898, + 767: 94835, + 639: 79685, + 511: 63538 + }) + else: + return FakeDelayDict({ + 1919: 418798, + 1663: 367299, + 1407: 337564, + 1151: 292790, + 1023: 289867, + 895: 234328, + 767: 211056, + 639: 156582, + 511: 143436 + }) + + +def get_args(): + parser = argparse.ArgumentParser() + parser.add_argument("--socket", type=str, required=True, help="Unix socket to connect to") + parser.add_argument("--quantization_file", "-qf", type=str, + help="Read quantization configuration from a file") + parser.add_argument("--model-path", required=True, help="Path to model checkpoint") + parser.add_argument("--dtype", choices=["bfloat16", "float32", "float8"], required=True, + help="data type of the model, choose from bfloat16, float32 and float8") + parser.add_argument("--dataset-path", required=True, help="") + parser.add_argument("--max_examples", type=int, required=True, help="Maximum number of examples to consider (not limited by default)") + parser.add_argument("--options", type=str, required=True, + help="Coma-seperated list of options used in generation") + parser.add_argument("--fake_device", action='store_true', help="Enable dummy device with estimated delay") + parser.add_argument("--fake_dataset", action='store_true', help="Enable dummy dataset") + parser.add_argument('--eager', action='store_true') + parser.add_argument('--enable-tensorboard-logging', action='store_true') + args = parser.parse_args() + return args + + +def handle(sock, prepare_input_func, pipeline_func, finalize_beams_func, options): + pipeline_queue = queue.Queue() + thread = threading.Thread(target=run_pipeline, args=(pipeline_queue, pipeline_func, finalize_beams_func)) + thread.start() + + while True: + try: + data = socket_utils.receive(sock) + if data is None: + break + pipeline_input = prepare_input_func(data, options) + pipeline_queue.put(pipeline_input) + except Exception as e: + fatal(e) + + pipeline_queue.put(None) + thread.join() + + +def prepare_input(data, options): + batch, new_options, batch_size = data + options.update(new_options) + + req_ids = [b[0][0] for b in batch] + sample_ids = [b[0][1] for b in batch] + while len(sample_ids) < batch_size: + sample_ids.append(sample_ids[0]) + + def getter(src): + def get(idx): + if idx != -1: + return src[idx] + else: + return torch.ones((1, 1), dtype=src[0].dtype) + return get + + src_input_ids = getter(dataset.source_encoded_input_ids) + 
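+    # Note (clarifying comment): getter() falls back to a dummy 1x1 tensor when the sample
+    # index is -1, so placeholder entries can be batched together with real samples.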
src_attn_masks = getter(dataset.source_encoded_attn_masks) + input_ids = [src_input_ids(id) for id in sample_ids] + attention_mask = [src_attn_masks(id) for id in sample_ids] + batch, max_input_length = align_batch(input_ids, attention_mask, dataset.tokenizer.pad_token_id, options.max_input_length) + + options.set('max_input_length', max_input_length + MAX_NEW_TOKENS + 1) + options.set('max_length', max_input_length + MAX_NEW_TOKENS + 1) + options.set('min_length', max_input_length + MIN_NEW_TOKENS) + + batch, max_length, input_length = hgu.prepare_decoder_only_input_without_moving(dataset.tokenizer.pad_token_id, options, batch) + return (batch, options, max_length, input_length, req_ids) + +@contextmanager +def tensorboard_logger(): + global tb_counter, local_rank + t_start = time.time() + yield + t_end = time.time() + tb_writer.add_scalar(f'worker number {local_rank}, batch_time [seconds]', t_end - t_start, tb_counter) + tb_counter += 1 + +def run_pipeline(pipeline_queue, pipeline_func, finalize_beams_func): + try: + with torch.inference_mode(): + thread = None + while True: + items = pipeline_queue.get() + if items is None: + break + + batch, options, max_length, input_length, req_ids = items + with tensorboard_logger() if tb_writer else nullcontext(): + initial_ids, beam_trace = pipeline_func(batch, options, max_length, input_length) + thread = threading.Thread(target=finalize_beams_func, args=(initial_ids, beam_trace, max_length, req_ids)) + thread.start() + thread.join() + except Exception as e: + fatal(e) + + +def finalize_beams(initial_ids, beam_trace, max_input_length, req_ids): + try: + output = hgu.finalize_beams(initial_ids, beam_trace, model.config, options.length_penalty) + + response = [] + for req_id, output in zip(req_ids, output): + response.append((req_id, output[max_input_length:].numpy().tobytes())) + socket_utils.send(sock, response) + except Exception as e: + fatal(e) + +def left_pad(tensor, max_len, value): + return torch.nn.functional.pad(tensor, (max_len - tensor.size(-1), 0), value=value) + + +def align_batch(input_ids, attention_mask, pad_token_id, max_length=None): + input_lengths = [t.size(-1) for t in input_ids] + if max_length is None: + max_length = max(input_lengths) + input_ids = [left_pad(t, max_length, pad_token_id) for t in input_ids] + attention_mask = [left_pad(t, max_length, 0) for t in attention_mask] + return {"input_ids": torch.cat(input_ids), "attention_mask": torch.cat(attention_mask)}, max_length + + +if __name__ == "__main__": + args = get_args() + + tb_writer, tb_counter = (SummaryWriter(), 0) if args.enable_tensorboard_logging else (None, None) + + dataset = Dataset(args.model_path, args.dataset_path, total_count_override=args.max_examples, add_padding=False, fake_data=args.fake_dataset) + options = get_options_dict(args.options) + options = hgu.GenerationOptions(**options) + hgu_pipeline = None + device = torch.device("cpu") + + if not args.fake_device: + if int(os.environ.get('OMPI_COMM_WORLD_SIZE', 1)) > 1: + local_rank = os.environ.get('OMPI_COMM_WORLD_LOCAL_RANK', "0") + os.environ["HLS_MODULE_ID"] = local_rank + + import habana_frameworks.torch.core as htcore + device = torch.device('hpu') + + print("Loading PyTorch model...") + model_path = args.model_path + + model = hpu_modeling_gptj.GPTJForCausalLM.from_pretrained( + model_path, + low_cpu_mem_usage=True, + torch_dtype=torch.bfloat16 + ) + + if model.config.pad_token_id is None: + model.config.pad_token_id = model.config.eos_token_id + + model.to(torch.bfloat16) + model.to(device) 
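+        # Unless eager mode is requested, the model is wrapped in HPU graphs below to reduce
+        # per-step host overhead (graph replay instead of op-by-op launches).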
+ + if not args.eager: + import habana_frameworks.torch.hpu.graphs as htgraphs + model = htgraphs.wrap_in_hpu_graph(model) + + if args.quantization_file: + model = quantize.setup_quantization(model, args.quantization_file) + + def pipeline(batch, options, max_length, input_length): + return hgu.generate_on_prepared_input(model, options, batch, max_length, input_length) + + prepare_input_func = prepare_input + pipeline_func = pipeline + finalize_beams_func = finalize_beams + else: + fake_delay_dict = get_fake_delay(args.dtype) + + def fake_pipeline(batch, *args): + batch_size, length = batch['input_ids'].shape + fake_delay = fake_delay_dict[length] * random.uniform(0.9, 1.1) + total_fake_delay = batch_size * fake_delay / 1e6 + time.sleep(total_fake_delay / 10) + return batch['input_ids'], None + + def fake_finalize_beams(initial_ids, _, max_input_length, req_ids): + try: + output = initial_ids.repeat(1, 2) + response = [] + for req_id, output in zip(req_ids, output): + response.append((req_id, output[max_input_length:].numpy().tobytes())) + socket_utils.send(sock, response) + except Exception as e: + fatal(e) + + prepare_input_func = prepare_input + pipeline_func = fake_pipeline + finalize_beams_func = fake_finalize_beams + + if args.dtype == "float8": + options.kv_cache_fp8 = True + + sock = socket_utils.connect(args.socket) + handle(sock, prepare_input_func, pipeline_func, finalize_beams_func, options) diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/user.conf b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/user.conf new file mode 100644 index 0000000000000000000000000000000000000000..dbf2a824a3d66a3c5da455f45264f5b1361b25c9 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/gpt-j/user.conf @@ -0,0 +1,11 @@ +# The format of this config file is 'key = value'. +# The key has the format 'model.scenario.key'. Value is mostly int64_t. +# Model maybe '*' as wildcard. In that case the value applies to all models. +# All times are in milli seconds + +# TODO: We need to fine-tune this value so that we get the maximum possible +# server utilization, while still reaching the QOS criteria +*.Server.target_qps = 11 + +*.Server.min_query_count = 24576 +*.Server.target_latency = 20000 diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/prepare_and_check_submission.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/prepare_and_check_submission.py new file mode 100644 index 0000000000000000000000000000000000000000..26c6d41026238225b5069ccd756aa7f604bb82f2 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/prepare_and_check_submission.py @@ -0,0 +1,189 @@ +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. 
an Intel Company +############################################################################### +import argparse +import yaml +import typing +import subprocess +import logging +import sys +import json +import shutil +import re +import os +from pathlib import Path +import time + +scenarios_config = yaml.full_load(open("scenarios.yaml")) +logging.basicConfig(level=logging.INFO) + +modes = ["Server", "Offline"] +system_desc_id = "HLS-Gaudi2-PT" +implementation_id = "PyTorch" + + +def get_configuration(scenarios) -> typing.Tuple[str, str]: + runs = [] + for scenario in scenarios: + if scenario in scenarios_config["scenarios"]: + for mode in modes: + runs.append((scenario, mode)) + else: + try: + scenario, mode = scenario.split("_") + assert mode in modes + runs.append((scenario, mode)) + except: + logging.error( + f"Scenario {scenario} not supported, see scenarios.yaml to view supported scenarios" + ) + exit() + return runs + + +def get_args(): + """Parse commandline.""" + parser = argparse.ArgumentParser() + parser.add_argument( + "scenarios", + nargs="+", + help="List of scenarios e.g. gpt-j_Server or gpt-j separated by space, to run all possible scenarios set first element to 'all'", + ) + parser.add_argument( + "--output-dir", + required=False, + default="./results", + help="Path to save results folder in", + ) + parser.add_argument( + "--mlperf-path", required=True, help="Path to mlperf inference directory" + ) + parser.add_argument("--systems-dir-path", required=True) + parser.add_argument("--measurements-dir-path", required=True) + args = parser.parse_args() + return args + + +def main(): + args = get_args() + + configuration = get_configuration(args.scenarios) + + output_dir = Path(args.output_dir).absolute() + logs_dir = output_dir / "logs" + # for reference https://github.com/mlcommons/policies/blob/master/submission_rules.adoc#563-inference + submission_dir = output_dir / "submission" + submission_dir.mkdir(exist_ok=True) + + division_dir = submission_dir / "closed" + division_dir.mkdir(exist_ok=True) + company_dir = division_dir / "Intel-HabanaLabs" + company_dir.mkdir(exist_ok=True) + + code_dir = company_dir / "code" + code_dir.mkdir(exist_ok=True) + + results_dir = company_dir / "results" + results_dir.mkdir(exist_ok=True) + + systems_dir = company_dir / "systems" + systems_dir.mkdir(exist_ok=True) + + measurements_dir = company_dir / "measurements" + measurements_dir.mkdir(exist_ok=True) + + mlperf_path = Path(args.mlperf_path) + # for each run + for scenario, mode in configuration: + benchmark = scenarios_config["scenarios"][scenario]["benchmark"] + + # systems dir + shutil.copyfile( + f"{args.systems_dir_path}/{system_desc_id}.json", + systems_dir / f"{system_desc_id}.json", + ) + + # code dir + current_dir = os.getcwd() + shutil.copytree( + current_dir, + code_dir / benchmark, + ignore=shutil.ignore_patterns( + ".graph_dumps", "__pycache__", ".gitignore", "internal", output_dir, "results" + ), + dirs_exist_ok=True, + ) + # move general README.md out of benchmark to code directory + shutil.move( + code_dir / benchmark / "README.md", + code_dir / "README.md" + ) + + # measurements dir + measurements_dir_path = Path(args.measurements_dir_path) + Path(measurements_dir / system_desc_id / benchmark / mode).mkdir( + exist_ok=True, parents=True + ) + shutil.copytree( + measurements_dir_path / benchmark, + measurements_dir / system_desc_id / benchmark, + dirs_exist_ok=True, + ) + code_dir_path = Path(scenarios_config["scenarios"][scenario]["code_dir"]) + shutil.copyfile( + 
code_dir_path / "mlperf.conf", + measurements_dir / system_desc_id / benchmark / mode / "mlperf.conf", + ) + shutil.copyfile( + measurements_dir_path / "calibration_process.md", + measurements_dir / system_desc_id / benchmark / mode / "calibration_process.md", + ) + if benchmark == "gptj-99.9": + config_file = "fp8-99.9.conf" + else: + config_file = "fp8-99.conf" + + shutil.copyfile( + code_dir_path / "configs" / config_file, + measurements_dir / system_desc_id / benchmark / mode / "user.conf", + ) + + # results dir + shutil.copytree( + logs_dir / scenario / mode / "accuracy", + results_dir / system_desc_id / benchmark / mode / "accuracy", + ignore=shutil.ignore_patterns("mlperf_log_trace.json"), + ) + shutil.copytree( + logs_dir / scenario / mode / "performance", + results_dir / system_desc_id / benchmark / mode / "performance", + ignore=shutil.ignore_patterns( + "mlperf_log_trace.json", "mlperf_log_accuracy.json" + ), + ) + + #truncate accuracy logs + accuracy_logs_backup = output_dir / "backup" + command = f"python {mlperf_path / 'tools/submission/truncate_accuracy_log.py'} --input {submission_dir} --submitter Intel-HabanaLabs --backup {accuracy_logs_backup}" + try: + subprocess.run(command, check=True, shell=True) + except subprocess.CalledProcessError as e: + sys.exit("Failed truncating logs") + + # submission checker + command = f"python {mlperf_path / 'tools/submission/submission_checker.py'} --input {submission_dir} --csv {output_dir / 'summary.csv'}" + try: + subprocess.run(command, check=True, shell=True) + except subprocess.CalledProcessError as e: + sys.exit("Submission checker failed") + + # zip submission folder + command = f"tar -cvzf {output_dir}/submission.gz -C {os.path.dirname(submission_dir)} {os.path.basename(submission_dir)}" + try: + subprocess.run(command, check=True, shell=True) + except subprocess.CalledProcessError as e: + sys.exit("Failed packaging submission folder") + + +if __name__ == "__main__": + main() diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/run_mlperf_scenarios.py b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/run_mlperf_scenarios.py new file mode 100644 index 0000000000000000000000000000000000000000..d5e568c0315ebb97dac385ec34132ed6f636ca4a --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/run_mlperf_scenarios.py @@ -0,0 +1,241 @@ +############################################################################### +# Copyright (C) 2023 Habana Labs, Ltd. 
an Intel Company +############################################################################### +import argparse +import yaml +import typing +import subprocess +import logging +import sys +import json +import shutil +import re +import os +from pathlib import Path +import time + +scenarios_config = yaml.full_load(open("scenarios.yaml")) +logging.basicConfig(level=logging.INFO) +modes = ["Server", "Offline"] +units_map = {"Server": "Queries/s", "Offline": "Samples/s"} + + +def get_configuration(scenarios) -> typing.Tuple[str, str]: + runs = [] + for scenario in scenarios: + if scenario in scenarios_config["scenarios"]: + for mode in modes: + runs.append((scenario, mode)) + else: + try: + scenario, mode = scenario.split("_") + assert mode in modes + runs.append((scenario, mode)) + except: + logging.error( + f"Scenario {scenario} not supported, see scenarios.yaml to view supported scenarios" + ) + exit() + return runs + + +def get_args(): + """Parse commandline.""" + parser = argparse.ArgumentParser() + parser.add_argument( + "scenarios", + nargs="+", + help="List of scenarios e.g. gpt-j_Server or gpt-j separated by space, to run all possible scenarios set first element to 'all'", + ) + parser.add_argument( + "--output-dir", + required=False, + default="./results", + help="Path to save results folder in", + ) + parser.add_argument( + "--mlperf-path", help="Path to mlperf inference directory" + ) + parser.add_argument("--mode", type=str, choices=["full", "perf", "acc"], default="full", help="dev options to shorten test time") + parser.add_argument("--eager", action="store_true", help="Eager mode enabled") + args = parser.parse_args() + return args + + +def run_inference(base_dir, command, mode, accuracy, scenario): + args = get_args() + command += f" --scenario {mode}" + if accuracy: + command += " --accuracy" + if args.eager: + command += " --eager" + logging.info(command) + try: + subprocess.run(command, check=True, shell=True, cwd=base_dir) + except subprocess.CalledProcessError as e: + sys.exit(f"Failed running {scenario}_{mode}") + + +def evaluate(base_dir): + start_time = time.time() + # Assuming script naming convention is consistent between models + command = "python evaluation.py | tee -a ./build/logs/accuracy.txt" + logging.info(command) + try: + subprocess.run(command, check=True, shell=True, cwd=base_dir) + except subprocess.CalledProcessError as e: + sys.exit(f"Failed evaluating {base_dir}") + return time.time() - start_time + + +def verify_thresholds(benchmark, results: typing.Dict[str, typing.Any]): + error = "" + valid = True + thresholds = scenarios_config["benchmarks"][benchmark] + for metric, threshold in thresholds.items(): + if results[metric] < threshold: + error += f"{metric} " + valid = False + results["valid"] = valid + results["error"] = error + return results + + +def get_results(accuracy_path, benchmark): + text = open(accuracy_path / "accuracy.txt").readlines() + results = None + for line in text: + object_results = re.match("(\{.*?\})", line) + if object_results is not None: + results = yaml.full_load(object_results.group(1)) + if results is None: + return sys.exit(f"No metrics found for {benchmark}") + results = verify_thresholds(benchmark, results) + return results + + +def get_performance(performance_path, mode): + perf = {} + text = open(performance_path / "mlperf_log_summary.txt").read() + perf_pattern = ( + "Samples per second: (.+?)\n" + if mode == "Offline" + else "Scheduled samples per second : (.+?)\n" + ) + validity_pattern = "Result is : (.+?)\n" + 
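+    # Pull throughput and result validity (VALID/INVALID) out of the LoadGen summary text.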
perf['samples_per_seconds'] = re.search(perf_pattern, text).group(1) + perf['validity'] = re.search(validity_pattern, text).group(1) + + return perf + +def verify_performance(perf_validity, results: typing.Dict[str, typing.Any]): + if perf_validity == "INVALID": + results["valid"] = False + results["error"] = "invalid" + return results + +def write_summary(output_dir, summary): + summary_json_path = f"{output_dir}/summary.json" + all_summaries = [] + if os.path.exists(summary_json_path): + with open(summary_json_path) as summary_file: + try: + all_summaries = json.load(summary_file) + except json.JSONDecodeError: + all_summaries = [] + all_summaries.append(summary) + logging.info(f"Writing summary to {summary_json_path}") + with open(summary_json_path, mode="w") as summary_file: + json.dump(all_summaries, summary_file) + + +def main(): + args = get_args() + configuration = get_configuration(args.scenarios) + output_dir = Path(args.output_dir).absolute() + logging.info(f"Saving results to {output_dir}") + output_dir.mkdir(exist_ok=True) + for scenario, mode in configuration: + logging.info(f"Running {scenario} {mode}") + base_dir = Path(scenarios_config["scenarios"][scenario]["code_dir"]) + benchmark = scenarios_config["scenarios"][scenario]["benchmark"] + command = scenarios_config["scenarios"][scenario]["command"] + + # logs are saved in the code/ dir + logs_path = base_dir / "build" / "logs" + + # start timer + total_time = 0 + start = time.time() + if args.mode == "perf": + # copy audit.config to get accuracy logs from performance mode + # this is equivalent to running compliance TEST01 + shutil.copyfile("accuracy_from_perf.config", base_dir / "audit.config") + + accuracy_path = output_dir / "logs" / scenario / mode / "compliance" / "TEST01" + # logs from performance are the same as accuracy in this mode + performance_path = accuracy_path + + run_inference(base_dir, command, mode, False, scenario) + evaluation_time = evaluate(base_dir) + # move logs + shutil.move(logs_path, accuracy_path) + # remove audit + os.remove(base_dir / "audit.config") + else: + # run accuracy + logging.info("Running accuracy") + run_inference(base_dir, command, mode, True, scenario) + evaluation_time = evaluate(base_dir) + accuracy_path = output_dir / "logs" / scenario / mode / "accuracy" + shutil.move(logs_path, accuracy_path) + if args.mode != "acc": + logging.info("Running performance") + run_inference(base_dir, command, mode, False, scenario) + performance_path = ( + output_dir / "logs" / scenario / mode / "performance" / "run_1" + ) + shutil.move(logs_path, performance_path) + + # get summary + precision = scenarios_config["scenarios"][scenario]["precision"] + batch_size = scenarios_config["scenarios"][scenario]["batch_size"] + total_time = time.time() - start + results = get_results(accuracy_path, benchmark) + units = units_map[mode] + if args.mode != "acc": + perf = get_performance(performance_path, mode) + performance = perf['samples_per_seconds'] + results = verify_performance(perf['validity'], results) + else: + performance = None + if "gptj" in scenario: + thresholds = scenarios_config["benchmarks"]["gptj"] + results["accuracy"] = ( + min( + results["rouge1"] / thresholds["rouge1"], + results["rouge2"] / thresholds["rouge2"], + results["rougeL"] / thresholds["rougeL"], + ) + * 100 + ) + summary = { + "model": benchmark, + "scenario": scenario, + "units": units, + "performance": performance, + "batch_size": batch_size, + "precision": precision, + "iterations": results["gen_num"], + "dataset": 
scenarios_config["scenarios"][scenario]["dataset"], + "total_time": total_time, + "eval_time": evaluation_time, + "warmup_time": 0, + **results, + } + write_summary(output_dir, summary) + shutil.rmtree(base_dir / "build") + + +if __name__ == "__main__": + main() diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/scenarios.yaml b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/scenarios.yaml new file mode 100644 index 0000000000000000000000000000000000000000..24116fac768c4bb284603fdc7001b7d42d50806f --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/code/scenarios.yaml @@ -0,0 +1,38 @@ +benchmarks: + gptj-99: + "rouge1": 42.556635 + "rouge2": 19.922265 + "rougeL": 29.688219 + "gen_len": 3615191 + gptj-99.9: + "rouge1": 42.9435135 + "rouge2": 20.1033765 + "rougeL": 29.9581119 + "gen_len": 3615191 + gptj: + "rouge1": 42.9865 + "rouge2": 20.1235 + "rougeL": 29.9881 + "gen_len": 3615191 +scenarios: + gptj-99.9-bf16: + dataset: cnn_dailymail + code_dir: gpt-j + benchmark: gptj-99.9 + command: python main.py --device socket --num_workers 8 --user_conf configs/bf16.conf + precision: bf16 + batch_size: 12 + gptj-99-fp8: + dataset: cnn_dailymail + code_dir: gpt-j + benchmark: gptj-99 + command: PT_USE_FP8_143=1 UPDATE_MME_OUTPUT_PRECISION_FILTER="v_proj,matmul_av" ENABLE_EXPERIMENTAL_FLAGS=true python main.py -qf quantization/configuration/examples/quant_on.json --device socket --num_workers 8 --user_conf configs/fp8-99.conf --dtype float8 + precision: fp8 + batch_size: 32 + gptj-99.9-fp8: + dataset: cnn_dailymail + code_dir: gpt-j + benchmark: gptj-99.9 + command: PT_USE_FP8_143=1 UPDATE_MME_OUTPUT_PRECISION_FILTER="v_proj,matmul_av" ENABLE_EXPERIMENTAL_FLAGS=true python main.py -qf quantization/configuration/examples/quant_on.json --device socket --num_workers 8 --user_conf configs/fp8-99.conf --dtype float8 + precision: fp8 + batch_size: 32 diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Offline/HLS-Gaudi2-PT_PyTorch_Offline.json b/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Offline/HLS-Gaudi2-PT_PyTorch_Offline.json new file mode 100644 index 0000000000000000000000000000000000000000..86f4556cdc8c12a9cfdea79d194dd3cc3cd025ff --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Offline/HLS-Gaudi2-PT_PyTorch_Offline.json @@ -0,0 +1,7 @@ +{ + "input_data_types": "int32", + "retraining": "No", + "starting_weights_filename": "https://cloud.mlcommons.org/index.php/s/QAZ2oM94MkFtbQx", + "weight_data_types": "fp8-E4M3", + "weight_transformations": "quantization" +} diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Server/HLS-Gaudi2-PT_PyTorch_Server.json b/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Server/HLS-Gaudi2-PT_PyTorch_Server.json new file mode 100644 index 0000000000000000000000000000000000000000..86f4556cdc8c12a9cfdea79d194dd3cc3cd025ff --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Server/HLS-Gaudi2-PT_PyTorch_Server.json @@ -0,0 +1,7 @@ +{ + "input_data_types": "int32", + "retraining": "No", + "starting_weights_filename": "https://cloud.mlcommons.org/index.php/s/QAZ2oM94MkFtbQx", + "weight_data_types": "fp8-E4M3", + "weight_transformations": "quantization" +} diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Server/README.md 
b/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Server/README.md new file mode 100644 index 0000000000000000000000000000000000000000..03b79fd2a1f7d4c97802c1f814c6ce9d63770044 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/measurements/gptj-99.9/Server/README.md @@ -0,0 +1,25 @@ +# Steps to run gptj-99.9 Server + +### Environment setup +To setup the environment follow the steps described in [/closed/Intel-HabanaLabs/code/README.md](../../../code/README.md) + +### Commands +Run the following commands from [/closed/Intel-HabanaLabs/code/](../../../code/) directory. + +#### Run accuracy +```bash +source gptj-99.9/functions.sh +build_mlperf_inference --output-dir --submission gptj-99.9-fp8_Server --mode acc +``` + +#### Run performance +```bash +source gptj-99.9/functions.sh +build_mlperf_inference --output-dir --submission gptj-99.9-fp8_Server --mode perf +``` + +### Results + +You can find the logs under /output_dir/logs/gptj-99.9-fp8/Server + +For more details go to [/closed/Intel-HabanaLabs/code/README.md](../../../code/README.md) \ No newline at end of file diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Inference/systems/HLS-Gaudi2-PT.json b/docker/bloom13b/Model-References/MLPERF3.1/Inference/systems/HLS-Gaudi2-PT.json new file mode 100644 index 0000000000000000000000000000000000000000..9b36d51c4ec6e3caef782d4dbfb7b3934d7e668d --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Inference/systems/HLS-Gaudi2-PT.json @@ -0,0 +1,38 @@ +{ + "submitter": "Intel-HabanaLabs", + "division": "closed", + "status": "available", + "system_name": "HLS-Gaudi2-PT", + "system_type": "datacenter", + "number_of_nodes": "1", + "host_processors_per_node": "2", + "host_processor_model_name": "Intel(R) Xeon(R) Platinum 8380", + "host_processor_core_count": "40", + "host_processor_vcpu_count": "80", + "host_processor_frequency": "2.3 GHz", + "host_processor_caches": "L1d cache: 3.8 MiB, L1i cache: 2.5 MiB, L2 cache: 100 MiB, L3 cache: 120 MiB", + "host_processor_interconnect": "UPI", + "host_memory_capacity": "1024 GB", + "host_memory_configuration": "DDR4-3200", + "host_storage_type": "Weka", + "host_storage_capacity": "1 PB", + "host_networking": "2x Mellanox ConnectX-5 Ex 100Gb/s Ethernet", + "host_networking_topology": "L3 Fat Tree", + "accelerators_per_node": "8", + "accelerator_model_name": "Intel® Gaudi® 2 AI Accelerator", + "accelerator_host_interconnect": "4x PCIe 4.0 x16", + "accelerator_frequency": "1800MHz", + "accelerator_on-chip_memories": "6", + "accelerator_memory_configuration": "HBM2E", + "accelerator_memory_capacity": "96 GB", + "accelerator_interconnect": "24x 100Gb/s Ethernet", + "accelerator_interconnect_topology": "10x L3 Fat Tree", + "cooling": "Air-cooled", + "hw_notes": "", + "framework": "PyTorch 2.0.1a0", + "other_software_stack": "synapseAI 1.12.98", + "operating_system": "Ubuntu 20.04", + "sw_notes": "", + "system_type_detail": "", + "host_networking_card_count": "N/A" + } \ No newline at end of file diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/defaults.cfg b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/defaults.cfg new file mode 100644 index 0000000000000000000000000000000000000000..6c6f552fdffcbd89f7b5443d075d83519196aecc --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/defaults.cfg @@ -0,0 +1,40 @@ +#!/bin/bash 
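+# Default hyperparameters and dataset/checkpoint paths for the TensorFlow BERT
+# phase-2 pre-training benchmark on HLS-Gaudi2. launch_bert_hvd.sh sources this
+# file (via its --config flag, defaulting to ./defaults.cfg) and then overrides
+# individual values with any command-line options passed to it.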
+DATESTAMP=`date +'%y%m%d%H%M%S'` +export INPUT_FILES_DIR_UNPACKED=/root/datasets/tensorflow_bert/unpacked_data +export INPUT_FILES_DIR_PACKED=/root/datasets/tensorflow_bert/packed_data_500 +export EVAL_FILES_DIR=/root/datasets/tensorflow_bert/eval_dataset +export INITIAL_CHECKPOINT=/root/datasets/tensorflow_bert/checkpoint/model.ckpt-28252 +export BERT_CONFIG_DIR=/root/datasets/tensorflow_bert/checkpoint +export OUTPUT_DIR=/tmp/bert_pretrain/phase_2 +export LOG_DIR=/tmp/bert_pretrain/phase_2 +export TRAIN_BATCH_SIZE=28 +export EVAL_BATCH_SIZE=125 +export MAX_EVAL_STEPS=10 +export NUM_DIST_EVAL_WORKERS=8 +export TRAIN_STEPS=6700 +export WARMUP_STEPS=0 +export LEARNING_RATE=0.000425 +export LAMB_BETA_1=0.9 +export LAMB_BETA_2=0.999 +export EPSILON=1e-06 +export LAMB_WEIGHT_DECAY_RATE=0.01 +export LAMB_LEARNING_RATE_DECAY_POLY_POWER=1.0 +export NUM_ACCUMULATION_STEPS=2 +export SAMPLES_START_EVAL=0 +export SAVE_CHECKPOINTS_STEPS=335 +export PACKED_DATA=True +export USE_HOROVOD=True +export HLS_TYPE="HLS2" +export NUM_WORKERS_TOTAL=8 +export TF_CPU_RUNTIME_FALLBACK=forbid +export TF_HCCL_MEMORY_ALLOWANCE_MB=1536 +export HABANA_INITIAL_WORKSPACE_SIZE_MB=4600 +export CPU_BIND_TYPE=cpu +export USE_LIGHTWEIGHT_CHECKPOINT=True +export DO_TRAIN=True +export DO_EVAL=True +export USE_ASYNC_CHECKPOINTING=True +export EXPERIMENTAL_SLACK=True +export SIGNALING_FROM_GRAPH=0 + +unset MPI_TCP_INCLUDE diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/launch_bert_hvd.sh b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/launch_bert_hvd.sh new file mode 100644 index 0000000000000000000000000000000000000000..26039a5b65fbb511954f4c7481ecc56832142e38 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/launch_bert_hvd.sh @@ -0,0 +1,611 @@ +#!/bin/bash + +DEBUG=${DEBUG:-0} +if [[ $DEBUG -eq 1 ]]; then + set -x + env +fi + +# Basic paths +SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) +export BASE_PATH="$( cd "$(dirname "$(readlink -f ${SCRIPT_DIR}/defaults.cfg)" )" && pwd)" +exit_code=0 + +OMPI_PREFIX=$(which mpirun) +export OMPI_PREFIX=$(dirname $(dirname ${OMPI_PREFIX}) ) + +function help() +{ + echo "Usage:" + echo "$0 [ -key1 value1 -key2 value2 .... 
-keyn valuen ]" + echo "-c | --config Configuration file path (defaults to ./defaults.cfg)" + echo "-hf | --hostfile Host file path, 'localhost' is used if no file is provided" + echo "-u | --use_horovod Enable (0) or disable (1) horovod use" + echo "-ws | --warmup_steps" + echo "-lr | --learning_rate" + echo "-st | --stop_threshold" + echo "-acs | --num_accumul_steps" + echo "-tbs | --train_batchsize" + echo "-ebs | --eval_batchsize" + echo "-ts | --train_steps" + echo "-lb1 | --lamb_beta_1" + echo "-lb2 | --lamb_beta_2" + echo "-ep | --epsilon" + echo "-lwd | --lamb_weight_decay_rate" + echo "-ldp | --lamb_lr_decay_poly_power" + echo "-sbe | --samples_btw_eval" + echo "-sse | --samples_start_eval" + echo "-mes | --max_eval_steps" + echo "-w | --num_workers_total" + echo "-p | --packed_data Packed (0) or unpacked (1)" + echo "-sch | --save_checkpoints_steps" + echo "-cpu | --cpu_bind_type [ none | cpu | numa ]" + echo "-inputf | --input_files_dir" + echo "-evalf | --eval_files_dir" + echo "-od | --output_dir" + echo "-ckpt | --initial_checkpoint" + echo "-config | --config_dir" + echo "-hls | --hls_type" + echo "-tcp | --mpi_tcp_include" + echo "-dram | --use_dram_output" + echo "-lw | --light_weight" + echo "-lwi | --light_weight_impl [ basic (default) | sharded ]" + echo "-ac | --async_checkpointing" + echo "-ld | --log_dir" + echo "--do_train" + echo "--do_eval" + echo "--experimental_slack" + echo "-ndew | --num_dist_eval_workers Number of workers participating in distributed evaluation" + echo "-opt | --optimizer Type of optimizer, available options: 'lamb', 'sharded_lamb', 'adam'" + echo "-sfg | --signaling_from_graph Enable (1) or disable (0) SFG optimization." +} +#echo "-sws | --start_warmup_steps" + +# Parse command line options +unset __config +unset __hostfile +unset __use_horovod +unset __warmup_steps +unset __learning_rate +unset __stop_threshold +unset __num_accumul_steps +unset __train_batchsize +unset __eval_batchsize +unset __train_steps +#unset __start_warmup_steps +unset __lamb_beta_1 +unset __lamb_beta_2 +unset __epsilon +unset __lamb_weight_decay_rate +unset __lamb_lr_decay_poly_power +unset __samples_btw_eval +unset __samples_start_eval +unset __max_eval_steps +unset __num_workers_total +unset __packed_data +unset __save_checkpoints_steps +unset __cpu_bind_type +unset __input_files_dir +unset __eval_files_dir +unset __output_dir +unset __initial_checkpoint +unset __config_dir +unset __hls_type +unset __mpi_tcp_include +unset __use_dram_output +unset __light_weight +unset __light_weight_impl +unset __async_checkpointing +unset __log_dir +unset __do_train +unset __do_eval +unset __experimental_slack +unset __num_dist_eval_workers +unset __optimizer +unset __aux_scirpt_params +unset __ssh_port +unset __signaling_from_graph + +while [ -n "$1" ]; do + case $1 in + -c | --config ) + shift + __config=$1 + ;; + -hf | --hostfile) + shift + __hostfile=$1 + ;; + -u | --use_horovod ) + shift + __use_horovod=$1 + ;; + -ws | --warmup_steps ) + shift + __warmup_steps=$1 + ;; + -lr | --learning_rate ) + shift + __learning_rate=$1 + ;; + -st | --stop_threshold ) + shift + __stop_threshold=$1 + ;; + -acs | --num_accumul_steps ) + shift + __num_accumul_steps=$1 + ;; + -tbs | --train_batchsize ) + shift + __train_batchsize=$1 + ;; + -ebs | --eval_batchsize) + shift + __eval_batchsize=$1 + ;; + -ts | --train_steps) + shift + __train_steps=$1 + ;; + -lb1 | --lamb_beta_1) + shift + __lamb_beta_1=$1 + ;; + -lb2 | --lamb_beta_2) + shift + __lamb_beta_2=$1 + ;; + -ep | --epsilon) + shift + 
__epsilon=$1 + ;; + -lwd | --lamb_weight_decay_rate) + shift + __lamb_weight_decay_rate=$1 + ;; + -ldp | --lamb_lr_decay_poly_power) + shift + __lamb_lr_decay_poly_power=$1 + ;; + -sbe | --samples_btw_eval) + shift + __samples_btw_eval=$1 + ;; + -sse | --samples_start_eval) + shift + __samples_start_eval=$1 + ;; + -mes | --max_eval_steps) + shift + __max_eval_steps=$1 + ;; + -w | --num_workers_total) + shift + __num_workers_total=$1 + ;; + -p | --packed_data) + shift + __packed_data=$1 + ;; + -sch | --save_checkpoints_steps) + shift + __save_checkpoints_steps=$1 + ;; + -cpu | --cpu_bind_type) + shift + __cpu_bind_type=$1 + case ${__cpu_bind_type} in + numa | cpu | none ) + ;; + *) + echo "--cpu-pin must be one of the following numa | cpu | none " + exit 1 + esac + ;; + -inputf | --input_files_dir) + shift + __input_files_dir=$1 + ;; + -sfg | --signaling_from_graph) + shift + __signaling_from_graph=$1 + ;; + -evalf | --eval_files_dir) + shift + __eval_files_dir=$1 + ;; + -od | --output_dir) + shift + __output_dir=$1 + ;; + -ckpt | --initial_checkpoint) + shift + __initial_checkpoint=$1 + ;; + -config | --config_dir) + shift + __config_dir=$1 + ;; + -hls | --hls_type) + shift + __hls_type=$1 + ;; + -tcp | --mpi_tcp_include) + shift + __mpi_tcp_include=$1 + ;; + -dram | --use_dram_output) + shift + __use_dram_output=$1 + ;; + -lw | --light_weight) + shift + __light_weight=$1 + ;; + -lwi | --light_weight_impl) + shift + __light_weight_impl=$1 + ;; + -ac | --async_checkpointing) + shift + __async_checkpointing=$1 + ;; + -ld | --log_dir) + shift + __log_dir=$1 + ;; + --do_train) + shift + __do_train=$1 + ;; + --do_eval) + shift + __do_eval=$1 + ;; + --experimental_slack) + shift + __experimental_slack=$1 + ;; + -ndew | --num_dist_eval_workers) + shift + __num_dist_eval_workers=$1 + ;; + -opt | --optimizer) + shift + __optimizer=$1 + ;; + -port | --ssh_port) + shift + __ssh_port=$1 + ;; + -h | --help) + help + exit 1 + ;; + * ) + __aux_param=$1 + shift + echo "The parameter $1 will be passed directly to python script" + __aux_scirpt_params="${__aux_scirpt_params}:${__aux_param}=${1}" + ;; + esac + shift +done + +export CFG_FILE=${__config:-"${BASE_PATH}/defaults.cfg"} +if [[ -f ${CFG_FILE} ]]; then + source ${CFG_FILE} +else + echo "Could not find ${CFG_FILE}" + exit 1 +fi + +# Set default values for environmental variable +export HOST_FILE=${__hostfile:-"${OMPI_MCA_orte_default_hostfile}"} +export SSH_PORT=${__ssh_port:-"3022"} + +if [[ -z "${HABANA_LOGS}" ]]; then + export HABANA_LOGS="/var/logs/habana_logs" + echo "Creating default directory for habana_logs." 
+ mkdir -p $HABANA_LOGS +fi +export EVAL_FILES_DIR=${EVAL_FILES_DIR} +export OUTPUT_DIR=${OUTPUT_DIR} +export PHASE1_CKPT=${INITIAL_CHECKPOINT} +export INITIAL_CHECKPOINT=${INITIAL_CHECKPOINT} +export BERT_CONFIG_DIR=${BERT_CONFIG_DIR} +export NUM_WORKERS_PER_HLS=${NUM_WORKERS_PER_HLS} +export OPTIMIZE_DMA_ENGINES_ALLOCATION=${OPTIMIZE_DMA_ENGINES_ALLOCATION} +export TF_CPU_RUNTIME_FALLBACK=${TF_CPU_RUNTIME_FALLBACK} +export TF_HCCL_MEMORY_ALLOWANCE_MB=${TF_HCCL_MEMORY_ALLOWANCE_MB} +export HABANA_INITIAL_WORKSPACE_SIZE_MB=${HABANA_INITIAL_WORKSPACE_SIZE_MB} + +# Override defaults with command line options if needed +export MPI_TCP_INCLUDE=${__mpi_tcp_include:-$MPI_TCP_INCLUDE} +export USE_HOROVOD=${__use_horovod:-$USE_HOROVOD} +export WARMUP_STEPS=${__warmup_steps:-$WARMUP_STEPS} +export LEARNING_RATE=${__learning_rate:-$LEARNING_RATE} +export STOP_THRESHOLD=${__stop_threshold:-$STOP_THRESHOLD} +export NUM_ACCUMULATION_STEPS=${__num_accumul_steps:-$NUM_ACCUMULATION_STEPS} +export TRAIN_BATCH_SIZE=${__train_batchsize:-$TRAIN_BATCH_SIZE} +export EVAL_BATCH_SIZE=${__eval_batchsize:-$EVAL_BATCH_SIZE} +export TRAIN_STEPS=${__train_steps:-$TRAIN_STEPS} +export LAMB_BETA_1=${__lamb_beta_1:-$LAMB_BETA_1} +export LAMB_BETA_2=${__lamb_beta_2:-$LAMB_BETA_2} +export EPSILON=${__epsilon:-$EPSILON} +export LAMB_WEIGHT_DECAY_RATE=${__lamb_weight_decay_rate:-$LAMB_WEIGHT_DECAY_RATE} +export LAMB_LEARNING_RATE_DECAY_POLY_POWER=${__lamb_lr_decay_poly_power:-$LAMB_LEARNING_RATE_DECAY_POLY_POWER} +export SAMPLES_START_EVAL=${__samples_start_eval:-$SAMPLES_START_EVAL} +export MAX_EVAL_STEPS=${__max_eval_steps:-$MAX_EVAL_STEPS} +export NUM_WORKERS_TOTAL=${__num_workers_total:-$NUM_WORKERS_TOTAL} +export PACKED_DATA=${__packed_data:-$PACKED_DATA} +export SAVE_CHECKPOINTS_STEPS=${__save_checkpoints_steps:-$SAVE_CHECKPOINTS_STEPS} +SAMPLES_BETWEEN_EVAL=$(($TRAIN_BATCH_SIZE*$NUM_WORKERS_TOTAL*$NUM_ACCUMULATION_STEPS*$SAVE_CHECKPOINTS_STEPS)) +export SAMPLES_BETWEEN_EVAL=${__samples_btw_eval:-$SAMPLES_BETWEEN_EVAL} +export CPU_BIND_TYPE=${__cpu_bind_type:-$CPU_BIND_TYPE} +export EVAL_FILES_DIR=${__eval_files_dir:-$EVAL_FILES_DIR} +export SIGNALING_FROM_GRAPH=${__signaling_from_graph:-$SIGNALING_FROM_GRAPH} +export OUTPUT_DIR=${__output_dir:-$OUTPUT_DIR} +export PHASE1_CKPT=${__initial_checkpoint:-$INITIAL_CHECKPOINT} +export BERT_CONFIG_DIR=${__config_dir:-$BERT_CONFIG_DIR} +export HLS_TYPE=${__hls_type:-$HLS_TYPE} +export USE_DRAM_OUTPUT=${__use_dram_output:-"True"} +export USE_LIGHTWEIGHT_CHECKPOINT=${__light_weight:-$USE_LIGHTWEIGHT_CHECKPOINT} +export LIGHTWEIGHT_CHECKPOINT_IMPL=${__light_weight_impl:-"basic"} +export USE_ASYNC_CHECKPOINTING=${__async_checkpointing:-$USE_ASYNC_CHECKPOINTING} +export LOG_DIR=${__log_dir:-$LOG_DIR} +export DO_TRAIN=${__do_train:-$DO_TRAIN} +export DO_EVAL=${__do_eval:-$DO_EVAL} +export EXPERIMENTAL_SLACK=${__experimental_slack:-$EXPERIMENTAL_SLACK} +export NUM_DIST_EVAL_WORKERS=${__num_dist_eval_workers:-$NUM_DIST_EVAL_WORKERS} +export AUX_PARAMS=${__aux_scirpt_params:-$AUX_PARAMS} +export OPTIMIZER=${__optimizer:-$OPTIMIZER} + +if [[ "$HLS_TYPE" == "HLS2" ]]; then + export NUM_WORKERS_PER_HLS=8 +else + "============== WRONG HLS TYPE!! 
===============" + exit -1 +fi + +if [ "$PACKED_DATA" == "False" ]; then + export INPUT_FILES_DIR=${__input_files_dir:-$INPUT_FILES_DIR_UNPACKED} +else + export INPUT_FILES_DIR=${__input_files_dir:-$INPUT_FILES_DIR_PACKED} +fi + +if [ "$USE_HOROVOD" == "True" ]; then + export HOROVOD_STALL_CHECK_DISABLE=1 + echo HOROVOD_STALL_CHECK_DISABLE=$HOROVOD_STALL_CHECK_DISABLE + + # SAO:ON by default + export TF_DISABLE_SCOPED_ALLOCATOR=${TF_DISABLE_SCOPED_ALLOCATOR:-False} + echo TF_DISABLE_SCOPED_ALLOCATOR=$TF_DISABLE_SCOPED_ALLOCATOR +fi + +function getmulti_hls_ips() +{ + multi_hcl_ip="MULTI_HLS_IPS=" + hostsFile=$1 + firstHost=1 + hostCount=0 + + # iterate over non-empty and non-commented lines + for h in $(cat $hostsFile | sed '/^$/d' | grep -v '^#'); do + if [[ $firstHost -eq 1 ]]; then + firstHost=0 + else + multi_hcl_ip+="," + fi + multi_hcl_ip+=$h + hostCount=$((hostCount + 1)) + done + + echo "[getmulti_hls_ips] Host Count : $hostCount" + echo "[getmulti_hls_ips] Exporting : $multi_hcl_ip" + export $multi_hcl_ip +} + + +function run_per_ip() +{ + if [ -n "$OMPI_COMM_WORLD_SIZE" ]; then + print_error "Function run_per_ip is not meant to be ran from within an OpenMPI context. It is intended to invoke mpirun by itelf." + exit 1 + fi + _cmd="$@" + # Due to technical difficulties with the following solution, the _cmd stderr shall be redirected to stdout. + if [[ -z ${MULTI_HLS_IPS} ]]; then + echo "[launch_bert_hvd] MULTI_HLS_IPS undefined - maybe a missing /root/shared/hosts file?" + exit -1 + else + if [ -n "$MPI_TCP_INCLUDE" ]; then + _option_btl_tcp_if_include="--mca btl_tcp_if_include ${MPI_TCP_INCLUDE}" + else + _option_btl_tcp_if_include="" + fi + mpirun --allow-run-as-root \ + --mca plm_rsh_args -p${SSH_PORT} \ + ${_option_btl_tcp_if_include} \ + --tag-output \ + --merge-stderr-to-stdout \ + --prefix ${OMPI_PREFIX} \ + -H ${MULTI_HLS_IPS} \ + bash -c "`declare`; `declare -x`; ($_cmd 2>&1)" 2>/dev/null + fi +} + +export MULTI_HLS_IPS=localhost +if [[ -f ${HOST_FILE} ]]; then + getmulti_hls_ips ${HOST_FILE} +fi + +# Create recipes directory if it does not exist and adjust dirctory name +# if we are collecting traces - which require debug information +run_per_ip mkdir -p ${OUTPUT_DIR} # 2>/dev/null +run_per_ip rm -rf ${OUTPUT_DIR}/* # 2>/dev/null +run_per_ip mkdir -p ${LOG_DIR} +mkdir -p ${LOG_DIR} + +run_per_ip pip install -r $BASE_PATH/../TensorFlow/nlp/bert/requirements.txt + +#run_per_ip rm -rf /tmp/checkpoint /tmp/eval /tmp/events.out.tfevents.* /tmp/graph.pbtxt /tmp/model.ckpt-* +#run_per_ip rm -rf /tmp/rank_*/checkpoint /tmp/rank_*/eval /tmp/rank_*/events.out.tfevents.* /tmp/rank_*/graph.pbtxt /tmp/rank_*/model.ckpt-* + +function setup_libjemalloc() +{ + local libjemalloc_1_lib="libjemalloc.so.1" + local libjemalloc_2_lib="libjemalloc.so.2" + local is_v2_not_present=`LD_PRELOAD=${libjemalloc_2_lib} head -0 2>&1 > /dev/null` + + if [ -z "${is_v2_not_present}" ]; then + export LD_PRELOAD=${libjemalloc_2_lib}:$LD_PRELOAD + else + export LD_PRELOAD=${libjemalloc_1_lib}:$LD_PRELOAD + fi +} +run_per_ip setup_libjemalloc + +if [[ -z ${MULTI_HLS_IPS} ]]; then + echo "[launch_bert_hvd] MULTI_HLS_IPS undefined - maybe a missing /root/shared/hosts file?" + exit -1 +else + IFS=',' read -ra IPS <<< "$MULTI_HLS_IPS" + let MPI_NP=${#IPS[@]}*${NUM_WORKERS_PER_HLS} + export NUM_WORKERS_TOTAL=${NUM_WORKERS_TOTAL:-$MPI_NP} + + if [[ $NUM_WORKERS_TOTAL != $MPI_NP ]]; then + echo $NUM_WORKERS_TOTAL $MPI_NP + echo "=============== WRONG NUMBER_WORKERS_TOTAL!! 
===============" + exit -1 + fi + + echo NUM_WORKERS_TOTAL=$NUM_WORKERS_TOTAL + + function generate_mpi_hostfile() + { + echo "Generating MPI hostfile..." + local num_nodes=${2:-8} + local file_name="hostfile" + export MPI_HOSTFILE_PATH=$1/${file_name} + + rm -rf ${MPI_HOSTFILE_PATH} + echo "PATH: ${MPI_HOSTFILE_PATH}" + touch ${MPI_HOSTFILE_PATH} + + IFS=',' read -ra IPS <<< "$MULTI_HLS_IPS" + for i in "${IPS[@]}"; do + echo "$i slots=${num_nodes}" >> ${MPI_HOSTFILE_PATH} + done + echo "Config: " + cat ${MPI_HOSTFILE_PATH} + } + + generate_mpi_hostfile ${OUTPUT_DIR} ${NUM_WORKERS_PER_HLS} + + export testdate=`date +%Y-%m-%d` + export testtime=`date +%H%M%S` + export OUTPUT_DIR=${__output_dir:-/root/scratch/bert/bert_gaudi${NUM_WORKERS_TOTAL}_${testdate}_${testtime}} + + run_per_ip mkdir -p ${OUTPUT_DIR} + + run_per_ip rm -f $LOG_DIR/result_* + run_per_ip rm -f ${LOG_DIR}/tf_bert_pretraining_lamb.log + + LOGFILE=$LOG_DIR/tf_bert_pretraining_lamb.log + export TF_RECIPE_CACHE_PATH=/tmp/bert_pretrain/phase_2 + run_per_ip mkdir -p $TF_RECIPE_CACHE_PATH + + MPI_MAP_BY=socket + MPI_MAP_BY_PE=`lscpu | grep "^CPU(s):"| awk -v NUM=${NUM_WORKERS_PER_HLS} '{print int($2/NUM/2)}'` + if [[ "$CPU_BIND_TYPE" == "numa" || "$CPU_BIND_TYPE" == "none" ]]; then + MPIRUN_ARGS_MAP_BY_PE="-bind-to none" + else + MPIRUN_ARGS_MAP_BY_PE="--bind-to core --map-by $MPI_MAP_BY:PE=$MPI_MAP_BY_PE" + fi + + if [ -n "$MPI_TCP_INCLUDE" ]; then + _option_btl_tcp_if_include="--mca btl_tcp_if_include ${MPI_TCP_INCLUDE}" + else + _option_btl_tcp_if_include="" + fi + + TRAINING_COMMAND="mpirun --allow-run-as-root \ + --display-map \ + --report-bindings \ + --bind-to none \ + -np ${NUM_WORKERS_TOTAL}\ + --hostfile ${MPI_HOSTFILE_PATH} \ + --prefix ${OMPI_PREFIX} \ + --mca plm_rsh_args -p${SSH_PORT} \ + ${_option_btl_tcp_if_include} \ + --merge-stderr-to-stdout \ + --tag-output \ + --output-filename ${LOG_DIR}/bert_log \ + -x USE_HOROVOD=${USE_HOROVOD} \ + -x TF_MODULES_RELEASE_BUILD=/usr/lib/habanalabs/ \ + -x HABANA_LOGS=${HABANA_LOGS} \ + -x LEARNING_RATE=${LEARNING_RATE} \ + -x STOP_THRESHOLD=${STOP_THRESHOLD} \ + -x NUM_ACCUMULATION_STEPS=${NUM_ACCUMULATION_STEPS} \ + -x TRAIN_BATCH_SIZE=${TRAIN_BATCH_SIZE} \ + -x EVAL_BATCH_SIZE=${EVAL_BATCH_SIZE} \ + -x TRAIN_STEPS=${TRAIN_STEPS} \ + -x NUM_WORKERS_TOTAL=${NUM_WORKERS_TOTAL} \ + -x WARMUP_STEPS=${WARMUP_STEPS} \ + -x LAMB_BETA_1=${LAMB_BETA_1} \ + -x LAMB_BETA_2=${LAMB_BETA_2} \ + -x EPSILON=${EPSILON} \ + -x LAMB_WEIGHT_DECAY_RATE=${LAMB_WEIGHT_DECAY_RATE} \ + -x LAMB_LEARNING_RATE_DECAY_POLY_POWER=${LAMB_LEARNING_RATE_DECAY_POLY_POWER} \ + -x SAMPLES_BETWEEN_EVAL=${SAMPLES_BETWEEN_EVAL} \ + -x SAMPLES_START_EVAL=${SAMPLES_START_EVAL} \ + -x MAX_EVAL_STEPS=${MAX_EVAL_STEPS} \ + -x INPUT_FILES_DIR=${INPUT_FILES_DIR} \ + -x EVAL_FILES_DIR=${EVAL_FILES_DIR} \ + -x OUTPUT_DIR=${OUTPUT_DIR} \ + -x PHASE1_CKPT=${PHASE1_CKPT} \ + -x BERT_CONFIG_DIR=${BERT_CONFIG_DIR} \ + -x OPTIMIZE_DMA_ENGINES_ALLOCATION=${OPTIMIZE_DMA_ENGINES_ALLOCATION} \ + -x TF_CPU_RUNTIME_FALLBACK=${TF_CPU_RUNTIME_FALLBACK} \ + -x TF_HCCL_MEMORY_ALLOWANCE_MB=${TF_HCCL_MEMORY_ALLOWANCE_MB} \ + -x HABANA_INITIAL_WORKSPACE_SIZE_MB=${HABANA_INITIAL_WORKSPACE_SIZE_MB} \ + -x HLS_TYPE=${HLS_TYPE} \ + -x MPI_TCP_INCLUDE=${MPI_TCP_INCLUDE} \ + -x SAVE_CHECKPOINTS_STEPS=${SAVE_CHECKPOINTS_STEPS} \ + -x PACKED_DATA=${PACKED_DATA} \ + -x TESTDATE=${testdate} \ + -x TESTTIME=${testtime} \ + -x CPU_BIND_TYPE=${CPU_BIND_TYPE} \ + ${MPIRUN_ARGS_MAP_BY_PE} \ + -x NUM_WORKERS_PER_HLS=${NUM_WORKERS_PER_HLS} \ + -x 
USE_DRAM_OUTPUT=${USE_DRAM_OUTPUT} \ + -x USE_LIGHTWEIGHT_CHECKPOINT=${USE_LIGHTWEIGHT_CHECKPOINT} \ + -x LIGHTWEIGHT_CHECKPOINT_IMPL=${LIGHTWEIGHT_CHECKPOINT_IMPL} \ + -x USE_ASYNC_CHECKPOINTING=${USE_ASYNC_CHECKPOINTING} \ + -x LOG_DIR=${LOG_DIR} \ + -x TF_RECIPE_CACHE_PATH \ + -x DO_TRAIN=${DO_TRAIN} \ + -x DO_EVAL=${DO_EVAL} \ + -x EXPERIMENTAL_SLACK=${EXPERIMENTAL_SLACK} \ + -x NUM_DIST_EVAL_WORKERS=${NUM_DIST_EVAL_WORKERS} \ + -x WARMUP_STEPS=${WARMUP_STEPS} + -x AUX_PARAMS=${AUX_PARAMS} \ + -x TF_ENABLE_DYNAMIC_SHAPES=${TF_ENABLE_DYNAMIC_SHAPES} \ + -x OPTIMIZER=${OPTIMIZER} \ + -x SIGNALING_FROM_GRAPH=${SIGNALING_FROM_GRAPH} \ + ${BASE_PATH}/run.sh" + + echo "TRAINING COMMAND = ${TRAINING_COMMAND}" + printf "[launch_bert_hvd] Starting training...\n\n" + time $TRAINING_COMMAND |& tee -a $LOGFILE +fi +run_per_ip rm -rf $OUTPUT_DIR/*/model.ckpt-* +rm -rf $BASE_PATH/log +cp /root/build_log.csv ${OUTPUT_DIR}/ +cp ${MPI_HOSTFILE_PATH} ${OUTPUT_DIR}/ +cp -r $LOG_DIR/bert_log $BASE_PATH/log +cp $TF_RECIPE_CACHE_PATH/tf_bert_pretraining* ${OUTPUT_DIR}/ +chmod -R 777 ${OUTPUT_DIR} +exit $exit_code diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/run.sh b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/run.sh new file mode 100644 index 0000000000000000000000000000000000000000..ddd5e6ca31648b8ac7353a463fc09f37c19ac613 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/HLS-Gaudi2-TF/run.sh @@ -0,0 +1,164 @@ +#! /bin/bash + +#set -x +############################################################################### +# Copyright (C) 2020-2023 Habana Labs, Ltd. an Intel Company +# +############################################################################### + +SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) +export BASE_PATH="$( cd "$(dirname "$(readlink -f ${SCRIPT_DIR}/defaults.cfg)" )" && pwd)" +export PYTHONPATH=${BASE_PATH}:${BASE_PATH}/../TensorFlow/common + +PT_VERSION=`python3 -c 'import sys; print(f"{sys.version_info[0]}.{sys.version_info[1]}")'` +TF_VERSION=`python3 -c "import tensorflow as tf; print(tf.__version__.replace('.', '_'))"` +PATCH_PATH=/usr/local/lib/python${PT_VERSION}/dist-packages/habana_frameworks/tensorflow/tf${TF_VERSION}/lib/habanalabs +export PYTHONPATH=${PATCH_PATH}:${PYTHONPATH} + +TRAIN_BATCH_SIZE=${TRAIN_BATCH_SIZE:-7} +EVAL_BATCH_SIZE=${EVAL_BATCH_SIZE:-125} +LEARNING_RATE=${LEARNING_RATE:-5e-5} +PRECISION=${PRECISION:-fp32} +WARMUP_STEPS=${WARMUP_STEPS:-0} +TRAIN_STEPS=${TRAIN_STEPS:-8103} +SAVE_CHECKPOINTS_STEPS=${SAVE_CHECKPOINTS_STEPS:-335} +NUM_ACCUMULATION_STEPS=${NUM_ACCUMULATION_STEPS:-4} +SAMPLES_BETWEEN_EVAL=${SAMPLES_BETWEEN_EVAL:-150080} +STOP_THRESHOLD=${STOP_THRESHOLD:-0.720} +SAMPLES_START_EVAL=${SAMPLES_START_EVAL:-3000000} +MAX_EVAL_STEPS=${MAX_EVAL_STEPS:-0} +IS_DIST_EVAL_ENABLED=${IS_DIST_EVAL_ENABLED:-false} +MAX_SEQ_LENGTH=${MAX_SEQ_LENGTH:-512} +MAX_PRED_PER_SEQ=${MAX_PRED_PER_SEQ:-76} +FAST_PERF_ONLY=${FAST_PERF_ONLY:-0} +PACKED_DATA=${PACKED_DATA:-False} +TESTDATE=${TESTDATE} +TESTTIME=${TESTTIME} +LAMB_BETA_1=${LAMB_BETA_1:-0.9} +LAMB_BETA_2=${LAMB_BETA_2:-0.999} +EPSILON=${EPSILON:-1e-6} +LAMB_WEIGHT_DECAY_RATE=${LAMB_WEIGHT_DECAY_RATE:-0.01} +LAMB_LEARNING_RATE_DECAY_POLY_POWER=${LAMB_LEARNING_RATE_DECAY_POLY_POWER:-1.0} +NUM_WORKERS_PER_HLS=${NUM_WORKERS_PER_HLS:-4} +DO_TRAIN=${DO_TRAIN:-True} +DO_EVAL=${DO_EVAL:-True} 
+EXPERIMENTAL_SLACK=${EXPERIMENTAL_SLACK:-True} +NUM_DIST_EVAL_WORKERS=${NUM_DIST_EVAL_WORKERS:-0} +OPTIMIZER=${OPTIMIZER:-'lamb'} + +export TF_BF16_CONVERSION=${BASE_PATH}/../TensorFlow/common/bf16_config/bert.json +export USE_LIGHTWEIGHT_CHECKPOINT=${USE_LIGHTWEIGHT_CHECKPOINT:-True} +export LIGHTWEIGHT_CHECKPOINT_IMPL=${LIGHTWEIGHT_CHECKPOINT_IMPL:-"basic"} +export USE_ASYNC_CHECKPOINTING=${USE_ASYNC_CHECKPOINTING:-False} +export BERT_CONFIG_FILE=${BERT_CONFIG_FILE:-${BERT_CONFIG_DIR}/bert_config.json} + +if [[ $SIGNALING_FROM_GRAPH -eq 1 ]]; then + export TF_DISABLE_SCOPED_ALLOCATOR=True + export HOROVOD_FUSION_THRESHOLD=0 + export TF_USE_SIGNALING_FROM_ENCAP_OP=1 +else + export TF_USE_SIGNALING_FROM_ENCAP_OP=0 +fi + +# Currently sharded LAMB works only when ScopedAllocator is disabled and loop unrolling is False +if [ $OPTIMIZER == "sharded_lamb" ]; then + export TF_DISABLE_SCOPED_ALLOCATOR=True + AUX_PARAMS="${AUX_PARAMS} --loop_unrolling_for_train_op=False" +fi + +# Under the hood, AMP (Arithmetic Mixed Precision) training is applied via TF_BF16_CONVERSION +# default precision is fp32. +precision="--noamp" + +USE_HOROVOD=${USE_HOROVOD:-"False"} +if [ $USE_HOROVOD == "True" ]; then + horovod="--horovod --allreduce_post_accumulation=True" + IS_DIST_EVAL_ENABLED="True" +else + horovod="" +fi + +#PHASE 1 Config +export PHASE1_CKPT=${PHASE1_CKPT:-/root/datasets/bert_pretraining/MLPerf_BERT_checkpoint/model.ckpt-28252} +export INPUT_FILES_DIR=${INPUT_FILES_DIR:-/root/datasets/bert_pretraining/training} +export EVAL_FILES_DIR=${EVAL_FILES_DIR:-/root/datasets/bert_pretraining/evaluation} + +#Generate Host Folder +if [ $USE_DRAM_OUTPUT == "True" ]; then + host=$(hostname) + if [ "$OMPI_COMM_WORLD_LOCAL_RANK" == "0" ]; then + mkdir -p /mnt/dramfs + mount -t tmpfs -o size=200g tmpfs /mnt/dramfs + fi + export OUTPUT_DIR=/mnt/dramfs/bert_gaudi${NUM_WORKERS_TOTAL}_${TESTDATE}_${TESTTIME}/${host} + mkdir -p $OUTPUT_DIR +fi + +# clear cache +if [[ $OMPI_COMM_WORLD_LOCAL_RANK -eq 0 ]]; then + PROC_FS=${PROC_FS:-"/proc"} + sync && echo 3 > $PROC_FS/sys/vm/drop_caches +fi + +if [ $PACKED_DATA == "False" ]; then + packing_arg="" +else + packing_arg="--enable_packed_data_mode --avg_seq_per_pack=2" +fi + +AUX_PARAMS=$(echo ${AUX_PARAMS} | sed s/:/\ /g) + +enable_device_warmup=True + +TRAIN_COMMAND="python3 ${BASE_PATH}/../TensorFlow/nlp/bert/run_pretraining.py \ + --input_files_dir=$INPUT_FILES_DIR \ + --init_checkpoint=$PHASE1_CKPT \ + --eval_files_dir=$EVAL_FILES_DIR\ + --output_dir=$OUTPUT_DIR \ + --bert_config_file=$BERT_CONFIG_FILE \ + --do_train=$DO_TRAIN \ + --do_eval=$DO_EVAL \ + --experimental_slack=$EXPERIMENTAL_SLACK \ + --is_dist_eval_enabled=$IS_DIST_EVAL_ENABLED \ + --train_batch_size=$TRAIN_BATCH_SIZE \ + --eval_batch_size=$EVAL_BATCH_SIZE \ + --max_eval_steps=$MAX_EVAL_STEPS \ + --max_seq_length=$MAX_SEQ_LENGTH \ + --max_predictions_per_seq=$MAX_PRED_PER_SEQ \ + --num_train_steps=$TRAIN_STEPS \ + --num_accumulation_steps=$NUM_ACCUMULATION_STEPS \ + --num_warmup_steps=$WARMUP_STEPS \ + --save_checkpoints_steps=$SAVE_CHECKPOINTS_STEPS \ + --learning_rate=$LEARNING_RATE \ + $horovod \ + $precision \ + $packing_arg \ + --enable_device_warmup=$enable_device_warmup \ + --samples_between_eval=$SAMPLES_BETWEEN_EVAL \ + --stop_threshold=$STOP_THRESHOLD \ + --samples_start_eval=$SAMPLES_START_EVAL \ + --beta_1=$LAMB_BETA_1 \ + --beta_2=$LAMB_BETA_2 \ + --epsilon=$EPSILON \ + --weight_decay_rate=$LAMB_WEIGHT_DECAY_RATE \ + --power=$LAMB_LEARNING_RATE_DECAY_POLY_POWER \ + --enable_habana_backend \ + 
--dllog_path=$LOG_DIR/bert_dllog.json \ + --use_lightweight_checkpoint=$USE_LIGHTWEIGHT_CHECKPOINT \ + --lightweight_checkpoint_impl=$LIGHTWEIGHT_CHECKPOINT_IMPL \ + --use_async_checkpointing=$USE_ASYNC_CHECKPOINTING \ + --num_dist_eval_workers=$NUM_DIST_EVAL_WORKERS \ + --optimizer_type=$OPTIMIZER \ + ${AUX_PARAMS} +" + +LD_PRELOAD=${PRELOAD_PATH} ${TRAIN_COMMAND} + +if [[ $OMPI_COMM_WORLD_LOCAL_RANK == "0" ]]; then + rm -rf $OUTPUT_DIR/*/model.ckpt-* + rm -rf $OUTPUT_DIR/*/checkpoint + if [[ $USE_DRAM_OUTPUT == "True" ]]; then + cp -r $LOG_DIR/result_* /root/scratch/bert/bert_gaudi${NUM_WORKERS_TOTAL}_${TESTDATE}_${TESTTIME} + rm -rf /mnt/dramfs/bert_gaudi${NUM_WORKERS_TOTAL}_${TESTDATE}_${TESTTIME} + fi +fi diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/chop_hdf5_files.py b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/chop_hdf5_files.py new file mode 100644 index 0000000000000000000000000000000000000000..58f6407a592376cd14cbce61ec8e3ad609047796 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/chop_hdf5_files.py @@ -0,0 +1,150 @@ +# Copyright (c) 2019-2022 NVIDIA CORPORATION. All rights reserved. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import h5py +import multiprocessing +import numpy as np +from os import path, makedirs +from tqdm import tqdm +import argparse +import logging + +parser = argparse.ArgumentParser( + description="Training data sharding for BERT.") +parser.add_argument( + '--input_hdf5_dir', + type=str, + default='hdf5', + help='Input hdf5_file path') +parser.add_argument( + '--output_hdf5_dir', + type=str, + default='', + help='Output hdf5_file path') +parser.add_argument( + '--num_shards', + type=int, + default=2048, + help='Number of output shards (default 2048)') +parser.add_argument( + '--max_seq_length', + type=int, + default=512, + help='The maximum number of tokens within a sequence. (default 512)') +parser.add_argument( + '--max_predictions_per_seq', + type=int, + default=76, + help='The maximum number of predictions within a sequence. (default 76)') +args = parser.parse_args() + +max_seq_length = args.max_seq_length +max_predictions_per_seq = args.max_predictions_per_seq +n_output_shards = args.num_shards +input_path = args.input_hdf5_dir +logging.basicConfig(level=logging.INFO) + +hdf5_compression_method = None + +input_files = sorted(glob.glob(input_path + '/part-00???-of-00500.hdf5', recursive=False)) +logging.info('n_input_shards = {}'.format(len(input_files))) +logging.info('n_output_shards = {}'.format(n_output_shards)) + +output_shards_dir = path.join(args.output_hdf5_dir,'hdf5_{}_shards_uncompressed'.format(n_output_shards)) +try: + makedirs(output_shards_dir) +except OSError as error: + logging.info('Output directory : {} already exists. 
Overwritting ...'.format(output_shards_dir)) + +ofile_prefix = path.join(output_shards_dir, 'part_') +ofile_suffix = '_of_{:05d}.hdf5'.format(n_output_shards) + + +# First pass over data to get sample count (read only the smallest array to get count) +n_samples = 0 +for ifile in tqdm(input_files, total=len(input_files)): + h5_ifile = h5py.File(ifile, 'r') + n_samples += h5_ifile['next_sentence_labels'].shape[0] + h5_ifile.close() + +# Find a "nominal" number of samples per shard (calculated to always go over by one shard size) +# Find excess samples in last shard and distribute removal of excess over first "N" shards (could be done over last, but it doesn't matter and math is easier this way) +# (since 0 <= excess < nominal_shard_size, the max imbalance will be 1 sample to minimize the straggler effect) +n_sample_per_ofile_nominal = (n_samples + n_output_shards - 1) // n_output_shards +n_excess = n_output_shards * n_sample_per_ofile_nominal - n_samples # Always a positive number +logging.info('Total number of samples: {}. Sample per shard {}/{}'.format(n_samples, n_sample_per_ofile_nominal-1, n_sample_per_ofile_nominal)) + +logging.info('creating {} output file handles. This could take a while.'.format(n_output_shards)) +ofile_handles = [h5py.File('{}{:05d}{}'.format(ofile_prefix, shard, ofile_suffix), 'w') for shard in range(n_output_shards)] + +ofile_idx = 0 # which output file +ofile_entry_idx = 0 # index into an individual data element of an output file +ifile_entry_idx = 0 + +n_samples_in_this_shard = n_sample_per_ofile_nominal - 1 +o_input_ids = np.ndarray((n_samples_in_this_shard, max_seq_length)) +o_input_masks = np.ndarray((n_samples_in_this_shard, max_seq_length)) +o_segment_ids = np.ndarray((n_samples_in_this_shard, max_seq_length)) +o_masked_lm_positions = np.ndarray((n_samples_in_this_shard, max_predictions_per_seq)) +o_masked_lm_ids = np.ndarray((n_samples_in_this_shard, max_predictions_per_seq)) +o_next_sentence_labels = np.ndarray((n_samples_in_this_shard)) + +for ifile in tqdm(input_files, total=len(input_files)): + h5_ifile = h5py.File(ifile, 'r') + + ifile_entry_idx = 0 + f_input_ids = h5_ifile['input_ids'][:] + f_input_masks = h5_ifile['input_mask'][:] + f_segment_ids = h5_ifile['segment_ids'][:] + f_masked_lm_positions = h5_ifile['masked_lm_positions'][:] + f_masked_lm_ids = h5_ifile['masked_lm_ids'][:] + f_next_sentence_labels = h5_ifile['next_sentence_labels'][:] + + h5_ifile.close() + + # This could be vectorized but keeping it simple due to lack of time + while ifile_entry_idx < f_input_ids.shape[0]: + if ofile_entry_idx == n_samples_in_this_shard: + ofile_handles[ofile_idx].create_dataset("input_ids", data=o_input_ids, dtype='i2', compression=hdf5_compression_method) + ofile_handles[ofile_idx].create_dataset("input_mask", data=o_input_masks, dtype='i1', compression=hdf5_compression_method) + ofile_handles[ofile_idx].create_dataset("segment_ids", data=o_segment_ids, dtype='i1', compression=hdf5_compression_method) + ofile_handles[ofile_idx].create_dataset("masked_lm_positions", data=o_masked_lm_positions, dtype='i2', compression=hdf5_compression_method) + ofile_handles[ofile_idx].create_dataset("masked_lm_ids", data=o_masked_lm_ids, dtype='i2', compression=hdf5_compression_method) + ofile_handles[ofile_idx].create_dataset("next_sentence_labels", data=o_next_sentence_labels, dtype='i1', compression=hdf5_compression_method) + ofile_handles[ofile_idx].flush() + ofile_handles[ofile_idx].close() + + ofile_entry_idx = 0 + ofile_idx += 1 + + n_samples_in_this_shard = 
n_sample_per_ofile_nominal + if ofile_entry_idx < n_excess: + n_samples_in_this_shard -= 1 + + o_input_ids = np.ndarray((n_samples_in_this_shard, max_seq_length)) + o_input_masks = np.ndarray((n_samples_in_this_shard, max_seq_length)) + o_segment_ids = np.ndarray((n_samples_in_this_shard, max_seq_length)) + o_masked_lm_positions = np.ndarray((n_samples_in_this_shard, max_predictions_per_seq)) + o_masked_lm_ids = np.ndarray((n_samples_in_this_shard, max_predictions_per_seq)) + o_next_sentence_labels = np.ndarray((n_samples_in_this_shard)) + + o_input_ids[ofile_entry_idx] = f_input_ids[ifile_entry_idx] + o_input_masks[ofile_entry_idx] = f_input_masks[ifile_entry_idx] + o_segment_ids[ofile_entry_idx] = f_segment_ids[ifile_entry_idx] + o_masked_lm_positions[ofile_entry_idx] = f_masked_lm_positions[ifile_entry_idx] + o_masked_lm_ids[ofile_entry_idx] = f_masked_lm_ids[ifile_entry_idx] + o_next_sentence_labels[ofile_entry_idx] = f_next_sentence_labels[ifile_entry_idx] + ofile_entry_idx += 1 + + ifile_entry_idx += 1 \ No newline at end of file diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/create_pretraining_data.py b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/create_pretraining_data.py new file mode 100644 index 0000000000000000000000000000000000000000..2d8f1d8c39aa86f3dc1a18f6e0ced4c7e1741345 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/create_pretraining_data.py @@ -0,0 +1,455 @@ +# coding=utf-8 +# Copyright (c) 2019-2022 NVIDIA CORPORATION. All rights reserved. +# Copyright 2020 MLBenchmark Group. All rights reserved. + +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Create masked LM/next sentence masked_lm TF examples for BERT.""" + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import collections +import random +import tokenization +import tensorflow as tf + +import h5py +import numpy as np + +hdf5_compression_method = None + +#flags = tf.flags +flags = tf.compat.v1.flags + +FLAGS = flags.FLAGS + +flags.DEFINE_string("input_file", None, + "Input raw text file (or comma-separated list of files).") + +flags.DEFINE_string( + "output_file", None, + "Output TF example file (or comma-separated list of files).") + +flags.DEFINE_string("vocab_file", None, + "The vocabulary file that the BERT model was trained on.") + +flags.DEFINE_bool( + "do_lower_case", True, + "Whether to lower case the input text. 
Should be True for uncased " + "models and False for cased models.") + +flags.DEFINE_integer("max_seq_length", 128, "Maximum sequence length.") + +flags.DEFINE_integer("max_predictions_per_seq", 20, + "Maximum number of masked LM predictions per sequence.") + +flags.DEFINE_integer("random_seed", 12345, "Random seed for data generation.") + +flags.DEFINE_integer( + "dupe_factor", 10, + "Number of times to duplicate the input data (with different masks).") + +flags.DEFINE_float("masked_lm_prob", 0.15, "Masked LM probability.") + +flags.DEFINE_float( + "short_seq_prob", 0.1, + "Probability of creating sequences which are shorter than the " + "maximum length.") + + +class TrainingInstance(object): + """A single training instance (sentence pair).""" + + def __init__(self, tokens, segment_ids, masked_lm_positions, masked_lm_labels, + is_random_next): + self.tokens = tokens + self.segment_ids = segment_ids + self.is_random_next = is_random_next + self.masked_lm_positions = masked_lm_positions + self.masked_lm_labels = masked_lm_labels + + def __str__(self): + s = "" + s += "tokens: %s\n" % (" ".join( + [tokenization.printable_text(x) for x in self.tokens])) + s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids])) + s += "is_random_next: %s\n" % self.is_random_next + s += "masked_lm_positions: %s\n" % (" ".join( + [str(x) for x in self.masked_lm_positions])) + s += "masked_lm_labels: %s\n" % (" ".join( + [tokenization.printable_text(x) for x in self.masked_lm_labels])) + s += "\n" + return s + + def __repr__(self): + return self.__str__() + + +def write_instance_to_example_files(instances, tokenizer, max_seq_length, + max_predictions_per_seq, output_files): + """Create TF example files from `TrainingInstance`s.""" + writers = [] + h5_writers = [] + + expected_instances_per_file = len(instances) // len(output_files) + 500 # Over-allocation to avoid resizing + for output_file in output_files: + h5_writers.append({ + 'handle' : h5py.File(output_file + ".hdf5", 'w'), + 'input_ids' : np.zeros([expected_instances_per_file, max_seq_length], dtype="int32"), + 'input_mask' : np.zeros([expected_instances_per_file, max_seq_length], dtype="int32"), + 'segment_ids' : np.zeros([expected_instances_per_file, max_seq_length], dtype="int32"), + 'masked_lm_positions' : np.zeros([expected_instances_per_file, max_predictions_per_seq], dtype="int32"), + 'masked_lm_ids' : np.zeros([expected_instances_per_file, max_predictions_per_seq], dtype="int32"), + 'next_sentence_labels' : np.zeros(expected_instances_per_file, dtype="int32"), + 'len' : 0 }) + + writer_index = 0 + + total_written = 0 + + features_h5 = collections.OrderedDict() + + for (inst_index, instance) in enumerate(instances): + input_ids = tokenizer.convert_tokens_to_ids(instance.tokens) + input_mask = [1] * len(input_ids) + segment_ids = list(instance.segment_ids) + assert len(input_ids) <= max_seq_length + + while len(input_ids) < max_seq_length: + input_ids.append(0) + input_mask.append(0) + segment_ids.append(0) + + assert len(input_ids) == max_seq_length + assert len(input_mask) == max_seq_length + assert len(segment_ids) == max_seq_length + + masked_lm_positions = list(instance.masked_lm_positions) + masked_lm_ids = tokenizer.convert_tokens_to_ids(instance.masked_lm_labels) + masked_lm_weights = [1.0] * len(masked_lm_ids) + + while len(masked_lm_positions) < max_predictions_per_seq: + masked_lm_positions.append(0) + masked_lm_ids.append(0) + masked_lm_weights.append(0.0) + + next_sentence_label = 1 if instance.is_random_next else 0 + + 
h5_writers[writer_index]['input_ids'][inst_index] = input_ids + h5_writers[writer_index]['input_mask'][inst_index] = input_mask + h5_writers[writer_index]['segment_ids'][inst_index] = segment_ids + h5_writers[writer_index]['masked_lm_positions'][inst_index] = masked_lm_positions + h5_writers[writer_index]['masked_lm_ids'][inst_index] = masked_lm_ids + h5_writers[writer_index]['next_sentence_labels'][inst_index] = next_sentence_label + h5_writers[writer_index]['len'] += 1 + + writer_index = (writer_index + 1) % len(h5_writers) + + total_written += 1 + + if inst_index < 20: + tf.compat.v1.logging.info("*** Example ***") + tf.compat.v1.logging.info("tokens: %s" % " ".join( + [tokenization.printable_text(x) for x in instance.tokens])) + + print("saving data") + for h5_writer in h5_writers: + my_size = h5_writer['len'] + h5_writer['handle'].create_dataset('input_ids', data=h5_writer['input_ids'][:my_size], dtype='i2', compression=hdf5_compression_method) + h5_writer['handle'].create_dataset('input_mask', data=h5_writer['input_mask'][:my_size], dtype='i1', compression=hdf5_compression_method) + h5_writer['handle'].create_dataset('segment_ids', data=h5_writer['segment_ids'][:my_size], dtype='i1', compression=hdf5_compression_method) + h5_writer['handle'].create_dataset('masked_lm_positions', data=h5_writer['masked_lm_positions'][:my_size], dtype='i2', compression=hdf5_compression_method) + h5_writer['handle'].create_dataset('masked_lm_ids', data=h5_writer['masked_lm_ids'][:my_size], dtype='i2', compression=hdf5_compression_method) + h5_writer['handle'].create_dataset('next_sentence_labels', data=h5_writer['next_sentence_labels'][:my_size], dtype='i1', compression=hdf5_compression_method) + h5_writer['handle'].flush() + h5_writer['handle'].close() + + tf.compat.v1.logging.info("Wrote %d total instances", total_written) + + +def create_int_feature(values): + feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) + return feature + +def create_float_feature(values): + feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values))) + return feature + +def create_training_instances(input_files, tokenizer, max_seq_length, + dupe_factor, short_seq_prob, masked_lm_prob, + max_predictions_per_seq, rng): + """Create `TrainingInstance`s from raw text.""" + all_documents = [[]] + + # Input file format: + # (1) One sentence per line. These should ideally be actual sentences, not + # entire paragraphs or arbitrary spans of text. (Because we use the + # sentence boundaries for the "next sentence prediction" task). + # (2) Blank lines between documents. Document boundaries are needed so + # that the "next sentence prediction" task doesn't span between documents. 
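+  # An empty line closes the current document (a new empty document is appended);
+  # every non-empty line is tokenized and added to the document currently being built.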
+ for input_file in input_files: + with tf.compat.v1.gfile.GFile(input_file, "r") as reader: + while True: + line = tokenization.convert_to_unicode(reader.readline()) + if not line: + break + line = line.strip() + + # Empty lines are used as document delimiters + if not line: + all_documents.append([]) + tokens = tokenizer.tokenize(line) + if tokens: + all_documents[-1].append(tokens) + + # Remove empty documents + all_documents = [x for x in all_documents if x] + rng.shuffle(all_documents) + + vocab_words = list(tokenizer.vocab.keys()) + instances = [] + for _ in range(dupe_factor): + for document_index in range(len(all_documents)): + instances.extend( + create_instances_from_document( + all_documents, document_index, max_seq_length, short_seq_prob, + masked_lm_prob, max_predictions_per_seq, vocab_words, rng)) + + rng.shuffle(instances) + return instances + + +def create_instances_from_document( + all_documents, document_index, max_seq_length, short_seq_prob, + masked_lm_prob, max_predictions_per_seq, vocab_words, rng): + """Creates `TrainingInstance`s for a single document.""" + document = all_documents[document_index] + + # Account for [CLS], [SEP], [SEP] + max_num_tokens = max_seq_length - 3 + + # We *usually* want to fill up the entire sequence since we are padding + # to `max_seq_length` anyways, so short sequences are generally wasted + # computation. However, we *sometimes* + # (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter + # sequences to minimize the mismatch between pre-training and fine-tuning. + # The `target_seq_length` is just a rough target however, whereas + # `max_seq_length` is a hard limit. + target_seq_length = max_num_tokens + if rng.random() < short_seq_prob: + target_seq_length = rng.randint(2, max_num_tokens) + + # We DON'T just concatenate all of the tokens from a document into a long + # sequence and choose an arbitrary split point because this would make the + # next sentence prediction task too easy. Instead, we split the input into + # segments "A" and "B" based on the actual "sentences" provided by the user + # input. + instances = [] + current_chunk = [] + current_length = 0 + i = 0 + while i < len(document): + segment = document[i] + current_chunk.append(segment) + current_length += len(segment) + if i == len(document) - 1 or current_length >= target_seq_length: + if current_chunk: + # `a_end` is how many segments from `current_chunk` go into the `A` + # (first) sentence. + a_end = 1 + if len(current_chunk) >= 2: + a_end = rng.randint(1, len(current_chunk) - 1) + + tokens_a = [] + for j in range(a_end): + tokens_a.extend(current_chunk[j]) + + tokens_b = [] + # Random next + is_random_next = False + if len(current_chunk) == 1 or rng.random() < 0.5: + is_random_next = True + target_b_length = target_seq_length - len(tokens_a) + + # This should rarely go for more than one iteration for large + # corpora. However, just to be careful, we try to make sure that + # the random document is not the same as the document + # we're processing. + for _ in range(10): + random_document_index = rng.randint(0, len(all_documents) - 1) + if random_document_index != document_index: + break + + random_document = all_documents[random_document_index] + random_start = rng.randint(0, len(random_document) - 1) + for j in range(random_start, len(random_document)): + tokens_b.extend(random_document[j]) + if len(tokens_b) >= target_b_length: + break + # We didn't actually use these segments so we "put them back" so + # they don't go to waste. 
+ num_unused_segments = len(current_chunk) - a_end + i -= num_unused_segments + # Actual next + else: + is_random_next = False + for j in range(a_end, len(current_chunk)): + tokens_b.extend(current_chunk[j]) + truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng) + + assert len(tokens_a) >= 1 + assert len(tokens_b) >= 1 + + tokens = [] + segment_ids = [] + tokens.append("[CLS]") + segment_ids.append(0) + for token in tokens_a: + tokens.append(token) + segment_ids.append(0) + + tokens.append("[SEP]") + segment_ids.append(0) + + for token in tokens_b: + tokens.append(token) + segment_ids.append(1) + tokens.append("[SEP]") + segment_ids.append(1) + + (tokens, masked_lm_positions, + masked_lm_labels) = create_masked_lm_predictions( + tokens, masked_lm_prob, max_predictions_per_seq, vocab_words, rng) + instance = TrainingInstance( + tokens=tokens, + segment_ids=segment_ids, + is_random_next=is_random_next, + masked_lm_positions=masked_lm_positions, + masked_lm_labels=masked_lm_labels) + instances.append(instance) + current_chunk = [] + current_length = 0 + i += 1 + + return instances + +MaskedLmInstance = collections.namedtuple("MaskedLmInstance", + ["index", "label"]) + +def create_masked_lm_predictions(tokens, masked_lm_prob, + max_predictions_per_seq, vocab_words, rng): + """Creates the predictions for the masked LM objective.""" + + cand_indexes = [] + for (i, token) in enumerate(tokens): + if token == "[CLS]" or token == "[SEP]": + continue + cand_indexes.append(i) + + rng.shuffle(cand_indexes) + + output_tokens = list(tokens) + + num_to_predict = min(max_predictions_per_seq, + max(1, int(round(len(tokens) * masked_lm_prob)))) + + masked_lms = [] + covered_indexes = set() + for index in cand_indexes: + if len(masked_lms) >= num_to_predict: + break + if index in covered_indexes: + continue + covered_indexes.add(index) + + masked_token = None + # 80% of the time, replace with [MASK] + if rng.random() < 0.8: + masked_token = "[MASK]" + else: + # 10% of the time, keep original + if rng.random() < 0.5: + masked_token = tokens[index] + # 10% of the time, replace with random word + else: + masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)] + + output_tokens[index] = masked_token + + masked_lms.append(MaskedLmInstance(index=index, label=tokens[index])) + + masked_lms = sorted(masked_lms, key=lambda x: x.index) + + masked_lm_positions = [] + masked_lm_labels = [] + for p in masked_lms: + masked_lm_positions.append(p.index) + masked_lm_labels.append(p.label) + + return (output_tokens, masked_lm_positions, masked_lm_labels) + + +def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng): + """Truncates a pair of sequences to a maximum sequence length.""" + while True: + total_length = len(tokens_a) + len(tokens_b) + if total_length <= max_num_tokens: + break + + trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b + assert len(trunc_tokens) >= 1 + + # We want to sometimes truncate from the front and sometimes from the + # back to add more randomness and avoid biases. 
+ if rng.random() < 0.5: + del trunc_tokens[0] + else: + trunc_tokens.pop() + + +def main(_): + tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) + + tokenizer = tokenization.FullTokenizer( + vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case) + + input_files = [] + for input_pattern in FLAGS.input_file.split(","): + input_files.extend(tf.compat.v1.gfile.Glob(input_pattern)) + + tf.compat.v1.logging.info("*** Reading from input files ***") + for input_file in input_files: + tf.compat.v1.logging.info(" %s", input_file) + + rng = random.Random(FLAGS.random_seed) + instances = create_training_instances( + input_files, tokenizer, FLAGS.max_seq_length, FLAGS.dupe_factor, + FLAGS.short_seq_prob, FLAGS.masked_lm_prob, FLAGS.max_predictions_per_seq, + rng) + + output_files = FLAGS.output_file.split(",") + tf.compat.v1.logging.info("*** Writing to output files ***") + for output_file in output_files: + tf.compat.v1.logging.info(" %s", output_file) + + write_instance_to_example_files(instances, tokenizer, FLAGS.max_seq_length, + FLAGS.max_predictions_per_seq, output_files) + + +if __name__ == "__main__": + flags.mark_flag_as_required("input_file") + flags.mark_flag_as_required("output_file") + flags.mark_flag_as_required("vocab_file") + tf.compat.v1.app.run() diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/create_pretraining_data_wrapper.sh b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/create_pretraining_data_wrapper.sh new file mode 100644 index 0000000000000000000000000000000000000000..69328233ec2c6be8c482f12e53595de7885d156b --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/create_pretraining_data_wrapper.sh @@ -0,0 +1,30 @@ +#!/bin/bash +# Copyright (c) 2019-2022 NVIDIA CORPORATION. All rights reserved. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
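+# Per-shard wrapper around create_pretraining_data.py: takes a raw input file ($1),
+# an output directory ($2) and a vocabulary file ($3), and generates the HDF5 shard
+# with the fixed MLPerf BERT settings below (max_seq_length=512,
+# max_predictions_per_seq=76, masked_lm_prob=0.15, dupe_factor=10).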
+ +SCRIPT_DIR="$(dirname "$(readlink -f "$0")")" + +INPUT=${1} +OUTPUT=${2}/$(basename $INPUT) +VOCAB=${3} + +python3 ${SCRIPT_DIR}/create_pretraining_data.py \ + --input_file=${INPUT} \ + --output_file=${OUTPUT} \ + --vocab_file=${VOCAB} \ + --do_lower_case=True \ + --max_seq_length=512 \ + --max_predictions_per_seq=76 \ + --masked_lm_prob=0.15 \ + --random_seed=12345 \ + --dupe_factor=10 \ No newline at end of file diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/pick_eval_samples.py b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/pick_eval_samples.py new file mode 100644 index 0000000000000000000000000000000000000000..b4cab2ed2ba886f368116402df054bfc23112456 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/pick_eval_samples.py @@ -0,0 +1,83 @@ +"""Script for picking certain number of samples. +""" + +import argparse +import time +import logging +import collections +import h5py +import numpy as np + +parser = argparse.ArgumentParser( + description="Eval sample picker for BERT.") +parser.add_argument( + '--input_hdf5_file', + type=str, + default='', + help='Input hdf5_file path') +parser.add_argument( + '--output_hdf5_file', + type=str, + default='', + help='Output hdf5_file path') +parser.add_argument( + '--num_examples_to_pick', + type=int, + default=10000, + help='Number of examples to pick') +parser.add_argument( + '--max_seq_length', + type=int, + default=512, + help='The maximum number of tokens within a sequence.') +parser.add_argument( + '--max_predictions_per_seq', + type=int, + default=76, + help='The maximum number of predictions within a sequence.') +args = parser.parse_args() + +max_seq_length = args.max_seq_length +max_predictions_per_seq = args.max_predictions_per_seq +logging.basicConfig(level=logging.INFO) + +if __name__ == '__main__': + tic = time.time() + h5_ifile = h5py.File(args.input_hdf5_file, 'r') + num_examples = h5_ifile.get('next_sentence_labels').shape[0] + + input_ids = np.zeros([args.num_examples_to_pick, max_seq_length], dtype="int16") + input_mask = np.zeros([args.num_examples_to_pick, max_seq_length], dtype="int8") + segment_ids = np.zeros([args.num_examples_to_pick, max_seq_length], dtype="int8") + masked_lm_positions = np.zeros([args.num_examples_to_pick, max_predictions_per_seq], dtype="int16") + masked_lm_ids = np.zeros([args.num_examples_to_pick, max_predictions_per_seq], dtype="int16") + next_sentence_labels = np.zeros(args.num_examples_to_pick, dtype="int8") + +# hdf5_compression_method = "gzip" + hdf5_compression_method = None + i = 0 + pick_ratio = num_examples / args.num_examples_to_pick + num_examples_picked = 0 + for i in range(args.num_examples_to_pick): + idx = int(i * pick_ratio) + input_ids[i,:] = h5_ifile['input_ids'][idx,:] + input_mask[i,:] = h5_ifile['input_mask'][idx,:] + segment_ids[i,:] = h5_ifile['segment_ids'][idx,:] + masked_lm_positions[i,:] = h5_ifile['masked_lm_positions'][idx,:] + masked_lm_ids[i,:] = h5_ifile['masked_lm_ids'][idx,:] + next_sentence_labels[i] = h5_ifile['next_sentence_labels'][idx] + num_examples_picked += 1 + + h5_writer = h5py.File(args.output_hdf5_file+".hdf5", 'w') + h5_writer.create_dataset('input_ids', data=input_ids, dtype='i2', compression=hdf5_compression_method) + h5_writer.create_dataset('input_mask', data=input_mask, dtype='i1', compression=hdf5_compression_method) + 
h5_writer.create_dataset('segment_ids', data=segment_ids, dtype='i1', compression=hdf5_compression_method) + h5_writer.create_dataset('masked_lm_positions', data=masked_lm_positions, dtype='i2', compression=hdf5_compression_method) + h5_writer.create_dataset('masked_lm_ids', data=masked_lm_ids, dtype='i2', compression=hdf5_compression_method) + h5_writer.create_dataset('next_sentence_labels', data=next_sentence_labels, dtype='i1', compression=hdf5_compression_method) + h5_writer.flush() + h5_writer.close() + + toc = time.time() + logging.info("Picked %d examples out of %d samples in %.2f sec", + args.num_examples_to_pick, num_examples, toc - tic) diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/pick_eval_samples_varlength.py b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/pick_eval_samples_varlength.py new file mode 100644 index 0000000000000000000000000000000000000000..a7a37be5b831c6ebd92a30f0f00ec33ba16aa390 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/pick_eval_samples_varlength.py @@ -0,0 +1,76 @@ +"""Script for picking certain number of samples. +""" + +import argparse +import time +import logging +import collections +import h5py +import numpy as np + +parser = argparse.ArgumentParser( + description="Eval sample picker for BERT.") +parser.add_argument( + '--input_hdf5_file', + type=str, + default='', + help='Input hdf5_file path') +parser.add_argument( + '--output_hdf5_file', + type=str, + default='', + help='Output hdf5_file path') +parser.add_argument( + '--num_examples_to_pick', + type=int, + default=10000, + help='Number of examples to pick') +parser.add_argument( + '--max_seq_length', + type=int, + default=512, + help='The maximum number of tokens within a sequence.') +parser.add_argument( + '--max_predictions_per_seq', + type=int, + default=76, + help='The maximum number of predictions within a sequence.') +args = parser.parse_args() + +max_seq_length = args.max_seq_length +max_predictions_per_seq = args.max_predictions_per_seq +logging.basicConfig(level=logging.INFO) + +if __name__ == '__main__': + tic = time.time() + h5_ifile = h5py.File(args.input_hdf5_file, 'r') + num_examples = h5_ifile.get('next_sentence_labels').shape[0] + +# hdf5_compression_method = "gzip" + hdf5_compression_method = None + + h5_writer = h5py.File(args.output_hdf5_file+".hdf5", 'w') + input_ids = h5_writer.create_dataset('input_ids', (args.num_examples_to_pick,), dtype=h5py.vlen_dtype(np.dtype('int16')), compression=hdf5_compression_method) + segment_ids = h5_writer.create_dataset('segment_ids', (args.num_examples_to_pick,), dtype=h5py.vlen_dtype(np.dtype('int8')), compression=hdf5_compression_method) + masked_lm_positions = h5_writer.create_dataset('masked_lm_positions', (args.num_examples_to_pick,), dtype=h5py.vlen_dtype(np.dtype('int16')), compression=hdf5_compression_method) + masked_lm_ids = h5_writer.create_dataset('masked_lm_ids', (args.num_examples_to_pick,), dtype=h5py.vlen_dtype(np.dtype('int16')), compression=hdf5_compression_method) + next_sentence_labels = h5_writer.create_dataset('next_sentence_labels', data=np.zeros(args.num_examples_to_pick, dtype="int8"), dtype='i1', compression=hdf5_compression_method) + + i = 0 + pick_ratio = num_examples / args.num_examples_to_pick + num_examples_picked = 0 + for i in range(args.num_examples_to_pick): + idx = int(i * 
pick_ratio) + input_ids[i] = h5_ifile['input_ids'][idx, :sum(h5_ifile['input_mask'][idx])] + segment_ids[i] = h5_ifile['segment_ids'][idx, :sum(h5_ifile['input_mask'][idx])] + masked_lm_positions[i] = h5_ifile['masked_lm_positions'][idx, :sum(h5_ifile['masked_lm_positions'][idx]!=0)] + masked_lm_ids[i] = h5_ifile['masked_lm_ids'][idx, :sum(h5_ifile['masked_lm_positions'][idx]!=0)] + next_sentence_labels[i] = h5_ifile['next_sentence_labels'][idx] + num_examples_picked += 1 + + h5_writer.flush() + h5_writer.close() + + toc = time.time() + logging.info("Picked %d examples out of %d samples in %.2f sec", + args.num_examples_to_pick, num_examples, toc - tic) diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/prepare_data.sh b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/prepare_data.sh new file mode 100644 index 0000000000000000000000000000000000000000..c78e10b96e2e3b1cdd5ea287cfed0262212bd013 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/prepare_data.sh @@ -0,0 +1,154 @@ +#!/bin/bash +# Copyright (c) 2019-2022 NVIDIA CORPORATION. All rights reserved. + +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +############################################################################### +# Copyright (c) 2022, Habana Labs Ltd. All rights reserved. +############################################################################### + +function usage() +{ + cat << HEREDOC + + Usage: $progname [-o|--outputdir PATH] [-h|--help TIME_STR] + + optional arguments: + -h, --help show this help message and exit + -o, --outputdir PATH pass in a localization of resulting dataset + -s, --skip-download skip downloading raw files from GDrive (assuming it already has been done) + -p, --shards number of resulting shards. For small scales (less than 256 nodes) use 2048. 
For sacles >256 4320 is recommended (default 4320) + +HEREDOC +} + +SCRIPT_DIR="$(dirname "$(readlink -f "$0")")" + +#if no arguments passed +DATADIR=/workspace/bert_data +SKIP=0 +SHARDS=4320 + +#parse passed arguments +while [[ $# -gt 0 ]]; do + key="$1" + + case $key in + -h|--help) + usage + exit 0 + ;; + -o|--outputdir) + DATADIR="$2" + shift # past argument + shift # past value + ;; + -p|--shards) + SHARDS="$2" + shift # past argument + shift # past value + ;; + -s|--skip-download) + SKIP=1 + shift + ;; + *) # unknown option + usage + exit 1 + ;; + esac +done + + +echo "Preparing Mlperf BERT dataset in ${DATADIR}" +mkdir -p ${DATADIR} + +if (( SKIP==0 )) ; then + + mkdir -p ${DATADIR}/phase1 && cd ${DATADIR}/phase1 + ### Download + # bert_config.json + gdown https://drive.google.com/uc?id=1fbGClQMi2CoMv7fwrwTC5YYPooQBdcFW + # vocab.txt + gdown https://drive.google.com/uc?id=1USK108J6hMM_d27xCHi738qBL8_BT1u1 + + ### Download dataset + mkdir -p ${DATADIR}/download && cd ${DATADIR}/download + # md5 sums + gdown https://drive.google.com/uc?id=1tmMgLwoBvbEJEHXh77sqrXYw5RpqT8R_ + # processed chunks + gdown https://drive.google.com/uc?id=14xV2OUGSQDG_yDBrmbSdcDC-QGeqpfs_ + # unpack results and verify md5sums + tar -xzf results_text.tar.gz && (cd results4 && md5sum --check ../bert_reference_results_text_md5.txt) + + + ### Download TF1 checkpoint + mkdir -p ${DATADIR}/phase1 && cd ${DATADIR}/phase1 + # model.ckpt-28252.data-00000-of-00001 + gdown https://drive.google.com/uc?id=1chiTBljF0Eh1U5pKs6ureVHgSbtU8OG_ + # model.ckpt-28252.index + gdown https://drive.google.com/uc?id=1Q47V3K3jFRkbJ2zGCrKkKk-n0fvMZsa0 + # model.ckpt-28252.meta + gdown https://drive.google.com/uc?id=1vAcVmXSLsLeQ1q7gvHnQUSth5W_f_pwv + + cd ${DATADIR} + +fi +### Create HDF5 files for training +mkdir -p ${DATADIR}/hdf5/training +bash ${SCRIPT_DIR}/parallel_create_hdf5.sh -i ${DATADIR}/download/results4 -o ${DATADIR}/hdf5/training -v ${DATADIR}/phase1/vocab.txt + +### Chop HDF5 files into chunks +ulimit -n 10000 # handles potential OSError Too many open files +python3 ${SCRIPT_DIR}/chop_hdf5_files.py \ + --num_shards ${SHARDS} \ + --input_hdf5_dir ${DATADIR}/hdf5/training \ + --output_hdf5_dir ${DATADIR}/hdf5/training-${SHARDS} + +### Convert fixed length to variable length format +mkdir -p ${DATADIR}/hdf5/training-${SHARDS}/hdf5_${SHARDS}_shards_varlength +CPUS=$( ls -d /sys/devices/system/cpu/cpu[[:digit:]]* | wc -w ) +CPUS=$((CPUS / 2)) +ls -1 ${DATADIR}/hdf5/training-${SHARDS}/hdf5_${SHARDS}_shards_uncompressed | \ + xargs --max-args=1 --max-procs=${CPUS} -I{} python3 ${SCRIPT_DIR}/convert_fixed2variable.py \ + --input_hdf5_file ${DATADIR}/hdf5/training-${SHARDS}/hdf5_${SHARDS}_shards_uncompressed/{} \ + --output_hdf5_file ${DATADIR}/hdf5/training-${SHARDS}/hdf5_${SHARDS}_shards_varlength/{} + +#### Create full HDF5 files for evaluation +mkdir -p ${DATADIR}/hdf5/eval +python3 ${SCRIPT_DIR}/create_pretraining_data.py \ + --input_file=${DATADIR}/download/results4/eval.txt \ + --output_file=${DATADIR}/hdf5/eval/eval_all \ + --vocab_file=${DATADIR}/phase1/vocab.txt \ + --do_lower_case=True \ + --max_seq_length=512 \ + --max_predictions_per_seq=76 \ + --masked_lm_prob=0.15 \ + --random_seed=12345 \ + --dupe_factor=10 + +#### pick 10k samples for evaluation +python3 ${SCRIPT_DIR}/pick_eval_samples.py \ + --input_hdf5_file=${DATADIR}/hdf5/eval/eval_all.hdf5 \ + --output_hdf5_file=${DATADIR}/hdf5/eval/part_eval_10k \ + --num_examples_to_pick=10000 + +#### Convert fixed length to variable length format +mkdir -p 
${DATADIR}/hdf5/eval_varlength +python3 ${SCRIPT_DIR}/convert_fixed2variable.py --input_hdf5_file ${DATADIR}/hdf5/eval/part_eval_10k.hdf5 \ + --output_hdf5_file ${DATADIR}/hdf5/eval_varlength/part_eval_10k.hdf5 + +#### Convert Tensorflow checkpoint to Pytorch one +python3 ${SCRIPT_DIR}/../convert_tf_checkpoint.py \ + --tf_checkpoint ${DATADIR}/phase1/model.ckpt-28252 \ + --bert_config_path ${DATADIR}/phase1/bert_config.json \ + --output_checkpoint ${DATADIR}/phase1/model.ckpt-28252.pt diff --git a/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/tokenization.py b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/tokenization.py new file mode 100644 index 0000000000000000000000000000000000000000..4beb5b35c2e8e7f7df9193c0d0daa641d0432a68 --- /dev/null +++ b/docker/bloom13b/Model-References/MLPERF3.1/Training/benchmarks/bert/implementations/PyTorch/input_preprocessing/tokenization.py @@ -0,0 +1,413 @@ +"""Tokenization classes.""" + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import collections +import re +import unicodedata + +from absl import flags +import six +import tensorflow.compat.v1 as tf + +FLAGS = flags.FLAGS + +flags.DEFINE_bool( + "preserve_unused_tokens", False, + "If True, Wordpiece tokenization will not be applied to words in the vocab." +) + +_UNUSED_TOKEN_RE = re.compile("^\\[unused\\d+\\]$") + + +def preserve_token(token, vocab): + """Returns True if the token should forgo tokenization and be preserved.""" + if not FLAGS.preserve_unused_tokens: + return False + if token not in vocab: + return False + return bool(_UNUSED_TOKEN_RE.search(token)) + + +def validate_case_matches_checkpoint(do_lower_case, init_checkpoint): + """Checks whether the casing config is consistent with the checkpoint name.""" + + # The casing has to be passed in by the user and there is no explicit check + # as to whether it matches the checkpoint. The casing information probably + # should have been stored in the bert_config.json file, but it's not, so + # we have to heuristically detect it to validate. + + if not init_checkpoint: + return + + m = re.match("^.*?([A-Za-z0-9_-]+)/bert_model.ckpt", init_checkpoint) + if m is None: + return + + model_name = m.group(1) + + lower_models = [ + "uncased_L-24_H-1024_A-16", "uncased_L-12_H-768_A-12", + "multilingual_L-12_H-768_A-12", "chinese_L-12_H-768_A-12" + ] + + cased_models = [ + "cased_L-12_H-768_A-12", "cased_L-24_H-1024_A-16", + "multi_cased_L-12_H-768_A-12" + ] + + is_bad_config = False + if model_name in lower_models and not do_lower_case: + is_bad_config = True + actual_flag = "False" + case_name = "lowercased" + opposite_flag = "True" + + if model_name in cased_models and do_lower_case: + is_bad_config = True + actual_flag = "True" + case_name = "cased" + opposite_flag = "False" + + if is_bad_config: + raise ValueError( + "You passed in `--do_lower_case=%s` with `--init_checkpoint=%s`. " + "However, `%s` seems to be a %s model, so you " + "should pass in `--do_lower_case=%s` so that the fine-tuning matches " + "how the model was pre-training. If this error is wrong, please " + "just comment out this check." 
% (actual_flag, init_checkpoint, + model_name, case_name, opposite_flag)) + + +def convert_to_unicode(text): + """Converts `text` to Unicode (if it's not already), assuming utf-8 input.""" + if six.PY3: + if isinstance(text, str): + return text + elif isinstance(text, bytes): + return text.decode("utf-8", "ignore") + else: + raise ValueError("Unsupported string type: %s" % (type(text))) + elif six.PY2: + if isinstance(text, str): + return text.decode("utf-8", "ignore") + elif isinstance(text, unicode): + return text + else: + raise ValueError("Unsupported string type: %s" % (type(text))) + else: + raise ValueError("Not running on Python2 or Python 3?") + + +def printable_text(text): + """Returns text encoded in a way suitable for print or `tf.logging`.""" + + # These functions want `str` for both Python2 and Python3, but in one case + # it's a Unicode string and in the other it's a byte string. + if six.PY3: + if isinstance(text, str): + return text + elif isinstance(text, bytes): + return text.decode("utf-8", "ignore") + else: + raise ValueError("Unsupported string type: %s" % (type(text))) + elif six.PY2: + if isinstance(text, str): + return text + elif isinstance(text, unicode): + return text.encode("utf-8") + else: + raise ValueError("Unsupported string type: %s" % (type(text))) + else: + raise ValueError("Not running on Python2 or Python 3?") + + +def load_vocab(vocab_file): + """Loads a vocabulary file into a dictionary.""" + vocab = collections.OrderedDict() + with tf.gfile.GFile(vocab_file, "r") as reader: + while True: + token = convert_to_unicode(reader.readline()) + if not token: + break + token = token.strip() + if token not in vocab: + vocab[token] = len(vocab) + return vocab + + +def convert_by_vocab(vocab, items): + """Converts a sequence of [tokens|ids] using the vocab.""" + output = [] + for item in items: + output.append(vocab[item]) + return output + + +def convert_tokens_to_ids(vocab, tokens): + return convert_by_vocab(vocab, tokens) + + +def convert_ids_to_tokens(inv_vocab, ids): + return convert_by_vocab(inv_vocab, ids) + + +def whitespace_tokenize(text): + """Runs basic whitespace cleaning and splitting on a piece of text.""" + text = text.strip() + if not text: + return [] + tokens = text.split() + return tokens + + +class FullTokenizer(object): + """Runs end-to-end tokenziation.""" + + def __init__(self, vocab_file, do_lower_case=True): + self.vocab = load_vocab(vocab_file) + self.inv_vocab = {v: k for k, v in self.vocab.items()} + self.basic_tokenizer = BasicTokenizer( + do_lower_case=do_lower_case, vocab=self.vocab) + self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab) + + def tokenize(self, text): + split_tokens = [] + for token in self.basic_tokenizer.tokenize(text): + if preserve_token(token, self.vocab): + split_tokens.append(token) + continue + for sub_token in self.wordpiece_tokenizer.tokenize(token): + split_tokens.append(sub_token) + + return split_tokens + + def convert_tokens_to_ids(self, tokens): + return convert_by_vocab(self.vocab, tokens) + + def convert_ids_to_tokens(self, ids): + return convert_by_vocab(self.inv_vocab, ids) + + +class BasicTokenizer(object): + """Runs basic tokenization (punctuation splitting, lower casing, etc.).""" + + def __init__(self, do_lower_case=True, vocab=tuple()): + """Constructs a BasicTokenizer. + + Args: + do_lower_case: Whether to lower case the input. + vocab: A container of tokens to not mutate during tokenization. 
+ """ + self.do_lower_case = do_lower_case + self.vocab = vocab + + def tokenize(self, text): + """Tokenizes a piece of text.""" + text = convert_to_unicode(text) + text = self._clean_text(text) + + # This was added on November 1st, 2018 for the multilingual and Chinese + # models. This is also applied to the English models now, but it doesn't + # matter since the English models were not trained on any Chinese data + # and generally don't have any Chinese data in them (there are Chinese + # characters in the vocabulary because Wikipedia does have some Chinese + # words in the English Wikipedia.). + text = self._tokenize_chinese_chars(text) + + orig_tokens = whitespace_tokenize(text) + split_tokens = [] + for token in orig_tokens: + if preserve_token(token, self.vocab): + split_tokens.append(token) + continue + if self.do_lower_case: + token = token.lower() + token = self._run_strip_accents(token) + split_tokens.extend(self._run_split_on_punc(token)) + + output_tokens = whitespace_tokenize(" ".join(split_tokens)) + return output_tokens + + def _run_strip_accents(self, text): + """Strips accents from a piece of text.""" + text = unicodedata.normalize("NFD", text) + output = [] + for char in text: + cat = unicodedata.category(char) + if cat == "Mn": + continue + output.append(char) + return "".join(output) + + def _run_split_on_punc(self, text): + """Splits punctuation on a piece of text.""" + chars = list(text) + i = 0 + start_new_word = True + output = [] + while i < len(chars): + char = chars[i] + if _is_punctuation(char): + output.append([char]) + start_new_word = True + else: + if start_new_word: + output.append([]) + start_new_word = False + output[-1].append(char) + i += 1 + + return ["".join(x) for x in output] + + def _tokenize_chinese_chars(self, text): + """Adds whitespace around any CJK character.""" + output = [] + for char in text: + cp = ord(char) + if self._is_chinese_char(cp): + output.append(" ") + output.append(char) + output.append(" ") + else: + output.append(char) + return "".join(output) + + def _is_chinese_char(self, cp): + """Checks whether CP is the codepoint of a CJK character.""" + # This defines a "chinese character" as anything in the CJK Unicode block: + # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) + # + # Note that the CJK Unicode block is NOT all Japanese and Korean characters, + # despite its name. The modern Korean Hangul alphabet is a different block, + # as is Japanese Hiragana and Katakana. Those alphabets are used to write + # space-separated words, so they are not treated specially and handled + # like the all of the other languages. 
+ if ((cp >= 0x4E00 and cp <= 0x9FFF) or # + (cp >= 0x3400 and cp <= 0x4DBF) or # + (cp >= 0x20000 and cp <= 0x2A6DF) or # + (cp >= 0x2A700 and cp <= 0x2B73F) or # + (cp >= 0x2B740 and cp <= 0x2B81F) or # + (cp >= 0x2B820 and cp <= 0x2CEAF) or + (cp >= 0xF900 and cp <= 0xFAFF) or # + (cp >= 0x2F800 and cp <= 0x2FA1F)): # + return True + + return False + + def _clean_text(self, text): + """Performs invalid character removal and whitespace cleanup on text.""" + output = [] + for char in text: + cp = ord(char) + if cp == 0 or cp == 0xfffd or _is_control(char): + continue + if _is_whitespace(char): + output.append(" ") + else: + output.append(char) + return "".join(output) + + +class WordpieceTokenizer(object): + """Runs WordPiece tokenziation.""" + + def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=200): + self.vocab = vocab + self.unk_token = unk_token + self.max_input_chars_per_word = max_input_chars_per_word + + def tokenize(self, text): + """Tokenizes a piece of text into its word pieces. + + This uses a greedy longest-match-first algorithm to perform tokenization + using the given vocabulary. + + For example: + input = "unaffable" + output = ["un", "##aff", "##able"] + + Args: + text: A single token or whitespace separated tokens. This should have + already been passed through `BasicTokenizer. + + Returns: + A list of wordpiece tokens. + """ + + text = convert_to_unicode(text) + + output_tokens = [] + for token in whitespace_tokenize(text): + chars = list(token) + if len(chars) > self.max_input_chars_per_word: + output_tokens.append(self.unk_token) + continue + + is_bad = False + start = 0 + sub_tokens = [] + while start < len(chars): + end = len(chars) + cur_substr = None + while start < end: + substr = "".join(chars[start:end]) + if start > 0: + substr = "##" + substr + if substr in self.vocab: + cur_substr = substr + break + end -= 1 + if cur_substr is None: + is_bad = True + break + sub_tokens.append(cur_substr) + start = end + + if is_bad: + output_tokens.append(self.unk_token) + else: + output_tokens.extend(sub_tokens) + return output_tokens + + +def _is_whitespace(char): + """Checks whether `chars` is a whitespace character.""" + # \t, \n, and \r are technically control characters but we treat them + # as whitespace since they are generally considered as such. + if char == " " or char == "\t" or char == "\n" or char == "\r": + return True + cat = unicodedata.category(char) + if cat == "Zs": + return True + return False + + +def _is_control(char): + """Checks whether `chars` is a control character.""" + # These are technically control characters but we count them as whitespace + # characters. + if char == "\t" or char == "\n" or char == "\r": + return False + cat = unicodedata.category(char) + if cat in ("Cc", "Cf"): + return True + return False + + +def _is_punctuation(char): + """Checks whether `chars` is a punctuation character.""" + cp = ord(char) + # We treat all non-letter/number ASCII as punctuation. + # Characters such as "^", "$", and "`" are not in the Unicode + # Punctuation class but we treat them as punctuation anyways, for + # consistency. 
+ if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or + (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)): + return True + cat = unicodedata.category(char) + if cat.startswith("P"): + return True + return False diff --git a/docker/intel_code/.gitignore b/docker/intel_code/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..d214ebc848fca17e40cd11ebf387ce2ea7bd92c0 --- /dev/null +++ b/docker/intel_code/.gitignore @@ -0,0 +1,3 @@ +llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/bgpt-env/ +llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/ +llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/lm-evaluation-harness/ diff --git a/docker/intel_code/README.md b/docker/intel_code/README.md new file mode 100644 index 0000000000000000000000000000000000000000..e7eb44e905657415e5e01bc35f50c30d9389e47d --- /dev/null +++ b/docker/intel_code/README.md @@ -0,0 +1,2 @@ +# intel_code +intel code for llama diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/CODEOWNERS b/docker/intel_code/llama13b/Megatron-DeepSpeed/CODEOWNERS new file mode 100644 index 0000000000000000000000000000000000000000..60a921d7f6d586692195e89dba62a54f7d8ec23d --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/CODEOWNERS @@ -0,0 +1 @@ +* @jeffra @samyam @tjruwase @ShadenSmith @conglongli @awan-10 @cli99 @eltonzheng @minjiaz @RezaYazdaniAminabadi @duli2012 @mrwyattii @yaozhewei @arashb @xiaoxiawu-microsoft @guanhuawang diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_gpt.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_gpt.py new file mode 100644 index 0000000000000000000000000000000000000000..886fcecc9ba0b39475c2a3c6ad9a3cf2ccb5e437 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_gpt.py @@ -0,0 +1,385 @@ +# Copyright (C) 2024 Habana Labs, Ltd. an Intel Company. +# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved. 
+ +"""Pretrain GPT""" + +import torch +import math +from functools import partial +from megatron import get_args +from megatron import print_rank_0 +from megatron import get_timers +from megatron import get_tokenizer +from megatron.core import mpu, tensor_parallel +from megatron.core.enums import ModelType +from megatron.data.gpt_dataset import build_train_valid_test_datasets +from megatron.model import GPTModel, GPTModelPipe +from megatron.training import pretrain +from megatron.utils import get_ltor_masks_and_position_ids +from megatron.utils import average_losses_across_data_parallel_group, update_rotary_pos_emb +from megatron.arguments import core_transformer_config_from_args + +import deepspeed +from deepspeed.runtime.utils import see_memory_usage +from deepspeed.accelerator.real_accelerator import get_accelerator +import os +import subprocess + +from torch import nn +import torch.nn.functional as F + + +def model_provider(pre_process=True, post_process=True): + """Build the model.""" + + print_rank_0('building GPT model ...') + see_memory_usage(f"Before Building Model", force=True) + + if get_accelerator().device_name() == "hpu": + os.environ['DEEPSPEED_HPU_SYNC_INSIDE_INIT'] = "1" + os.environ['DEEPSPEED_SYNC_MICRO_BATCH_STEP'] = "1" + + args = get_args() + config = core_transformer_config_from_args(args) + with deepspeed.zero.Init(sequence_data_parallel_group=mpu.get_sequence_data_parallel_group(), + remote_device=None if args.remote_device == 'none' else args.remote_device, + config_dict_or_path=args.deepspeed_config, + enabled=args.zero_stage == 3, + mpu=mpu): + if args.deepspeed and not args.no_pipeline_parallel: + model = GPTModelPipe( + config=config, + num_tokentypes=0, + parallel_output=True + ) + # This is a hack to give us a reference to get_batch_pipe from within training.py + # We need to call model.set_batch_fn after deepspeed.initialize + model._megatron_batch_fn = get_batch_pipe + + # Predompute the attention mask and store it in args. This avoids having to + # pipeline it as an activation during training. The mask is constant, and thus + # we can reuse it. + attention_mask = torch.tril(torch.ones( + (1, args.seq_length, args.seq_length), device=get_accelerator().current_device_name())).view( + 1, 1, args.seq_length, args.seq_length) + + # Convert attention mask to binary: + attention_mask = (attention_mask < 0.5) + if args.fp16: + attention_mask = attention_mask.half() + elif args.bf16: + attention_mask = attention_mask.bfloat16() + + if args.mask_tensor_adding: + args.attn_mask = attention_mask * -10000.0 + else: + # Attention mask must be bool. + args.attn_mask = attention_mask.to(torch.bool) + + # For prertaining, since sequence length is fixed, cache rotary embedding in args, to avoid communicating around + if args.use_rotary_position_embeddings: + update_rotary_pos_emb(args.seq_length) + + else: + model = GPTModel( + config=config, + num_tokentypes=0, + parallel_output=True, + pre_process=pre_process, + post_process=post_process + ) + see_memory_usage(f"After Building Model", force=True) + return model + + +def get_batch(data_iterator): + """Generate a batch""" + args = get_args() + tokenizer = get_tokenizer() + + # Items and their type. + keys = ['text'] + datatype = torch.int64 + + # Broadcast data. + if data_iterator is not None: + data = next(data_iterator) + else: + data = None + data_b = tensor_parallel.broadcast_data(keys, data, datatype) + + # Unpack. 
+ tokens_ = data_b['text'].long() + if not args.use_seq_len_plus_one_tokens: + labels = torch.roll(tokens_, shifts=-1, dims=1) + labels[:, -1] = -1 + tokens = tokens_ + else: + labels = tokens_[:, 1:].contiguous() + tokens = tokens_[:, :-1].contiguous() + + # Get the masks and postition ids. + skip_mask = args.use_flash_attn or args.use_flash_attn_triton + attention_mask, loss_mask, position_ids = get_ltor_masks_and_position_ids( + tokens, + tokenizer.eod, + args.reset_position_ids, + args.reset_attention_mask, + args.eod_mask_loss, + skip_mask, + labels = labels, + dummy_sample= None,) + + # For DS's sequence parallel + seq_parallel_world_size = mpu.get_sequence_parallel_world_size() + seq_parallel_world_rank = mpu.get_sequence_parallel_rank() + + # For Megatron's sequence parallel + if args.sequence_parallel: + seq_parallel_world_size = mpu.get_tensor_model_parallel_world_size() + seq_parallel_world_rank = mpu.get_tensor_model_parallel_rank() + seq_length = tokens.size(1) + + assert seq_length % seq_parallel_world_size == 0 + sub_seq_length = seq_length // seq_parallel_world_size + sub_seq_start = seq_parallel_world_rank * sub_seq_length + sub_seq_end = (seq_parallel_world_rank + 1) * sub_seq_length + + tokens[tokens == -1] = 0 + labels[labels == -1] = 0 + + tokens = tokens[:, sub_seq_start:sub_seq_end] + position_ids = position_ids[:, sub_seq_start:sub_seq_end] + # For DS's sequence parallel + if mpu.get_sequence_parallel_world_size() > 1: + labels = labels[:, sub_seq_start:sub_seq_end] + + return tokens, labels, loss_mask, attention_mask, position_ids + +def data_post_process(data, data_sampler_state_dict): + args = get_args() + if args.data_efficiency_curriculum_learning: + if 'seqlen_truncate' in data_sampler_state_dict['current_difficulties']: + args.data_efficiency_curriculum_learning_seqlen_type = 'seqlen_truncate' + current_seqlen = data_sampler_state_dict['current_difficulties']['seqlen_truncate'] + if current_seqlen < args.seq_length: + data['text'] = data['text'][:, :(current_seqlen+1)].contiguous() + elif 'seqlen_reshape' in data_sampler_state_dict['current_difficulties']: + args.data_efficiency_curriculum_learning_seqlen_type = 'seqlen_reshape' + current_seqlen = data_sampler_state_dict['current_difficulties']['seqlen_reshape'] + if current_seqlen < args.seq_length: + orig_num_token = torch.numel(data['text']) + reshape_len = (data['text'].size()[1] // (current_seqlen+1)) * (current_seqlen+1) + data['text'] = torch.cat((data['text'][:, :reshape_len].contiguous().view(-1, current_seqlen+1), + data['text'][:, -(current_seqlen+1):]), 0).contiguous() + num_row = math.ceil(orig_num_token / (current_seqlen+1)) + num_row = min(num_row, data['text'].size()[0]) + if num_row > 1 and num_row % 2 != 0: + num_row -= 1 + data['text'] = data['text'][:num_row, :].contiguous() + else: + args.data_efficiency_curriculum_learning_seqlen_type = None + return data + +def get_batch_pipe(data): + """Modification of `get_batch` to work on `next(data_iterator)` instead of `data_iterator`""" + args = get_args() + tokenizer = get_tokenizer() + + # Items and their type. + keys = ['text'] + datatype = torch.int64 + + # Broadcast data. + data_b = tensor_parallel.broadcast_data(keys, data, datatype) + + # Unpack. + tokens_ = data_b['text'].long() + if not args.use_seq_len_plus_one_tokens: + labels = torch.roll(tokens_, shifts=-1, dims=1) + labels[:, -1] = -1 + tokens = tokens_ + else: + labels = tokens_[:, 1:].contiguous() + tokens = tokens_[:, :-1].contiguous() + + # Get the masks and postition ids. 
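When `--use_seq_len_plus_one_tokens` is off, the batch code above builds next-token labels by rolling the token tensor left by one position and marking the wrapped-around last column with `-1`; the `-1` sentinels are later clamped to a valid id so indexing stays safe, with the loss mask expected to exclude those positions from the loss. A compact illustration:

```python
import torch

tokens = torch.tensor([[11, 12, 13, 14]])

labels = torch.roll(tokens, shifts=-1, dims=1)  # [[12, 13, 14, 11]]
labels[:, -1] = -1                              # [[12, 13, 14, -1]] -> last position has no target

# Before use, sentinel values are clamped to a valid id; the loss mask is
# expected to zero out these positions so they do not contribute to the loss.
tokens = tokens.clone()
tokens[tokens == -1] = 0
labels[labels == -1] = 0
```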
+ attention_mask, loss_mask, position_ids = get_ltor_masks_and_position_ids( + tokens, + tokenizer.eod, + args.reset_position_ids, + args.reset_attention_mask, + args.eod_mask_loss, + labels = labels, + dummy_sample = None,) + + tokens[tokens == -1] = 0 + labels[labels == -1] = 0 + + if args.curriculum_learning_legacy and args.curriculum_seqlen < tokens.size()[1]: + # seqlen-based curriculum learning + # tokens, position_ids, labels, loss_mask have size [batch size, seqlen] + tokens = tokens[:, :args.curriculum_seqlen].contiguous() + position_ids = position_ids[:, :args.curriculum_seqlen].contiguous() + if labels is not None: + labels = labels[:, :args.curriculum_seqlen].contiguous() + loss_mask = loss_mask[:, :args.curriculum_seqlen].contiguous() + + return (tokens, position_ids, attention_mask), (labels, loss_mask) + + +def loss_func(loss_mask, moe_loss, mos_loss, output_tensor): + args = get_args() + losses = output_tensor.float() + loss_mask = loss_mask.view(-1).float() + loss = torch.sum(losses.view(-1) * loss_mask) / loss_mask.sum() + + # Reduce loss for logging. + averaged_loss = average_losses_across_data_parallel_group([loss]) + if args.mos or args.kd: + # assert max(args.num_experts) >= 1 + loss = loss + moe_loss + mos_loss + if args.mos: + return loss, {'total loss': loss, 'lm loss': averaged_loss[0], 'moe loss': moe_loss, 'mos loss': mos_loss} + elif args.kd: + return loss, {'total loss': loss, 'lm loss': averaged_loss[0], 'moe loss': moe_loss, 'kd loss': mos_loss} + print_rank_0('>>> total loss: {}, lm loss {}, kd loss {}'.format(loss, averaged_loss[0], mos_loss)) + else: + if max(args.num_experts) <= 1: + return loss, {'lm loss': averaged_loss[0]} + else: + loss = loss + moe_loss + return loss, {'lm loss': averaged_loss[0], 'moe loss': moe_loss} + +def calculate_mos_loss(args, stu_output, teacher_model, tokens, position_ids, attention_mask): + mos_loss = 0 + alpha = args.kd_alpha_ce + beta = args.kd_beta_ce + kd_temp = args.kd_temp + + if teacher_model: + with torch.no_grad(): + if args.curriculum_learning_legacy and args.curriculum_seqlen < args.seq_length: + assert args.curriculum_seqlen is not None + curriculum_seqlen = args.curriculum_seqlen + tokens = tokens[:, :curriculum_seqlen].contiguous() + position_ids = position_ids[:, :curriculum_seqlen].contiguous() + attention_mask = attention_mask[:, :, :curriculum_seqlen, :curriculum_seqlen].contiguous() + # No need to truncate labels as we do not need it for the teacher logits + tea_output, tea_other_losses = teacher_model(tokens, position_ids, attention_mask) + assert stu_output.size() == tea_output.size(), 'teacher and student output should match in size. Student: {}, Teacher: {}, CL seq length {}'.format(stu_output.size(), tea_output.size(), args.curriculum_seqlen) + + student_logits = F.log_softmax(stu_output / kd_temp, dim=2) + tea_logits = F.softmax(tea_output / kd_temp, dim=2) # The target logits is expected to be probabilities. If we use log_softmax, then we need to set target_log to true when initializing the KLDivLoss. + + mos_loss = kd_temp * kd_temp * nn.KLDivLoss(reduction='batchmean')(student_logits, tea_logits) + + mos_loss = mos_loss.div(args.seq_length) * beta + return mos_loss + +def forward_step(data_iterator, model): + """Forward step.""" + args = get_args() + timers = get_timers() + + # Get the batch. 
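`calculate_mos_loss` above is standard knowledge distillation: the student's temperature-scaled log-probabilities are matched to the teacher's temperature-scaled probabilities via a KL divergence, rescaled by T², then divided by sequence length and weighted by `kd_beta_ce`. A self-contained sketch of the core computation with made-up shapes and stand-in hyperparameters:

```python
import torch
import torch.nn.functional as F
from torch import nn

batch, seq, vocab = 2, 4, 10   # example shapes
kd_temp, beta = 2.0, 1.0       # stand-ins for args.kd_temp / args.kd_beta_ce

stu_logits = torch.randn(batch, seq, vocab)
tea_logits = torch.randn(batch, seq, vocab)

student_log_probs = F.log_softmax(stu_logits / kd_temp, dim=2)
teacher_probs = F.softmax(tea_logits / kd_temp, dim=2)  # KLDivLoss expects probabilities as the target

# batchmean KL, rescaled by T^2 so gradient magnitudes stay comparable across temperatures.
kd_loss = kd_temp * kd_temp * nn.KLDivLoss(reduction='batchmean')(student_log_probs, teacher_probs)
kd_loss = kd_loss / seq * beta
```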
+ timers('batch-generator', log_level=2).start() + tokens, labels, loss_mask, attention_mask, position_ids = get_batch( + data_iterator) + timers('batch-generator').stop() + + if args.data_efficiency_curriculum_learning: + args.curriculum_seqlen = tokens.size()[1] + if hasattr(args, 'data_efficiency_curriculum_learning_seqlen_type') and \ + args.data_efficiency_curriculum_learning_seqlen_type == 'seqlen_reshape': + args.data_efficiency_curriculum_learning_numel = torch.numel(tokens) + + if args.mos or args.kd: + # The forward func can return either the loss or the logits, depending on whether passing in the labels or not. + stu_output, other_losses = model(tokens, position_ids, attention_mask) + if args.curriculum_learning_legacy and args.curriculum_seqlen < args.seq_length: + assert args.curriculum_seqlen is not None + labels = labels[:, :args.curriculum_seqlen].contiguous() + output_tensor = tensor_parallel.vocab_parallel_cross_entropy(stu_output.contiguous().float(), labels) + else: + output_tensor, other_losses = model(tokens, position_ids, attention_mask, + labels=labels) + if args.curriculum_learning_legacy and args.curriculum_seqlen < args.seq_length: + loss_mask = loss_mask[:, :args.curriculum_seqlen].contiguous() + + moe_losses = [] + for moe_loss in other_losses: + if moe_loss is not None: + moe_losses.append(moe_loss) + moe_loss = sum(moe_losses) * args.moe_loss_coeff + + mos_loss = 0 + if args.mos or args.kd: + assert model.training + if args.teacher_forward and args.teacher_model is not None: + mos_loss = calculate_mos_loss(args, stu_output, + args.teacher_model[0], tokens, position_ids, attention_mask) + + # Output_tensor stores the standard loss, loos_func calculates the total loss. + return output_tensor, partial(loss_func, loss_mask, moe_loss, mos_loss) + + +def train_valid_test_datasets_provider(train_val_test_num_samples): + """Build train, valid, and test datasets.""" + args = get_args() + + print_rank_0('> building train, validation, and test datasets ' + 'for GPT ...') + train_ds, valid_ds, test_ds = build_train_valid_test_datasets( + data_prefix=args.data_path, + data_impl=args.data_impl, + splits_string=args.split, + train_valid_test_num_samples=train_val_test_num_samples, + seq_length=args.seq_length, + seed=args.seed, + skip_warmup=(not args.mmap_warmup), + train_data_prefix=args.train_data_path, + valid_data_prefix=args.valid_data_path, + test_data_prefix=args.test_data_path, + data_cache_path=args.data_cache_path, + use_seq_len_plus_one_tokens=args.use_seq_len_plus_one_tokens) + print_rank_0("> finished creating GPT datasets ...") + + return train_ds, valid_ds, test_ds + + +def command_exists(cmd): + result = subprocess.Popen(f'type {cmd}', stdout=subprocess.PIPE, shell=True) + return result.wait() == 0 + + +def git_ds_info(): + from deepspeed.env_report import main as ds_report + ds_report() + + # Write out version/git info + git_hash_cmd = "git rev-parse --short HEAD" + git_branch_cmd = "git rev-parse --abbrev-ref HEAD" + if command_exists('git'): + try: + result = subprocess.check_output(git_hash_cmd, shell=True) + git_hash = result.decode('utf-8').strip() + result = subprocess.check_output(git_branch_cmd, shell=True) + git_branch = result.decode('utf-8').strip() + except subprocess.CalledProcessError: + git_hash = "unknown" + git_branch = "unknown" + else: + git_hash = "unknown" + git_branch = "unknown" + print(f'**** Git info for Megatron: git_hash={git_hash} git_branch={git_branch} ****') + + +if __name__ == "__main__": + git_ds_info() + 
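In `forward_step` above, models with expert layers return one auxiliary load-balancing loss per MoE layer alongside the logits; these are filtered for `None`, summed, scaled by `--moe-loss-coeff`, and added to the language-model loss that is backpropagated, while the components are logged separately. A toy illustration of that bookkeeping (values are made up):

```python
import torch

lm_loss = torch.tensor(2.3)                                         # token-averaged LM loss
layer_aux_losses = [torch.tensor(0.01), None, torch.tensor(0.02)]   # one entry per MoE layer
moe_loss_coeff = 0.01                                               # stand-in for args.moe_loss_coeff

moe_loss = sum(l for l in layer_aux_losses if l is not None) * moe_loss_coeff
total_loss = lm_loss + moe_loss            # what gets backpropagated
logged = {'lm loss': lm_loss.item(), 'moe loss': moe_loss.item()}
```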
pretrain(train_valid_test_datasets_provider, + model_provider, + ModelType.encoder_or_decoder, + forward_step, + args_defaults={'tokenizer_type': 'GPT2BPETokenizer'}, + data_post_process=data_post_process) diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_gpt_core.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_gpt_core.py new file mode 100644 index 0000000000000000000000000000000000000000..3c5651aaf3df9536e9868188d0d0577d3559b893 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_gpt_core.py @@ -0,0 +1,127 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. + +"""Pretrain GPT""" + +import torch +from functools import partial +from megatron import get_args +from megatron.arguments import core_transformer_config_from_args +from megatron import print_rank_0 +from megatron import get_timers +from megatron import get_tokenizer +from megatron.core import tensor_parallel +from megatron.core.enums import ModelType +from megatron.data.gpt_dataset import build_train_valid_test_datasets +from megatron.core.models.gpt import GPTModel +from megatron.training import pretrain +from megatron.utils import get_ltor_masks_and_position_ids +from megatron.utils import average_losses_across_data_parallel_group + +def model_provider(pre_process=True, post_process=True): + """Build the model.""" + + args = get_args() + config = core_transformer_config_from_args(args) + + print_rank_0('building GPT model ...') + model = GPTModel( + config=config, + vocab_size=args.padded_vocab_size, + max_sequence_length=args.max_position_embeddings, + pre_process=pre_process, + post_process=post_process, + fp16_lm_cross_entropy=args.fp16_lm_cross_entropy, + parallel_output=True, + share_embeddings_and_output_weights=not args.untie_embeddings_and_output_weights + ) + return model + + +def get_batch(data_iterator): + """Generate a batch""" + args = get_args() + tokenizer = get_tokenizer() + + # Items and their type. + keys = ['text'] + datatype = torch.int64 + + # Broadcast data. + if data_iterator is not None: + data = next(data_iterator) + else: + data = None + data_b = tensor_parallel.broadcast_data(keys, data, datatype) + + # Unpack. + tokens_ = data_b['text'].long() + labels = tokens_[:, 1:].contiguous() + tokens = tokens_[:, :-1].contiguous() + + # Get the masks and postition ids. + attention_mask, loss_mask, position_ids = get_ltor_masks_and_position_ids( + tokens, + tokenizer.eod, + args.reset_position_ids, + args.reset_attention_mask, + args.eod_mask_loss) + + return tokens, labels, loss_mask, attention_mask, position_ids + +def loss_func(loss_mask, output_tensor): + losses = output_tensor.float() + loss_mask = loss_mask.view(-1).float() + loss = torch.sum(losses.view(-1) * loss_mask) / loss_mask.sum() + + # Reduce loss for logging. + averaged_loss = average_losses_across_data_parallel_group([loss]) + + return loss, {'lm loss': averaged_loss[0]} + + +def forward_step(data_iterator, model): + """Forward step.""" + args = get_args() + timers = get_timers() + + # Get the batch. 
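The `loss_func` above averages the per-token cross-entropy only over positions the loss mask marks as valid, so padding and masked-out positions do not dilute the loss. A tiny worked example:

```python
import torch

per_token_loss = torch.tensor([[2.0, 1.0, 4.0, 3.0]])  # stand-in for output_tensor
loss_mask = torch.tensor([[1.0, 1.0, 0.0, 0.0]])        # last two positions are ignored

loss = torch.sum(per_token_loss.view(-1) * loss_mask.view(-1)) / loss_mask.sum()
# (2.0 + 1.0) / 2 = 1.5, independent of the masked positions
```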
+ timers('batch-generator', log_level=2).start() + tokens, labels, loss_mask, attention_mask, position_ids = get_batch( + data_iterator) + timers('batch-generator').stop() + + output_tensor = model(tokens, position_ids, attention_mask, + labels=labels) + + return output_tensor, partial(loss_func, loss_mask) + + +def train_valid_test_datasets_provider(train_val_test_num_samples): + """Build train, valid, and test datasets.""" + args = get_args() + + print_rank_0('> building train, validation, and test datasets ' + 'for GPT ...') + train_ds, valid_ds, test_ds = build_train_valid_test_datasets( + data_prefix=args.data_path, + data_impl=args.data_impl, + splits_string=args.split, + train_valid_test_num_samples=train_val_test_num_samples, + seq_length=args.seq_length, + seed=args.seed, + skip_warmup=(not args.mmap_warmup), + train_data_prefix=args.train_data_path, + valid_data_prefix=args.valid_data_path, + test_data_prefix=args.test_data_path) + print_rank_0("> finished creating GPT datasets ...") + + return train_ds, valid_ds, test_ds + + +if __name__ == "__main__": + + pretrain(train_valid_test_datasets_provider, model_provider, + ModelType.encoder_or_decoder, + forward_step, + args_defaults={'tokenizer_type': 'GPT2BPETokenizer'} + ) diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_retro.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_retro.py new file mode 100644 index 0000000000000000000000000000000000000000..597bbf0f6af0adc36a0304584921d9f42b65e3a0 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_retro.py @@ -0,0 +1,123 @@ +# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved. + +"""Pretrain Retro.""" + +from functools import partial +import torch + +from megatron import get_args, get_retro_args +from megatron import get_timers +from megatron import get_tokenizer +from megatron import print_rank_0 +from megatron.core import mpu, tensor_parallel +from megatron.core.enums import ModelType +from megatron.model import GPTModel +from megatron.training import pretrain +from megatron.utils import get_ltor_masks_and_position_ids +from tools.retro.query.retro_dataset import get_retro_datasets + +from pretrain_gpt import ( + loss_func, + model_provider, + train_valid_test_datasets_provider as standard_datasets_provider, +) + + +def get_batch(data_iterator): + """Generate a batch""" + args = get_args() + retro_args = get_retro_args() + tokenizer = get_tokenizer() + + # Items and their type. + keys = ['text'] + datatype = torch.int64 + + if args.retro_add_retriever: + keys += 'neighbor_tokens', + + # Broadcast data. + if data_iterator is not None: + data = next(data_iterator) + else: + data = None + + data_b = tensor_parallel.broadcast_data(keys, data, datatype) + + # Unpack. + tokens_ = data_b['text'].long() + labels = tokens_[:, 1:].contiguous() + tokens = tokens_[:, :-1].contiguous() + + if args.retro_add_retriever: + # note: [bs * l * k, r] + # note: 2x == neighbor, continuation + neighbor_tokens = data_b['neighbor_tokens'] \ + .view(-1, retro_args.retro_gpt_retrieved_length).long() + + # Get the masks and postition ids. 
+ attention_mask, loss_mask, position_ids = get_ltor_masks_and_position_ids( + tokens, + tokenizer.eod, + args.reset_position_ids, + args.reset_attention_mask, + args.eod_mask_loss) + + if args.retro_add_retriever: + _, _, neighbor_position_ids = get_ltor_masks_and_position_ids( + neighbor_tokens, + tokenizer.eod, + args.reset_position_ids, + args.reset_attention_mask, + args.eod_mask_loss) + neighbor_attention_mask = None + return tokens, labels, loss_mask, attention_mask, position_ids, \ + neighbor_tokens, neighbor_attention_mask, neighbor_position_ids + else: + return tokens, labels, loss_mask, attention_mask, position_ids + + +def forward_step(data_iterator, model): + """Forward step.""" + args = get_args() + timers = get_timers() + + # Get the batch. + timers('batch-generator').start() + if args.retro_add_retriever: + tokens, labels, loss_mask, attention_mask, position_ids, \ + neighbor_tokens, neighbor_attention_mask, neighbor_position_ids = \ + get_batch(data_iterator) + else: + tokens, labels, loss_mask, attention_mask, position_ids = get_batch( + data_iterator) + neighbor_tokens, neighbor_attention_mask, neighbor_position_ids = \ + None, None, None + timers('batch-generator').stop() + + output_tensor = model(tokens, position_ids, attention_mask, + retriever_input_ids=neighbor_tokens, + retriever_position_ids=neighbor_position_ids, + retriever_attn_mask=neighbor_attention_mask, + labels=labels) + + return output_tensor, partial(loss_func, loss_mask) + + +def train_valid_test_datasets_provider(train_val_test_num_samples): + """Build train, valid, and test datasets.""" + args = get_args() + if args.retro_add_retriever: + return get_retro_datasets() + else: + return standard_datasets_provider(train_val_test_num_samples) + + +if __name__ == "__main__": + + pretrain(train_valid_test_datasets_provider, + model_provider, + ModelType.retro_decoder, + forward_step, + args_defaults={'tokenizer_type': 'GPT2BPETokenizer', + 'retro_add_retriever': True}) diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_t5.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_t5.py new file mode 100644 index 0000000000000000000000000000000000000000..0d7021aa12df5f576b504d1bb1cc96def61cfaa9 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_t5.py @@ -0,0 +1,163 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. + +"""Pretrain T5""" + +from functools import partial + +import torch + +from megatron import ( + get_args, + get_timers, + print_rank_0 +) +from megatron.core import tensor_parallel +from megatron.core.enums import ModelType +from megatron.data.dataset_utils import build_train_valid_test_datasets +from megatron.model import T5Model +from megatron.training import pretrain +from megatron.utils import average_losses_across_data_parallel_group +from megatron.arguments import core_transformer_config_from_args + + +""" +Pipeline parallelism for T5 +=========================== + +T5 is a model architecture with both encoder and decoder blocks. +Consequently, pipeline parallelism is implemented slightly differently +compared to architectures like GPT and BERT. + +In particular, when pipeline_model_parallel_world_size > 1, each stage +either executes an encoder block or a decoder block. The +--pipeline-model-parallel-split-rank argument controls the rank at which +the split happens: all ranks lower than this argument execute the +encoder block, and all ranks equal to or higher than this argument value +execute the decoder block. 
+ +In the encoder section of the model, only one tensor is sent downstream: +the intermediate encoder_hidden_state. In the decoder section of the +model, two tensors are sent downstream in the forward pass: the fully +computed encoder_hidden_state, and the intermediate decoder_hidden_state. + +In particular, these are the shapes of the tensors sent between +different workers: + If rank is in decoder section: + intermediate decoder_hidden_state (pre-transpose), + complete encoder_hidden_state (post-transpose). + If rank is at boundary between encoder and decoder sections: + complete encoder_hidden_state (post-transpose). + If rank is in encoder section: + intermediate encoder_hidden_state (pre-transpose). + +Additionally, we have code in the backward_step function in schedules.py +to accumulate the encoder_hidden_state gradient across skip connections +(encoder_hidden_state fed in as input to each layer in the decoder). +""" + + +def model_provider(pre_process=True, post_process=True, + add_encoder=True, add_decoder=True): + """Build the model.""" + + print_rank_0('building T5 model ...') + config = core_transformer_config_from_args(get_args()) + model = T5Model(config=config, + num_tokentypes=0, + parallel_output=True, + pre_process=pre_process, + post_process=post_process, + add_encoder=add_encoder, + add_decoder=add_decoder) + return model + + +def get_batch(data_iterator): + """Build the batch.""" + + keys = ['text_enc', 'text_dec', 'labels', 'loss_mask', + 'enc_mask', 'dec_mask', 'enc_dec_mask'] + datatype = torch.int64 + + # Broadcast data. + if data_iterator is not None: + data = next(data_iterator) + else: + data = None + data_b = tensor_parallel.broadcast_data(keys, data, datatype) + + # Unpack. + tokens_enc = data_b['text_enc'].long() + tokens_dec = data_b['text_dec'].long() + labels = data_b['labels'].long() + loss_mask = data_b['loss_mask'].float() + + enc_mask = (data_b['enc_mask'] < 0.5) + dec_mask = (data_b['dec_mask'] < 0.5) + enc_dec_mask = (data_b['enc_dec_mask'] < 0.5) + + return tokens_enc, tokens_dec, loss_mask, labels, \ + enc_mask, dec_mask, enc_dec_mask + + +def loss_func(loss_mask, output_tensor): + lm_loss_ = output_tensor.float() + lm_loss = torch.sum( + lm_loss_.view(-1) * loss_mask.reshape(-1)) / loss_mask.sum() + + loss = lm_loss + averaged_losses = average_losses_across_data_parallel_group([lm_loss]) + + return loss, {'lm loss': averaged_losses[0]} + + +def forward_step(data_iterator, model): + """Forward step.""" + args = get_args() + timers = get_timers() + + # Get the batch. 
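The module docstring above describes how `--pipeline-model-parallel-split-rank` partitions pipeline stages between the encoder and decoder: stages with a rank below the split value run encoder blocks, and all remaining stages run decoder blocks. A minimal sketch of that rank-to-section rule (illustrative only, not the actual Megatron helper):

```python
def stage_section(pipeline_rank: int, split_rank: int) -> str:
    """Return which block a pipeline stage executes under the T5 split rule."""
    return "encoder" if pipeline_rank < split_rank else "decoder"

# With 8 pipeline stages and --pipeline-model-parallel-split-rank=3:
# ranks 0-2 run encoder blocks, ranks 3-7 run decoder blocks.
assert [stage_section(r, 3) for r in range(8)] == ["encoder"] * 3 + ["decoder"] * 5
```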
+ timers('batch generator', log_level=2).start() + tokens_enc, tokens_dec, loss_mask, lm_labels, enc_mask, dec_mask, enc_dec_mask \ + = get_batch(data_iterator) + timers('batch generator').stop() + + # Forward model lm_labels + output_tensor = model(tokens_enc, + tokens_dec, + enc_mask, + dec_mask, + enc_dec_mask, + tokentype_ids=None, + lm_labels=lm_labels) + + return output_tensor, partial(loss_func, loss_mask) + + +def train_valid_test_datasets_provider(train_val_test_num_samples): + """Build train, valid, and test datasets.""" + args = get_args() + + print_rank_0('> building train, validation, and test datasets ' + 'for T5 ...') + train_ds, valid_ds, test_ds = build_train_valid_test_datasets( + data_prefix=args.data_path, + data_impl=args.data_impl, + splits_string=args.split, + train_valid_test_num_samples=train_val_test_num_samples, + max_seq_length=args.encoder_seq_length, + max_seq_length_dec=args.decoder_seq_length, + masked_lm_prob=args.mask_prob, + short_seq_prob=args.short_seq_prob, + seed=args.seed, + skip_warmup=(not args.mmap_warmup), + dataset_type='t5') + print_rank_0("> finished creating T5 datasets ...") + + return train_ds, valid_ds, test_ds + + +if __name__ == "__main__": + + pretrain(train_valid_test_datasets_provider, model_provider, ModelType.encoder_and_decoder, + forward_step, args_defaults={'tokenizer_type': 'BertWordPieceLowerCase'}) diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_classify.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_classify.py new file mode 100644 index 0000000000000000000000000000000000000000..e7dc2a7ee89cf677dc5a0548b47df907976cf21d --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_classify.py @@ -0,0 +1,105 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. 
+ +"""Pretrain VIT""" + +import torch +import torch.nn.functional as F +from functools import partial +from megatron import get_args, get_timers, print_rank_0 +from megatron.core.enums import ModelType +from megatron.data.vit_dataset import build_train_valid_datasets +from megatron.model.vision.classification import VitClassificationModel +from megatron.model.vision.classification import MitClassificationModel +from megatron.training import pretrain +from megatron.utils import average_losses_across_data_parallel_group +from megatron.arguments import core_transformer_config_from_args + + +def model_provider(pre_process=True, post_process=True): + """Build the model.""" + + args = get_args() + config = core_transformer_config_from_args(args) + if args.vision_backbone_type == 'vit': + print_rank_0("building VIT model ...") + model = VitClassificationModel(config=config, + num_classes=args.num_classes, + pre_process=pre_process, + post_process=post_process) + elif args.vision_backbone_type == 'mit': + print_rank_0("building MIT model ...") + model = MitClassificationModel(num_classes=args.num_classes, + pre_process=pre_process, + post_process=post_process) + else: + raise Exception('{} vision backbone is not supported.'.format( + args.vision_backbone_type)) + return model + + +def get_batch(data_iterator): + """Build the batch.""" + data = next(data_iterator) + + # only data parallelism; no need for broadcast + images = data[0].cuda() + labels = data[1].cuda() + + return images, labels + + +def loss_func(labels, output_tensor): + logits = output_tensor.contiguous().float() + loss = F.cross_entropy(logits, labels) + + outputs = torch.argmax(logits, -1) + correct = (outputs == labels).float() + accuracy = torch.mean(correct) + + averaged_loss = average_losses_across_data_parallel_group([loss, accuracy]) + + return loss, {"loss": averaged_loss[0], "accuracy": averaged_loss[1]} + + +def forward_step(data_iterator, model): + """Forward step.""" + timers = get_timers() + + # Get the batch. + timers("batch-generator", log_level=2).start() + ( + images, + labels, + ) = get_batch(data_iterator) + timers("batch-generator").stop() + + # Forward model. lm_labels + output_tensor = model(images) + + return output_tensor, partial(loss_func, labels) + +def train_valid_test_datasets_provider(train_val_test_num_samples): + """Build train, valid, and test datasets.""" + args = get_args() + + print_rank_0( + "> building train, validation, and test datasets " "for VIT ..." + ) + train_ds, valid_ds = build_train_valid_datasets( + data_path=args.data_path, + image_size=(args.img_h, args.img_w) + ) + print_rank_0("> finished creating VIT datasets ...") + + return train_ds, valid_ds, None + + +if __name__ == "__main__": + + pretrain( + train_valid_test_datasets_provider, + model_provider, + ModelType.encoder_or_decoder, + forward_step, + args_defaults={'dataloader_type': 'cyclic', 'vision_pretraining': True} + ) diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_dino.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_dino.py new file mode 100644 index 0000000000000000000000000000000000000000..179445af256da51fa4811a7f06e2c5331e4e92b3 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_dino.py @@ -0,0 +1,110 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. 
+ +import torch +import torch.nn.functional as F +import torch.nn as nn +import numpy as np +import torch.distributed as dist +from functools import partial +from megatron import get_args, get_timers, print_rank_0 +from megatron.core.enums import ModelType +from megatron.data.vit_dataset import build_train_valid_datasets +from megatron.model.vision.dino import DINOPretrainModel +from megatron.model.vision.knn_monitor import knn_predict, get_feature_bank +from megatron.training import pretrain +from megatron.utils import average_losses_across_data_parallel_group, unwrap_model +from torch.nn.parallel.distributed import DistributedDataParallel as torchDDP +from megatron.model import DistributedDataParallel as LocalDDP +from megatron.model import Float16Module +from megatron.arguments import core_transformer_config_from_args + +def model_provider(pre_process=True, post_process=True): + """Build the model.""" + config = core_transformer_config_from_args(get_args()) + return DINOPretrainModel(config, pre_process=pre_process, post_process=post_process) + +def get_batch(data_iterator): + """Build the batch.""" + data = next(data_iterator) + + # only data parallelism; no need for broadcast + if isinstance(data[0], list): + images = [aug.cuda() for aug in data[0]] + else: + images = data[0].cuda() + labels = data[1].cuda() + + return images, labels + + +def loss_func(model, labels, output_tensor, collect_data=False): + args = get_args() + + model = unwrap_model( + model, + (torchDDP, LocalDDP, Float16Module) + ) + if model.training: + student_output, teacher_output = output_tensor + loss = model.dino_loss(student_output, teacher_output, args.curr_iteration) + averaged_loss = average_losses_across_data_parallel_group([loss]) + return loss, {"loss": averaged_loss[0]} + else: + _, teacher_feature = output_tensor + feature_bank, feature_labels, classes = get_feature_bank() + feature = F.normalize(teacher_feature.float(), dim=1) + + knn_accs = [] + for k in [10, 20, 100, 200]: + pred_labels = knn_predict(feature, feature_bank, + feature_labels, classes, k, 0.07) + knn_acc = (pred_labels[:, 0] == labels).float().mean() + knn_accs.append(knn_acc) + + averaged_loss = average_losses_across_data_parallel_group(knn_accs) + return 0, {"knn_acc_10": averaged_loss[0], + "knn_acc_20": averaged_loss[1], + "knn_acc_100": averaged_loss[2], + "knn_acc_200": averaged_loss[3]} + + +def forward_step(data_iterator, model): + """Forward step.""" + timers = get_timers() + + # Get the batch. + timers("batch-generator", log_level=2).start() + ( + images, + labels, + ) = get_batch(data_iterator) + timers("batch-generator").stop() + + return model(images), partial(loss_func, model, labels) + + +def train_valid_test_datasets_provider(train_val_test_num_samples): + """Build train, valid, and test datasets.""" + args = get_args() + + print_rank_0( + "> building train, validation, and test datasets " "for VIT ..." 
+ ) + train_ds, valid_ds = build_train_valid_datasets( + data_path=args.data_path, + image_size=(args.img_h, args.img_w) + ) + print_rank_0("> finished creating VIT datasets ...") + + return train_ds, valid_ds, None + + +if __name__ == "__main__": + pretrain( + train_valid_test_datasets_provider, + model_provider, + ModelType.encoder_or_decoder, + forward_step, + args_defaults={'dataloader_type': 'cyclic', 'vision_pretraining': True} + ) + diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_inpaint.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_inpaint.py new file mode 100644 index 0000000000000000000000000000000000000000..509a38d2aff04bb7321d4efab4ac56ba6da15bb4 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/pretrain_vision_inpaint.py @@ -0,0 +1,140 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. + +"""Pretrain VIT""" + +import torch +import torch.nn.functional as F +from functools import partial +from megatron import get_args, get_timers, print_rank_0, print_rank_last +from megatron.core.enums import ModelType +from megatron.data.vit_dataset import build_train_valid_datasets +from megatron.model.vision.inpainting import VitInpaintingModel +from megatron.model.vision.inpainting import MitInpaintingModel +from megatron.training import pretrain +from megatron.utils import average_losses_across_data_parallel_group +from tasks.vision.metrics import SSIM, PSNR +from megatron.arguments import core_transformer_config_from_args + +def model_provider(pre_process=True, post_process=True): + """Build the model.""" + args = get_args() + config = core_transformer_config_from_args(args) + if args.vision_backbone_type == 'vit': + model = VitInpaintingModel(config, + pre_process=pre_process, + post_process=post_process) + elif args.vision_backbone_type == 'mit': + model = MitInpaintingModel(pre_process=pre_process, + post_process=post_process) + else: + raise Exception('{} vision backbone is not supported.'.format( + args.vision_backbone_type)) + return model + + +def get_batch(data_iterator): + """Build the batch.""" + data = next(data_iterator) + + # only data parallelism; no need for broadcast + images = data[0][0].cuda() + masks = data[0][1].cuda() + return images, masks + + +def loss_func(images, masks, masked_images, outputs, collect_data=False): + outputs = outputs.contiguous().float() + masks_flip = 1-masks + flip_masked_outputs = outputs.masked_fill(masks_flip.bool(), 0) + flip_masked_images = images.masked_fill(masks_flip.bool(), 0) + + ssim_fun = SSIM() + psnr_fun = PSNR() + + if not collect_data: + mask_count = torch.count_nonzero(masks) + loss = F.mse_loss( + flip_masked_outputs, + flip_masked_images.float(), + reduction="sum" + ) + loss = loss/mask_count + ssim = ssim_fun(flip_masked_outputs, flip_masked_images.float()) + psnr = psnr_fun(flip_masked_outputs, flip_masked_images.float()) + + averaged_loss = average_losses_across_data_parallel_group( + [loss, psnr, ssim] + ) + + return loss, {"loss": averaged_loss[0], + "psnr": averaged_loss[1], + 'ssim': averaged_loss[2]} + else: + synth_images = masked_images.float() + flip_masked_outputs + ssim = ssim_fun(synth_images, images.float()) + psnr = psnr_fun(synth_images, images.float()) + return torch.cat((images, masked_images, synth_images), dim=2), ssim, psnr + + +def forward_step(data_iterator, model): + """Forward step.""" + timers = get_timers() + + # Get the batch. 
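+    # get_batch yields the original images and the binary masks marking the regions
+    # to inpaint; the masked copies built below are what the model reconstructs, and
+    # loss_func compares the reconstructed regions against the originals.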
+ timers("batch-generator", log_level=2).start() + ( + images, + masks, + ) = get_batch(data_iterator) + timers("batch-generator").stop() + + masked_images = images.masked_fill(masks.bool(), 0) + outputs = model(masked_images) + + # Forward mode + return outputs, partial(loss_func, images, masks, masked_images) + + +def process_non_loss_data(data, iteration, writer): + psnr_sum = 0 + ssim_sum = 0 + for (output_tb, ssim, psnr) in data: + output_tb[output_tb < 0] = 0 + output_tb[output_tb > 1] = 1 + writer.add_images("gt-input-output-vald", output_tb, + global_step=iteration, walltime=None, + dataformats='NCHW') + psnr_sum = psnr_sum + psnr.item() + ssim_sum = ssim_sum + ssim.item() + psnr = psnr_sum/len(data) + ssim = ssim_sum/len(data) + writer.add_scalar('PSNR generate value-validation', psnr, iteration) + writer.add_scalar('SSIM generate value-validation', ssim, iteration) + + +def train_valid_test_datasets_provider(train_val_test_num_samples): + """Build train, valid, and test datasets.""" + args = get_args() + + print_rank_0( + "> building train, validation, and test datasets " "for VIT ..." + ) + train_ds, valid_ds = build_train_valid_datasets( + data_path=args.data_path, + image_size=(args.img_h, args.img_w) + ) + print_rank_0("> finished creating VIT datasets ...") + + return train_ds, valid_ds, None + + +if __name__ == "__main__": + + pretrain( + train_valid_test_datasets_provider, + model_provider, + ModelType.encoder_or_decoder, + forward_step, + process_non_loss_data, + args_defaults={'dataloader_type': 'cyclic', 'vision_pretraining': True} + ) diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/setup.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/setup.py new file mode 100644 index 0000000000000000000000000000000000000000..53f01214e6297594b01d64d8e7ca16816d692e9a --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/setup.py @@ -0,0 +1,111 @@ +from setuptools import setup, find_packages + +"""Setup for pip package.""" + +import importlib.util +import os +import setuptools + +spec = importlib.util.spec_from_file_location('package_info', 'megatron/core/package_info.py') +package_info = importlib.util.module_from_spec(spec) +spec.loader.exec_module(package_info) + + +__contact_emails__ = package_info.__contact_emails__ +__contact_names__ = package_info.__contact_names__ +__description__ = package_info.__description__ +__download_url__ = package_info.__download_url__ +__homepage__ = package_info.__homepage__ +__keywords__ = package_info.__keywords__ +__license__ = package_info.__license__ +__package_name__ = package_info.__package_name__ +__repository_url__ = package_info.__repository_url__ +__version__ = package_info.__version__ + + +if os.path.exists('megatron/core/README.md'): + with open("megatron/core/README.md", "r", encoding='utf-8') as fh: + long_description = fh.read() + long_description_content_type = "text/markdown" + +else: + long_description = 'See ' + __homepage__ + long_description_content_type = "text/plain" + + +############################################################################### +# Dependency Loading # +# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% # + +def req_file(filename, folder="megatron/core"): + with open(os.path.join(folder, filename), encoding='utf-8') as f: + content = f.readlines() + # you may also want to remove whitespace characters + # Example: `\n` at the end of each line + return [x.strip() for x in content] + +install_requires = req_file("requirements.txt") + 
+############################################################################### + +setuptools.setup( + name=__package_name__, + # Versions should comply with PEP440. For a discussion on single-sourcing + # the version across setup.py and the project code, see + # https://packaging.python.org/en/latest/single_source_version.html + version=__version__, + description=__description__, + long_description=long_description, + long_description_content_type=long_description_content_type, + # The project's main homepage. + url=__repository_url__, + download_url=__download_url__, + # Author details + author=__contact_names__, + author_email=__contact_emails__, + # maintainer Details + maintainer=__contact_names__, + maintainer_email=__contact_emails__, + # The licence under which the project is released + license=__license__, + classifiers=[ + # How mature is this project? Common values are + # 1 - Planning + # 2 - Pre-Alpha + # 3 - Alpha + # 4 - Beta + # 5 - Production/Stable + # 6 - Mature + # 7 - Inactive + 'Development Status :: 5 - Production/Stable', + # Indicate who your project is intended for + 'Intended Audience :: Developers', + 'Intended Audience :: Science/Research', + 'Intended Audience :: Information Technology', + # Indicate what your project relates to + 'Topic :: Scientific/Engineering', + 'Topic :: Scientific/Engineering :: Mathematics', + 'Topic :: Scientific/Engineering :: Image Recognition', + 'Topic :: Scientific/Engineering :: Artificial Intelligence', + 'Topic :: Software Development :: Libraries', + 'Topic :: Software Development :: Libraries :: Python Modules', + 'Topic :: Utilities', + # Pick your license as you wish (should match "license" above) + 'License :: OSI Approved :: BSD License', + # Supported python versions + 'Programming Language :: Python :: 3', + 'Programming Language :: Python :: 3.8', + 'Programming Language :: Python :: 3.9', + # Additional Setting + 'Environment :: Console', + 'Natural Language :: English', + 'Operating System :: OS Independent', + ], + packages=setuptools.find_packages(), + install_requires=install_requires, + + # Add in any packaged data. + include_package_data=True, + # PyPI package information. + keywords=__keywords__, +) \ No newline at end of file diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/synpasecrash.txt b/docker/intel_code/llama13b/Megatron-DeepSpeed/synpasecrash.txt new file mode 100644 index 0000000000000000000000000000000000000000..2cc8d7e4d3dee3eaea31a477a1761cacde6dce70 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/synpasecrash.txt @@ -0,0 +1,2246 @@ +/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. 
+ warnings.warn( +[2024-05-13 22:46:04,270] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +[2024-05-13 22:46:05,590] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env +[2024-05-13 22:46:05,592] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.164,100.83.37.175,100.83.37.188,100.83.37.165 +[2024-05-13 22:46:05,592] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.164,100.83.37.175,100.83.37.188,100.83.37.165 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTY0IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE3NSI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XSwgIjEwMC44My4zNy4xODgiOiBbMCwgMSwgMiwgMywgNCwgNSwgNiwgN10sICIxMDAuODMuMzcuMTY1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddfQ== --node_rank=%n --master_addr=100.83.37.164 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' python3 -u /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/pretrain_gpt.py --deepspeed --tensor-model-parallel-size 2 --pipeline-model-parallel-size 2 --num-layers 40 --hidden-size 5120 --ffn-hidden-size 13824 --num-attention-heads 40 --seq-length 2048 --micro-batch-size 1 --global-batch-size 2048 --train-iters 250000 --log-interval 10 --eval-iters 100 --eval-interval 1000 --data-path /data/arxiv/tokenized_text_document --optimizer fusedadamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-8 --lr 3e-4 --min-lr 3e-5 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --tensorboard-dir /data/output/llama13b//tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b//checkpoints --deepspeed_config=/data/output/llama13b//ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --no-gradient-accumulation-fusion --bf16 --max-position-embeddings 2048 --use-rotary-position-embeddings --untie-embeddings-and-output-weights --swiglu --normalization rmsnorm --disable-bias-linear --no-query-key-layer-scaling --attention-dropout 0.0 --hidden-dropout 0.0 --use-fused-sdpa 1 --use-fused-sdpa-with-recompute 0 --use-fused-rmsnorm 1 --tokenizer-type GPT2BPETokenizer --vocab-file /data/arxiv/gpt2-vocab.json --merge-file /data/arxiv/gpt2-merges.txt --sequence-parallel --save /data/output/llama13b//checkpoints --save-interval 2000 --verify-checkpoint --verify-checkpoint-model-type LLAMA' +100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts. +100.83.37.164: Warning: Permanently added '[100.83.37.164]:3122' (ED25519) to the list of known hosts. +100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts. +100.83.37.165: Warning: Permanently added '[100.83.37.165]:3122' (ED25519) to the list of known hosts. +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: ...done. 
+100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: ...done. +100.83.37.188: ...done. +100.83.37.165: ...done. +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: [2024-05-13 22:46:07,230] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.175: warnings.warn( +100.83.37.175: [2024-05-13 22:46:07,276] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.188: [2024-05-13 22:46:07,276] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.165: [2024-05-13 22:46:07,283] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: [2024-05-13 22:46:08,631] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.165': [0, 1, 2, 3, 4, 5, 6, 7]} +100.83.37.164: [2024-05-13 22:46:08,631] [INFO] [launch.py:152:main] nnodes=4, num_local_procs=8, node_rank=0 +100.83.37.164: [2024-05-13 22:46:08,631] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(, {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [8, 9, 10, 11, 12, 13, 14, 15], '100.83.37.188': [16, 17, 18, 19, 20, 21, 22, 23], '100.83.37.165': [24, 25, 26, 27, 28, 29, 30, 31]}) +100.83.37.164: [2024-05-13 22:46:08,631] [INFO] [launch.py:164:main] dist_world_size=32 +100.83.37.164: [2024-05-13 22:46:08,631] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: * Starting OpenBSD Secure Shell server sshd +100.83.37.164: ...done. +100.83.37.164: ...done. +100.83.37.164: ...done. +100.83.37.164: ...done. +100.83.37.164: ...done. +100.83.37.164: ...done. +100.83.37.164: ...done. +100.83.37.164: ...done. 
+100.83.37.175: [2024-05-13 22:46:08,695] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.165': [0, 1, 2, 3, 4, 5, 6, 7]} +100.83.37.175: [2024-05-13 22:46:08,695] [INFO] [launch.py:152:main] nnodes=4, num_local_procs=8, node_rank=1 +100.83.37.175: [2024-05-13 22:46:08,695] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(, {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [8, 9, 10, 11, 12, 13, 14, 15], '100.83.37.188': [16, 17, 18, 19, 20, 21, 22, 23], '100.83.37.165': [24, 25, 26, 27, 28, 29, 30, 31]}) +100.83.37.175: [2024-05-13 22:46:08,695] [INFO] [launch.py:164:main] dist_world_size=32 +100.83.37.175: [2024-05-13 22:46:08,695] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 +100.83.37.188: [2024-05-13 22:46:08,699] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.165': [0, 1, 2, 3, 4, 5, 6, 7]} +100.83.37.188: [2024-05-13 22:46:08,700] [INFO] [launch.py:152:main] nnodes=4, num_local_procs=8, node_rank=2 +100.83.37.188: [2024-05-13 22:46:08,700] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(, {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [8, 9, 10, 11, 12, 13, 14, 15], '100.83.37.188': [16, 17, 18, 19, 20, 21, 22, 23], '100.83.37.165': [24, 25, 26, 27, 28, 29, 30, 31]}) +100.83.37.188: [2024-05-13 22:46:08,700] [INFO] [launch.py:164:main] dist_world_size=32 +100.83.37.188: [2024-05-13 22:46:08,700] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: ...done. +100.83.37.175: ...done. +100.83.37.175: ...done. +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: ...done. +100.83.37.175: ...done. +100.83.37.175: ...done. +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: * Starting OpenBSD Secure Shell server sshd +100.83.37.175: ...done. +100.83.37.175: ...done. 
+100.83.37.165: [2024-05-13 22:46:08,709] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.165': [0, 1, 2, 3, 4, 5, 6, 7]} +100.83.37.165: [2024-05-13 22:46:08,709] [INFO] [launch.py:152:main] nnodes=4, num_local_procs=8, node_rank=3 +100.83.37.165: [2024-05-13 22:46:08,709] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(, {'100.83.37.164': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.175': [8, 9, 10, 11, 12, 13, 14, 15], '100.83.37.188': [16, 17, 18, 19, 20, 21, 22, 23], '100.83.37.165': [24, 25, 26, 27, 28, 29, 30, 31]}) +100.83.37.165: [2024-05-13 22:46:08,709] [INFO] [launch.py:164:main] dist_world_size=32 +100.83.37.165: [2024-05-13 22:46:08,709] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: ...done. +100.83.37.188: ...done. +100.83.37.188: ...done. +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: ...done. +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: ...done. +100.83.37.188: ...done. +100.83.37.188: ...done. +100.83.37.188: * Starting OpenBSD Secure Shell server sshd +100.83.37.188: ...done. +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: ...done. +100.83.37.165: ...done. +100.83.37.165: ...done. +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: ...done. +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: * Starting OpenBSD Secure Shell server sshd +100.83.37.165: ...done. +100.83.37.165: ...done. +100.83.37.165: ...done. +100.83.37.165: ...done. +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: [2024-05-13 22:46:10,333] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.165: [2024-05-13 22:46:10,350] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. 
+100.83.37.165: warnings.warn( +100.83.37.165: [2024-05-13 22:46:10,352] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: [2024-05-13 22:46:10,362] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: [2024-05-13 22:46:10,367] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: [2024-05-13 22:46:10,367] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: [2024-05-13 22:46:10,370] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: [2024-05-13 22:46:10,370] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: [2024-05-13 22:46:10,371] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.164: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.164: warnings.warn( +100.83.37.164: [2024-05-13 22:46:10,378] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: [2024-05-13 22:46:10,406] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. 
+100.83.37.175: warnings.warn( +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.165: [2024-05-13 22:46:10,414] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.175: warnings.warn( +100.83.37.175: [2024-05-13 22:46:10,417] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.175: warnings.warn( +100.83.37.175: [2024-05-13 22:46:10,420] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.175: warnings.warn( +100.83.37.175: [2024-05-13 22:46:10,425] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.175: warnings.warn( +100.83.37.175: [2024-05-13 22:46:10,435] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: [2024-05-13 22:46:10,441] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.188: [2024-05-13 22:46:10,442] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.188: [2024-05-13 22:46:10,444] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. 
Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.188: [2024-05-13 22:46:10,447] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: [2024-05-13 22:46:10,451] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.175: warnings.warn( +100.83.37.188: [2024-05-13 22:46:10,462] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.188: [2024-05-13 22:46:10,485] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.188: [2024-05-13 22:46:10,497] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.165: [2024-05-13 22:46:10,518] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.165: [2024-05-13 22:46:10,520] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: [2024-05-13 22:46:10,524] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.188: warnings.warn( +100.83.37.175: [2024-05-13 22:46:10,524] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. 
+100.83.37.175: warnings.warn( +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.165: [2024-05-13 22:46:10,526] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.165: [2024-05-13 22:46:10,537] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.165: [2024-05-13 22:46:10,560] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.165: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.165: warnings.warn( +100.83.37.175: [2024-05-13 22:46:10,941] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) +100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. +100.83.37.175: warnings.warn( +100.83.37.164: -------------------------------------------------- +100.83.37.164: DeepSpeed C++/CUDA extension op report +100.83.37.164: -------------------------------------------------- +100.83.37.164: NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op. +100.83.37.164: -------------------------------------------------- +100.83.37.164: JIT compiled ops requires ninja +100.83.37.164: -------------------------------------------------- +100.83.37.164: DeepSpeed C++/CUDA extension op report +100.83.37.164: -------------------------------------------------- +100.83.37.164: NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op. +100.83.37.164: -------------------------------------------------- +100.83.37.164: JIT compiled ops requires ninja +100.83.37.164: -------------------------------------------------- +100.83.37.164: ----------------------------------------------------------------------------------------------------DeepSpeed C++/CUDA extension op report-------------------------------------------------- +100.83.37.164: +100.83.37.164: +100.83.37.164: +100.83.37.164: DeepSpeed C++/CUDA extension op reportDeepSpeed C++/CUDA extension op report-------------------------------------------------- +100.83.37.164: DeepSpeed C++/CUDA extension op report +100.83.37.164: +100.83.37.164: +100.83.37.164: ----------------------------------------------------------------------------------------------------NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. 
Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op.-------------------------------------------------- +100.83.37.164: +100.83.37.164: +100.83.37.164: +100.83.37.164: NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op.NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op.--------------------------------------------------NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op. +100.83.37.164: +100.83.37.164: +100.83.37.164: +100.83.37.164: --------------------------------------------------JIT compiled ops requires ninja---------------------------------------------------------------------------------------------------- +100.83.37.164: +100.83.37.164: +100.83.37.164: +100.83.37.164: JIT compiled ops requires ninjaJIT compiled ops requires ninja +100.83.37.164: JIT compiled ops requires ninja +100.83.37.164: +100.83.37.164: ninjaninja .................. ..................[OKAY] +100.83.37.164: [OKAY] +100.83.37.164: -------------------------------------------------- +100.83.37.164: --------------------------------------------------op name +100.83.37.164: ................ op nameinstalled .................. installedcompatible +100.83.37.164: .. --------------------------------------------------compatible +100.83.37.164: +100.83.37.164: -------------------------------------------------- +100.83.37.164: cpu_adam ............... cpu_adam[NO] ...................... [OKAY][NO]ninja +100.83.37.164: ninja ninjafused_adam .......................................................................... [OKAY][OKAY][OKAY][OKAY]ninja +100.83.37.164: [NO] +100.83.37.164: -------------------------------------------------- +100.83.37.164: +100.83.37.164: +100.83.37.164: fused_adam..................--------------------------------------------------op name.......-------------------------------------------------- +100.83.37.164: +100.83.37.164: [OKAY]............. +100.83.37.164: [OKAY]op name................deepspeed_not_implemented +100.83.37.164: op name ................installed --------------------------------------------------[NO] installed .. +100.83.37.164: .................. [NO]compatible....... op nameinstalled compatible +100.83.37.164: ....... -------------------------------------------------- ................ +100.83.37.164: [OKAY][OKAY] +100.83.37.164: +100.83.37.164: .. deepspeed_not_implemented--------------------------------------------------installed +100.83.37.164: +100.83.37.164: cpu_adam transformer_inference[NO]cpu_adamcompatible................. [NO] +100.83.37.164: ............... --------------------------------------------------......... +100.83.37.164: ....... compatible +100.83.37.164: [NO][NO][OKAY]cpu_adam[OKAY] -------------------------------------------------- +100.83.37.164: +100.83.37.164: ...................... +100.83.37.164: .......transformer_inference[OKAY]fused_adam [OKAY] +100.83.37.164: +100.83.37.164: [NO]cpu_adam--------------------------------------------------............... 
.......fused_adam +100.83.37.164: ............................[NO] [NO] [OKAY] [NO][NO]....... +100.83.37.164: .............. fused_adam .......[OKAY][OKAY][OKAY]............. +100.83.37.164: +100.83.37.164: +100.83.37.164: [OKAY]deepspeed_not_implemented--------------------------------------------------[NO]deepspeed_not_implemented +100.83.37.164: +100.83.37.164: DeepSpeed general environment info:.......fused_adam[NO] +100.83.37.164: ....... [OKAY][OKAY][NO]torch install path ............. +100.83.37.164: +100.83.37.164: ...................... deepspeed_not_implemented transformer_inference[NO][OKAY] DeepSpeed general environment info:['/usr/local/lib/python3.10/dist-packages/torch'] [NO] +100.83.37.164: ....... +100.83.37.164: +100.83.37.164: .. transformer_inferencetorch install path[OKAY]....... torch version +100.83.37.164: ............... ..[NO] [OKAY] ....................deepspeed_not_implemented [NO] +100.83.37.164: ['/usr/local/lib/python3.10/dist-packages/torch'].......2.1.1a0+gitb51c9f6 .......transformer_inference[NO] +100.83.37.164: +100.83.37.164: deepspeed install path ..torch version[OKAY] [OKAY]....... +100.83.37.164: ........... +100.83.37.164: [NO][OKAY].................... ---------------------------------------------------------------------------------------------------- ['/usr/local/lib/python3.10/dist-packages/deepspeed']....... +100.83.37.164: +100.83.37.164: +100.83.37.164: +100.83.37.164: 2.1.1a0+gitb51c9f6transformer_inferencedeepspeed info[OKAY] +100.83.37.164: +100.83.37.164: ...................deepspeed install path.. --------------------------------------------------........... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 +100.83.37.164: [NO] +100.83.37.164: deepspeed wheel compiled w.['/usr/local/lib/python3.10/dist-packages/deepspeed']....... +100.83.37.164: ......deepspeed info [OKAY]...................torch 2.1 +100.83.37.164: +100.83.37.164: 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0shared memory (/dev/shm) size +100.83.37.164: --------------------------------------------------....deepspeed wheel compiled w. +100.83.37.164: 503.75 GB...... +100.83.37.164: DeepSpeed general environment info:torch 2.1 +100.83.37.164: +100.83.37.164: shared memory (/dev/shm) sizetorch install path ....DeepSpeed general environment info:DeepSpeed general environment info:............... +100.83.37.164: 503.75 GB +100.83.37.164: +100.83.37.164: torch install pathtorch install path ['/usr/local/lib/python3.10/dist-packages/torch']............... +100.83.37.164: ...............torch version ['/usr/local/lib/python3.10/dist-packages/torch'].................... +100.83.37.164: 2.1.1a0+gitb51c9f6DeepSpeed general environment info:torch version['/usr/local/lib/python3.10/dist-packages/torch'] +100.83.37.164: +100.83.37.164: +100.83.37.164: ....................deepspeed install path torch install path 2.1.1a0+gitb51c9f6........... +100.83.37.164: torch version deepspeed install path...................................['/usr/local/lib/python3.10/dist-packages/deepspeed'] +100.83.37.164: 2.1.1a0+gitb51c9f6...........deepspeed info +100.83.37.164: ['/usr/local/lib/python3.10/dist-packages/deepspeed']['/usr/local/lib/python3.10/dist-packages/torch'] +100.83.37.164: ...................deepspeed install pathdeepspeed info +100.83.37.164: ........... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0torch version ................... 
+100.83.37.164: deepspeed wheel compiled w.['/usr/local/lib/python3.10/dist-packages/deepspeed']0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 +100.83.37.164: .................... +100.83.37.164: deepspeed info......deepspeed wheel compiled w. torch 2.1 ...................2.1.1a0+gitb51c9f6 +100.83.37.164: ...... +100.83.37.164: 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0shared memory (/dev/shm) sizetorch 2.1 deepspeed install path +100.83.37.164: +100.83.37.164: shared memory (/dev/shm) size .... deepspeed wheel compiled w................ 503.75 GB +100.83.37.164: 503.75 GB...... +100.83.37.164: ['/usr/local/lib/python3.10/dist-packages/deepspeed']torch 2.1 +100.83.37.164: +100.83.37.164: deepspeed info ...................shared memory (/dev/shm) size 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0.... +100.83.37.164: deepspeed wheel compiled w.503.75 GB ...... +100.83.37.164: torch 2.1 +100.83.37.164: shared memory (/dev/shm) size .... 503.75 GB +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: 
-------------------------------------------------- +100.83.37.164: DeepSpeed C++/CUDA extension op report +100.83.37.164: -------------------------------------------------- +100.83.37.164: NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op. +100.83.37.164: -------------------------------------------------- +100.83.37.164: JIT compiled ops requires ninja +100.83.37.164: ninja .................. [OKAY] +100.83.37.164: -------------------------------------------------- +100.83.37.164: op name ................ installed .. compatible +100.83.37.164: -------------------------------------------------- +100.83.37.164: cpu_adam ............... [NO] ....... [OKAY] +100.83.37.164: fused_adam ............. [NO] ....... [OKAY] +100.83.37.164: deepspeed_not_implemented [NO] ....... [OKAY] +100.83.37.164: transformer_inference .. [NO] ....... [OKAY] +100.83.37.164: -------------------------------------------------- +100.83.37.164: DeepSpeed general environment info: +100.83.37.164: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] +100.83.37.164: torch version .................... 2.1.1a0+gitb51c9f6 +100.83.37.164: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] +100.83.37.164: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 +100.83.37.164: deepspeed wheel compiled w. ...... torch 2.1 +100.83.37.164: shared memory (/dev/shm) size .... 503.75 GB +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.165: -------------------------------------------------- +100.83.37.165: ----------------------------------------------------------------------------------------------------DeepSpeed C++/CUDA extension op report +100.83.37.165: +100.83.37.165: +100.83.37.165: --------------------------------------------------DeepSpeed C++/CUDA extension op reportDeepSpeed C++/CUDA extension op report +100.83.37.165: +100.83.37.165: +100.83.37.165: NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.165: runtime if needed. Op compatibility means that your system +100.83.37.165: meet the required dependencies to JIT install the op.---------------------------------------------------------------------------------------------------- +100.83.37.165: +100.83.37.165: +100.83.37.165: --------------------------------------------------NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.165: runtime if needed. Op compatibility means that your system +100.83.37.165: meet the required dependencies to JIT install the op.NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.165: runtime if needed. Op compatibility means that your system +100.83.37.165: meet the required dependencies to JIT install the op. 
+100.83.37.165: +100.83.37.165: +100.83.37.165: --------------------------------------------------JIT compiled ops requires ninja-------------------------------------------------- +100.83.37.165: +100.83.37.165: +100.83.37.165: JIT compiled ops requires ninjaJIT compiled ops requires ninja +100.83.37.165: +100.83.37.165: ninjaninja .................................... [OKAY][OKAY] +100.83.37.165: +100.83.37.165: ---------------------------------------------------------------------------------------------------- +100.83.37.165: +100.83.37.165: op nameop nameninja ................................ installed..................installed .. [OKAY]..compatible +100.83.37.165: +100.83.37.165: compatible +100.83.37.165: ---------------------------------------------------------------------------------------------------- +100.83.37.165: -------------------------------------------------- +100.83.37.165: +100.83.37.165: op name ................ installed cpu_adam.. cpu_adam ............... ...............compatible +100.83.37.165: [NO][NO] --------------------------------------------------.............. +100.83.37.165: [OKAY][OKAY] +100.83.37.165: +100.83.37.165: fused_adamfused_adam cpu_adam.......................... ...............[NO][NO] [NO].............. .......[OKAY][OKAY] +100.83.37.165: +100.83.37.165: [OKAY] +100.83.37.165: deepspeed_not_implementeddeepspeed_not_implemented fused_adam ............. [NO][NO][NO] ..................... [OKAY][OKAY][OKAY] +100.83.37.165: +100.83.37.165: +100.83.37.165: transformer_inferencedeepspeed_not_implementedtransformer_inference .... [NO] [NO][NO]....... ..............[OKAY] +100.83.37.165: [OKAY][OKAY] +100.83.37.165: -------------------------------------------------- +100.83.37.165: +100.83.37.165: -------------------------------------------------- +100.83.37.165: transformer_inference .. [NO] ....... [OKAY] +100.83.37.165: -------------------------------------------------- +100.83.37.165: DeepSpeed general environment info: +100.83.37.165: DeepSpeed general environment info:torch install path +100.83.37.165: DeepSpeed general environment info:...............torch install path +100.83.37.165: ............... torch install path['/usr/local/lib/python3.10/dist-packages/torch'] +100.83.37.165: ............... ['/usr/local/lib/python3.10/dist-packages/torch'] +100.83.37.165: torch version ['/usr/local/lib/python3.10/dist-packages/torch']....................torch version +100.83.37.165: ....................2.1.1a0+gitb51c9f6torch version +100.83.37.165: 2.1.1a0+gitb51c9f6....................deepspeed install path +100.83.37.165: deepspeed install path...........2.1.1a0+gitb51c9f6 +100.83.37.165: ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']deepspeed install path +100.83.37.165: ['/usr/local/lib/python3.10/dist-packages/deepspeed'] +100.83.37.165: deepspeed info........... deepspeed info ................... ...................['/usr/local/lib/python3.10/dist-packages/deepspeed']0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 +100.83.37.165: +100.83.37.165: 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0deepspeed info +100.83.37.165: deepspeed wheel compiled w. ...................deepspeed wheel compiled w....... ......0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0torch 2.1 +100.83.37.165: +100.83.37.165: torch 2.1 deepspeed wheel compiled w. +100.83.37.165: shared memory (/dev/shm) size ......shared memory (/dev/shm) size .... ....torch 2.1 +100.83.37.165: 503.75 GB503.75 GB +100.83.37.165: +100.83.37.165: shared memory (/dev/shm) size .... 
503.75 GB +100.83.37.164: -------------------------------------------------- +100.83.37.164: DeepSpeed C++/CUDA extension op report +100.83.37.164: -------------------------------------------------- +100.83.37.164: NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.164: runtime if needed. Op compatibility means that your system +100.83.37.164: meet the required dependencies to JIT install the op. +100.83.37.164: -------------------------------------------------- +100.83.37.164: JIT compiled ops requires ninja +100.83.37.164: ninja .................. [OKAY] +100.83.37.164: -------------------------------------------------- +100.83.37.164: op name ................ installed .. compatible +100.83.37.164: -------------------------------------------------- +100.83.37.164: cpu_adam ............... [NO] ....... [OKAY] +100.83.37.164: fused_adam ............. [NO] ....... [OKAY] +100.83.37.164: deepspeed_not_implemented [NO] ....... [OKAY] +100.83.37.164: transformer_inference .. [NO] ....... [OKAY] +100.83.37.164: -------------------------------------------------- +100.83.37.165: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.165: To add an exception for this directory, call: +100.83.37.165: +100.83.37.165: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: DeepSpeed general environment info: +100.83.37.165: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] +100.83.37.165: To add an exception for this directory, call: +100.83.37.165: +100.83.37.164: torch version .................... 2.1.1a0+gitb51c9f6 +100.83.37.165: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] +100.83.37.164: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 +100.83.37.164: deepspeed wheel compiled w. ...... torch 2.1 +100.83.37.164: shared memory (/dev/shm) size .... 
503.75 GB +100.83.37.165: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.165: To add an exception for this directory, call: +100.83.37.165: +100.83.37.165: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.165: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.165: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.165: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.164: To add an exception for this directory, call: +100.83.37.164: +100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.164: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.164: INFO: overriding default arguments for tokenizer_type:GPT2BPETokenizer with tokenizer_type:GPT2BPETokenizer +100.83.37.164: using world size: 32, data-parallel-size: 8, sequence-parallel size: 1, tensor-model-parallel size: 2, pipeline-model-parallel size: 2 +100.83.37.164: accumulate and all-reduce gradients in fp32 for bfloat16 data type. +100.83.37.164: using torch.bfloat16 for parameters ... +100.83.37.164: ------------------------ arguments ------------------------ +100.83.37.164: accumulate_allreduce_grads_in_fp32 .............. True +100.83.37.164: adam_beta1 ...................................... 0.9 +100.83.37.164: adam_beta2 ...................................... 0.95 +100.83.37.164: adam_eps ........................................ 1e-08 +100.83.37.164: add_bias_linear ................................. False +100.83.37.164: add_position_embedding .......................... False +100.83.37.164: adlr_autoresume ................................. False +100.83.37.164: adlr_autoresume_interval ........................ 1000 +100.83.37.164: aml_data_download_path .......................... None +100.83.37.164: apply_layernorm_1p .............................. False +100.83.37.164: apply_query_key_layer_scaling ................... False +100.83.37.164: apply_residual_connection_post_layernorm ........ False +100.83.37.164: async_tensor_model_parallel_allreduce ........... False +100.83.37.164: attention_dropout ............................... 0.0 +100.83.37.164: attention_softmax_in_fp32 ....................... False +100.83.37.164: barrier_with_L1_time ............................ True +100.83.37.164: bert_binary_head ................................ True +100.83.37.164: bert_embedder_type .............................. megatron +100.83.37.164: bert_load ....................................... None +100.83.37.164: bf16 ............................................ True +100.83.37.164: bias_dropout_fusion ............................. False +100.83.37.164: bias_gelu_fusion ................................ False +100.83.37.164: biencoder_projection_dim ........................ 0 +100.83.37.164: biencoder_shared_query_context_model ............ False +100.83.37.164: block_data_path ................................. None +100.83.37.164: cache_fp8_weight ................................ False +100.83.37.164: cache_fp8_weight_fwd ............................ True +100.83.37.164: checkpoint_activations .......................... 
False +100.83.37.164: checkpoint_in_cpu ............................... False +100.83.37.164: checkpoint_num_layers ........................... 1 +100.83.37.164: classes_fraction ................................ 1.0 +100.83.37.164: clip_grad ....................................... 1.0 +100.83.37.164: compression_training ............................ False +100.83.37.164: consumed_train_samples .......................... 0 +100.83.37.164: consumed_train_tokens ........................... 0 +100.83.37.164: consumed_valid_samples .......................... 0 +100.83.37.164: contigious_checkpointing ........................ False +100.83.37.164: cpu_optimizer ................................... False +100.83.37.164: cpu_torch_adam .................................. False +100.83.37.164: create_moe_param_group .......................... False +100.83.37.164: curriculum_learning_legacy ...................... False +100.83.37.164: data_cache_path ................................. None +100.83.37.164: data_efficiency_curriculum_learning ............. False +100.83.37.164: data_impl ....................................... infer +100.83.37.164: data_parallel_random_init ....................... False +100.83.37.164: data_parallel_size .............................. 8 +100.83.37.164: data_path ....................................... ['/data/arxiv/tokenized_text_document'] +100.83.37.164: data_per_class_fraction ......................... 1.0 +100.83.37.164: data_sharding ................................... True +100.83.37.164: dataloader_type ................................. single +100.83.37.164: DDP_impl ........................................ local +100.83.37.164: decoder_num_layers .............................. None +100.83.37.164: decoder_seq_length .............................. None +100.83.37.164: deepscale ....................................... False +100.83.37.164: deepscale_config ................................ None +100.83.37.164: deepspeed ....................................... True +100.83.37.164: deepspeed_activation_checkpointing .............. False +100.83.37.164: deepspeed_config ................................ /data/output/llama13b//ds_config.json +100.83.37.164: deepspeed_mpi ................................... False +100.83.37.164: dino_bottleneck_size ............................ 256 +100.83.37.164: dino_freeze_last_layer .......................... 1 +100.83.37.164: dino_head_hidden_size ........................... 2048 +100.83.37.164: dino_local_crops_number ......................... 10 +100.83.37.164: dino_local_img_size ............................. 96 +100.83.37.164: dino_norm_last_layer ............................ False +100.83.37.164: dino_teacher_temp ............................... 0.07 +100.83.37.164: dino_warmup_teacher_temp ........................ 0.04 +100.83.37.164: dino_warmup_teacher_temp_epochs ................. 30 +100.83.37.164: distribute_checkpointed_activations ............. False +100.83.37.164: distribute_saved_activations .................... False +100.83.37.164: distributed_backend ............................. nccl +100.83.37.164: distributed_timeout_minutes ..................... 10 +100.83.37.164: do_norm_bias_weight_decay ....................... False +100.83.37.164: ds_inference .................................... False +100.83.37.164: ds_pipeline_enabled ............................. True +100.83.37.164: ds_sequence_parallel_size ....................... 1 +100.83.37.164: embed_layernorm ................................. 
False +100.83.37.164: embedding_path .................................. None +100.83.37.164: embedding_weights_in_fp32 ....................... False +100.83.37.164: empty_unused_memory_level ....................... 0 +100.83.37.164: enable_expert_tensor_parallelism ................ False +100.83.37.164: encoder_num_layers .............................. 40 +100.83.37.164: encoder_seq_length .............................. 2048 +100.83.37.164: end_weight_decay ................................ 0.1 +100.83.37.164: eod_mask_loss ................................... False +100.83.37.164: eval_interval ................................... 1000 +100.83.37.164: eval_iters ...................................... 100 +100.83.37.164: eval_loss_exit_value ............................ None +100.83.37.164: eval_micro_batch_size ........................... 1 +100.83.37.164: evidence_data_path .............................. None +100.83.37.164: exit_duration_in_mins ........................... None +100.83.37.164: exit_interval ................................... 0 +100.83.37.164: exit_on_missing_checkpoint ...................... False +100.83.37.164: exit_signal_handler ............................. False +100.83.37.164: expert_interval ................................. 2 +100.83.37.164: ffn_hidden_size ................................. 13824 +100.83.37.164: finetune ........................................ False +100.83.37.164: fix_position_emb_redundant_alloc ................ False +100.83.37.164: force_ds_sequence_parallel ...................... False +100.83.37.164: fp16 ............................................ False +100.83.37.164: fp16_lm_cross_entropy ........................... False +100.83.37.164: fp32_residual_connection ........................ False +100.83.37.164: fp8_amax_compute_algo ........................... most_recent +100.83.37.164: fp8_amax_history_len ............................ 1 +100.83.37.164: fp8_e4m3 ........................................ False +100.83.37.164: fp8_e5m2 ........................................ False +100.83.37.164: fp8_hybrid ...................................... False +100.83.37.164: fp8_interval .................................... 1 +100.83.37.164: fp8_margin ...................................... 0 +100.83.37.164: fp8_wgrad ....................................... True +100.83.37.164: global_batch_size ............................... 2048 +100.83.37.164: gradient_accumulation_fusion .................... False +100.83.37.164: head_lr_mult .................................... 1.0 +100.83.37.164: hidden_dropout .................................. 0.0 +100.83.37.164: hidden_size ..................................... 5120 +100.83.37.164: hidden_size_teacher ............................. None +100.83.37.164: hysteresis ...................................... 2 +100.83.37.164: ict_head_size ................................... None +100.83.37.164: ict_load ........................................ None +100.83.37.164: img_h ........................................... 224 +100.83.37.164: img_w ........................................... 224 +100.83.37.164: indexer_batch_size .............................. 128 +100.83.37.164: indexer_log_interval ............................ 1000 +100.83.37.164: inference ....................................... False +100.83.37.164: inference_batch_times_seqlen_threshold .......... 512 +100.83.37.164: init_method_std ................................. 0.02 +100.83.37.164: init_method_xavier_uniform ...................... 
False +100.83.37.164: initial_loss_scale .............................. 4294967296 +100.83.37.164: iter_per_epoch .................................. 1250 +100.83.37.164: kd .............................................. False +100.83.37.164: kd_alpha_ce ..................................... 1 +100.83.37.164: kd_beta_ce ...................................... 1 +100.83.37.164: kd_temp ......................................... 1.0 +100.83.37.164: kill_switch_path ................................ None +100.83.37.164: kv_channels ..................................... 128 +100.83.37.164: layernorm_epsilon ............................... 1e-05 +100.83.37.164: lazy_mpu_init ................................... None +100.83.37.164: load ............................................ /data/output/llama13b//checkpoints +100.83.37.164: load_teacher .................................... None +100.83.37.164: local_rank ...................................... None +100.83.37.164: log_batch_size_to_tensorboard ................... True +100.83.37.164: log_interval .................................... 10 +100.83.37.164: log_learning_rate_to_tensorboard ................ True +100.83.37.164: log_loss_scale_to_tensorboard ................... True +100.83.37.164: log_memory_to_tensorboard ....................... False +100.83.37.164: log_num_zeros_in_grad ........................... False +100.83.37.164: log_optimizer_states_to_tensorboard ............. False +100.83.37.164: log_params_norm ................................. False +100.83.37.164: log_timers_to_tensorboard ....................... True +100.83.37.164: log_validation_ppl_to_tensorboard ............... True +100.83.37.164: log_world_size_to_tensorboard ................... False +100.83.37.164: loss_scale ...................................... None +100.83.37.164: loss_scale_window ............................... 1000 +100.83.37.164: lr .............................................. 0.0003 +100.83.37.164: lr_decay_iters .................................. None +100.83.37.164: lr_decay_samples ................................ None +100.83.37.164: lr_decay_style .................................. cosine +100.83.37.164: lr_decay_tokens ................................. None +100.83.37.164: lr_warmup_fraction .............................. None +100.83.37.164: lr_warmup_iters ................................. 2000 +100.83.37.164: lr_warmup_samples ............................... 0 +100.83.37.164: lr_warmup_tokens ................................ None +100.83.37.164: make_vocab_size_divisible_by .................... 128 +100.83.37.164: mask_factor ..................................... 1.0 +100.83.37.164: mask_prob ....................................... 0.15 +100.83.37.164: mask_tensor_adding .............................. False +100.83.37.164: mask_type ....................................... random +100.83.37.164: masked_softmax_fusion ........................... False +100.83.37.164: max_position_embeddings ......................... 2048 +100.83.37.164: max_tokens_to_oom ............................... 12000 +100.83.37.164: mem_efficient_ln ................................ True +100.83.37.164: memory_centric_tiled_linear ..................... False +100.83.37.164: merge_file ...................................... /data/arxiv/gpt2-merges.txt +100.83.37.164: micro_batch_size ................................ 1 +100.83.37.164: min_loss_scale .................................. 1.0 +100.83.37.164: min_lr .......................................... 
3e-05 +100.83.37.164: mlp_type ........................................ standard +100.83.37.164: mmap_warmup ..................................... False +100.83.37.164: moe_eval_capacity_factor ........................ 1.0 +100.83.37.164: moe_expert_parallel_size ........................ 1 +100.83.37.164: moe_loss_coeff .................................. 0.1 +100.83.37.164: moe_min_capacity ................................ 4 +100.83.37.164: moe_token_dropping .............................. True +100.83.37.164: moe_train_capacity_factor ....................... 1.0 +100.83.37.164: mos ............................................. False +100.83.37.164: no_cuda ......................................... False +100.83.37.164: no_load_lr_state ................................ False +100.83.37.164: no_load_optim ................................... None +100.83.37.164: no_load_rng ..................................... None +100.83.37.164: no_persist_layer_norm ........................... False +100.83.37.164: no_pipeline_parallel ............................ False +100.83.37.164: no_save_optim ................................... None +100.83.37.164: no_save_rng ..................................... None +100.83.37.164: no_scaled_init .................................. False +100.83.37.164: normalization ................................... rmsnorm +100.83.37.164: num_attention_heads ............................. 40 +100.83.37.164: num_attention_heads_teacher ..................... None +100.83.37.164: num_channels .................................... 3 +100.83.37.164: num_classes ..................................... 1000 +100.83.37.164: num_experts ..................................... [1] +100.83.37.164: num_experts_switch .............................. None +100.83.37.164: num_experts_teacher ............................. [1] +100.83.37.164: num_key_value_heads ............................. 40 +100.83.37.164: num_layers ...................................... 40 +100.83.37.164: num_layers_per_virtual_pipeline_stage ........... None +100.83.37.164: num_layers_teacher .............................. None +100.83.37.164: num_workers ..................................... 2 +100.83.37.164: onnx_safe ....................................... None +100.83.37.164: openai_gelu ..................................... False +100.83.37.164: optimizer ....................................... fusedadamw +100.83.37.164: output_bert_embeddings .......................... False +100.83.37.164: overlap_p2p_comm ................................ False +100.83.37.164: override_opt_param_scheduler .................... False +100.83.37.164: params_dtype .................................... torch.bfloat16 +100.83.37.164: partition_activations ........................... False +100.83.37.164: patch_dim ....................................... 16 +100.83.37.164: perform_initialization .......................... True +100.83.37.164: pipeline_model_parallel_size .................... 2 +100.83.37.164: pipeline_model_parallel_split_rank .............. None +100.83.37.164: profile ......................................... None +100.83.37.164: profile_backward ................................ False +100.83.37.164: profile_steps ................................... 3,4 +100.83.37.164: query_in_block_prob ............................. 0.1 +100.83.37.164: rampup_batch_size ............................... None +100.83.37.164: random_ltd ...................................... 
False +100.83.37.164: rank ............................................ 0 +100.83.37.164: recompute_granularity ........................... None +100.83.37.164: recompute_method ................................ None +100.83.37.164: recompute_num_layers ............................ 1 +100.83.37.164: remote_device ................................... none +100.83.37.164: reset_attention_mask ............................ False +100.83.37.164: reset_iteration ................................. False +100.83.37.164: reset_position_ids .............................. False +100.83.37.164: retriever_report_topk_accuracies ................ [] +100.83.37.164: retriever_score_scaling ......................... False +100.83.37.164: retriever_seq_length ............................ 256 +100.83.37.164: retro_add_retriever ............................. False +100.83.37.164: retro_cyclic_train_iters ........................ None +100.83.37.164: retro_encoder_attention_dropout ................. 0.1 +100.83.37.164: retro_encoder_hidden_dropout .................... 0.1 +100.83.37.164: retro_encoder_layers ............................ 2 +100.83.37.164: retro_num_neighbors ............................. 2 +100.83.37.164: retro_num_retrieved_chunks ...................... 2 +100.83.37.164: retro_return_doc_ids ............................ False +100.83.37.164: retro_workdir ................................... None +100.83.37.164: return_data_index ............................... False +100.83.37.164: rotary_percent .................................. 1.0 +100.83.37.164: sample_rate ..................................... 1.0 +100.83.37.164: save ............................................ /data/output/llama13b//checkpoints +100.83.37.164: save_interval ................................... 2000 +100.83.37.164: scatter_gather_tensors_in_pipeline .............. True +100.83.37.164: scattered_embeddings ............................ False +100.83.37.164: seed ............................................ 1234 +100.83.37.164: seq_length ...................................... 2048 +100.83.37.164: sequence_parallel ............................... True +100.83.37.164: sgd_momentum .................................... 0.9 +100.83.37.164: short_seq_prob .................................. 0.1 +100.83.37.164: skip_train ...................................... False +100.83.37.164: split ........................................... 969, 30, 1 +100.83.37.164: split_transformers .............................. False +100.83.37.164: squared_relu .................................... False +100.83.37.164: standalone_embedding_stage ...................... False +100.83.37.164: start_weight_decay .............................. 0.1 +100.83.37.164: swiglu .......................................... True +100.83.37.164: swin_backbone_type .............................. tiny +100.83.37.164: synchronize_each_layer .......................... False +100.83.37.164: tensor_model_parallel_size ...................... 2 +100.83.37.164: tensorboard_dir ................................. /data/output/llama13b//tensorboard +100.83.37.164: tensorboard_log_interval ........................ 1 +100.83.37.164: tensorboard_queue_size .......................... 1000 +100.83.37.164: test_data_path .................................. None +100.83.37.164: tile_factor ..................................... 1 +100.83.37.164: timing_log_level ................................ 0 +100.83.37.164: timing_log_option ............................... 
minmax +100.83.37.164: titles_data_path ................................ None +100.83.37.164: tokenizer_model ................................. None +100.83.37.164: tokenizer_type .................................. GPT2BPETokenizer +100.83.37.164: topk ............................................ 1 +100.83.37.164: train_data_exact_num_epochs ..................... None +100.83.37.164: train_data_path ................................. None +100.83.37.164: train_desc_path ................................. None +100.83.37.164: train_doc_idx_path .............................. None +100.83.37.164: train_idx_path .................................. None +100.83.37.164: train_iters ..................................... 250000 +100.83.37.164: train_sample_idx_path ........................... None +100.83.37.164: train_samples ................................... None +100.83.37.164: train_shuffle_idx_path .......................... None +100.83.37.164: train_tokens .................................... None +100.83.37.164: transformer_impl ................................ local +100.83.37.164: transformer_pipeline_model_parallel_size ........ 2 +100.83.37.164: universal_checkpoint ............................ False +100.83.37.164: untie_embeddings_and_output_weights ............. True +100.83.37.164: use_checkpoint_args ............................. False +100.83.37.164: use_checkpoint_opt_param_scheduler .............. False +100.83.37.164: use_contiguous_buffers_in_local_ddp ............. True +100.83.37.164: use_cpu_initialization .......................... None +100.83.37.164: use_dataset_only ................................ False +100.83.37.164: use_distributed_optimizer ....................... False +100.83.37.164: use_flash_attn .................................. False +100.83.37.164: use_flash_attn_triton ........................... False +100.83.37.164: use_flash_attn_v1 ............................... False +100.83.37.164: use_flash_attn_v2 ............................... False +100.83.37.164: use_fused_rmsnorm ............................... True +100.83.37.164: use_fused_sdpa .................................. True +100.83.37.164: use_fused_sdpa_with_recompute ................... False +100.83.37.164: use_hpu ......................................... False +100.83.37.164: use_one_sent_docs ............................... False +100.83.37.164: use_pin_memory .................................. False +100.83.37.164: use_ring_exchange_p2p ........................... False +100.83.37.164: use_rotary_position_embeddings .................. True +100.83.37.164: use_seq_len_plus_one_tokens ..................... True +100.83.37.164: use_tutel ....................................... False +100.83.37.164: valid_data_path ................................. None +100.83.37.164: variable_seq_lengths ............................ False +100.83.37.164: verify_checkpoint ............................... True +100.83.37.164: verify_checkpoint_model_type .................... LLAMA +100.83.37.164: virtual_pipeline_model_parallel_size ............ None +100.83.37.164: vision_backbone_type ............................ vit +100.83.37.164: vision_pretraining .............................. False +100.83.37.164: vision_pretraining_type ......................... classify +100.83.37.164: vocab_extra_ids ................................. 0 +100.83.37.164: vocab_file ...................................... /data/arxiv/gpt2-vocab.json +100.83.37.164: vocab_size ...................................... 
None
+100.83.37.164: weight_decay .................................... 0.1
+100.83.37.164: weight_decay_incr_style ......................... constant
+100.83.37.164: world_size ...................................... 32
+100.83.37.164: zero_allgather_bucket_size ...................... 0.0
+100.83.37.164: zero_contigious_gradients ....................... False
+100.83.37.164: zero_reduce_bucket_size ......................... 0.0
+100.83.37.164: zero_reduce_scatter ............................. False
+100.83.37.164: zero_stage ...................................... 0
+100.83.37.164: -------------------- end of arguments ---------------------
+100.83.37.164: setting number of micro-batches to constant 256
+100.83.37.164: setting number of micro-batches to constant 256
+100.83.37.164: > building GPT2BPETokenizer tokenizer ...
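+The world size and micro-batch count reported in this log follow directly from the parallelism arguments above. A minimal cross-check (values copied from the argument dump; the snippet is illustrative and not part of the training scripts):
+```bash
+# 32 ranks = data-parallel 8 x tensor-parallel 2 x pipeline-parallel 2
+DP=8; TP=2; PP=2
+echo $(( DP * TP * PP ))        # 32  -> world size
+
+# micro-batches per global batch = global_batch_size / (micro_batch_size * data_parallel_size)
+GBS=2048; MBS=1
+echo $(( GBS / (MBS * DP) ))    # 256 -> "setting number of micro-batches to constant 256"
+```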
+100.83.37.188: --------------------------------------------------
+100.83.37.188: DeepSpeed C++/CUDA extension op report
+100.83.37.188: --------------------------------------------------
+100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
+100.83.37.188: runtime if needed. Op compatibility means that your system
+100.83.37.188: meet the required dependencies to JIT install the op.
+100.83.37.188: --------------------------------------------------
+100.83.37.188: JIT compiled ops requires ninja
+100.83.37.188: ninja .................. [OKAY]
+100.83.37.188: --------------------------------------------------
+100.83.37.188: op name ................ installed .. compatible
+100.83.37.188: --------------------------------------------------
+100.83.37.188: cpu_adam ............... [NO] ....... [OKAY]
+100.83.37.188: fused_adam ............. [NO] ....... [OKAY]
+100.83.37.188: deepspeed_not_implemented [NO] ....... [OKAY]
+100.83.37.188: transformer_inference .. [NO] ....... [OKAY]
+100.83.37.188: --------------------------------------------------
+100.83.37.188: DeepSpeed general environment info:
+100.83.37.188: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
+100.83.37.188: torch version .................... 2.1.1a0+gitb51c9f6
+100.83.37.188: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
+100.83.37.188: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
+100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
+100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
+100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
+100.83.37.188: To add an exception for this directory, call:
+100.83.37.188:
+100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
+100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
+100.83.37.175: --------------------------------------------------
+100.83.37.175: DeepSpeed C++/CUDA extension op report
+100.83.37.175: --------------------------------------------------
+100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
+100.83.37.175: runtime if needed. Op compatibility means that your system
+100.83.37.175: meet the required dependencies to JIT install the op.
+100.83.37.175: --------------------------------------------------
+100.83.37.175: JIT compiled ops requires ninja
+100.83.37.175: ninja .................. [OKAY]
+100.83.37.175: --------------------------------------------------
+100.83.37.175: op name ................ installed .. compatible
+100.83.37.175: --------------------------------------------------
+100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
+100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
+100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
+100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
+100.83.37.175: --------------------------------------------------
+100.83.37.175: DeepSpeed general environment info:
+100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
+100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
+100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
+100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
+100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
+100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
+100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
+100.83.37.175: To add an exception for this directory, call:
+100.83.37.175:
+100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
+100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
+100.83.37.164: > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
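+The padded vocabulary size is consistent with padding the GPT-2 vocabulary (50257 tokens) up to a multiple of make_vocab_size_divisible_by (128) times the tensor-model-parallel size (2). An illustrative check, not part of the training scripts:
+```bash
+VOCAB=50257
+DIVISOR=$(( 128 * 2 ))                                    # 256
+PADDED=$(( (VOCAB + DIVISOR - 1) / DIVISOR * DIVISOR ))   # round up to a multiple of 256
+echo "$PADDED padded, $(( PADDED - VOCAB )) dummy tokens" # 50432 padded, 175 dummy tokens
+```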
+100.83.37.165: > setting tensorboard ...
+100.83.37.164: [2024-05-13 22:46:12,562] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,562] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.164: [2024-05-13 22:46:12,562] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,562] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.164: [2024-05-13 22:46:12,562] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,562] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.164: > initializing torch distributed ...
+100.83.37.164: [2024-05-13 22:46:12,563] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,563] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.164: [2024-05-13 22:46:12,563] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend hccl
+100.83.37.164: [2024-05-13 22:46:12,563] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,563] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.164: [2024-05-13 22:46:12,563] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,563] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.164: [2024-05-13 22:46:12,563] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,563] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.164: [2024-05-13 22:46:12,569] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.164: [2024-05-13 22:46:12,569] [INFO] [comm.py:637:init_distributed] cdb=None
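+The repeated `fatal: detected dubious ownership` messages come from git refusing to read a repository owned by a different user inside the container; the run continues, but Megatron then reports `git_hash=unknown git_branch=unknown`. The log itself prints the workaround, which can be applied once per node before launching:
+```bash
+git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
+```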
+100.83.37.165: --------------------------------------------------
+100.83.37.165: DeepSpeed C++/CUDA extension op report
+100.83.37.165: --------------------------------------------------
+100.83.37.165: NOTE: Ops not installed will be just-in-time (JIT) compiled at
+100.83.37.165: runtime if needed. Op compatibility means that your system
+100.83.37.165: meet the required dependencies to JIT install the op.
+100.83.37.165: --------------------------------------------------
+100.83.37.165: JIT compiled ops requires ninja
+100.83.37.165: ninja .................. [OKAY]
+100.83.37.165: --------------------------------------------------
+100.83.37.165: op name ................ installed .. compatible
+100.83.37.165: --------------------------------------------------
+100.83.37.165: cpu_adam ............... [NO] ....... [OKAY]
+100.83.37.165: fused_adam ............. [NO] ....... [OKAY]
+100.83.37.165: deepspeed_not_implemented [NO] ....... [OKAY]
+100.83.37.165: transformer_inference .. [NO] ....... [OKAY]
+100.83.37.165: --------------------------------------------------
+100.83.37.165: DeepSpeed general environment info:
+100.83.37.165: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
+100.83.37.165: torch version .................... 2.1.1a0+gitb51c9f6
+100.83.37.165: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
+100.83.37.165: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
+100.83.37.165: deepspeed wheel compiled w. ...... torch 2.1
+100.83.37.165: shared memory (/dev/shm) size .... 503.75 GB
+100.83.37.165: [2024-05-13 22:46:12,574] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.165: [2024-05-13 22:46:12,574] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.165: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
+100.83.37.165: To add an exception for this directory, call:
+100.83.37.165:
+100.83.37.165: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
+100.83.37.165: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
+100.83.37.165: [2024-05-13 22:46:12,577] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.165: [2024-05-13 22:46:12,578] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.188: [2024-05-13 22:46:12,602] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.188: [2024-05-13 22:46:12,602] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.188: [2024-05-13 22:46:12,602] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
+100.83.37.188: [2024-05-13 22:46:12,602] [INFO] [comm.py:637:init_distributed] cdb=None
+100.83.37.165: --------------------------------------------------
+100.83.37.165: DeepSpeed C++/CUDA extension op report
+100.83.37.165: --------------------------------------------------
+100.83.37.165: NOTE: Ops not installed will be just-in-time (JIT) compiled at
+100.83.37.165: runtime if needed. Op compatibility means that your system
+100.83.37.165: meet the required dependencies to JIT install the op.
+100.83.37.165: --------------------------------------------------
+100.83.37.165: JIT compiled ops requires ninja
+100.83.37.165: ninja .................. [OKAY]
+100.83.37.165: --------------------------------------------------
+100.83.37.165: op name ................ installed .. compatible
+100.83.37.165: --------------------------------------------------
+100.83.37.165: cpu_adam ............... [NO] ....... [OKAY]
+100.83.37.165: fused_adam ............. [NO] ....... [OKAY]
+100.83.37.165: deepspeed_not_implemented [NO] ....... [OKAY]
+100.83.37.165: transformer_inference .. [NO] ....... [OKAY]
+100.83.37.165: --------------------------------------------------
+100.83.37.165: DeepSpeed general environment info:
+100.83.37.165: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
+100.83.37.165: torch version .................... 2.1.1a0+gitb51c9f6
+100.83.37.165: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
+100.83.37.165: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
+100.83.37.165: deepspeed wheel compiled w. ...... torch 2.1
+100.83.37.165: shared memory (/dev/shm) size .... 
503.75 GB +100.83.37.165: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed' +100.83.37.165: To add an exception for this directory, call: +100.83.37.165: +100.83.37.165: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed +100.83.37.165: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** +100.83.37.175: [2024-05-13 22:46:12,617] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.175: [2024-05-13 22:46:12,617] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.175: [2024-05-13 22:46:12,617] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.175: [2024-05-13 22:46:12,617] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.175: [2024-05-13 22:46:12,617] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.175: [2024-05-13 22:46:12,617] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.165: [2024-05-13 22:46:12,624] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.165: [2024-05-13 22:46:12,624] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.175: [2024-05-13 22:46:12,635] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.175: [2024-05-13 22:46:12,635] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.175: [2024-05-13 22:46:12,635] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.175: [2024-05-13 22:46:12,635] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.175: [2024-05-13 22:46:12,663] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.175: [2024-05-13 22:46:12,663] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.188: [2024-05-13 22:46:12,699] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.188: [2024-05-13 22:46:12,699] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.188: [2024-05-13 22:46:12,700] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.188: [2024-05-13 22:46:12,700] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.188: [2024-05-13 22:46:12,700] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.188: [2024-05-13 22:46:12,700] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.175: [2024-05-13 22:46:12,703] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.175: [2024-05-13 22:46:12,703] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.188: [2024-05-13 22:46:12,707] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.188: [2024-05-13 22:46:12,707] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.188: [2024-05-13 22:46:12,715] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.188: [2024-05-13 22:46:12,715] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.188: [2024-05-13 22:46:12,717] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.188: [2024-05-13 22:46:12,717] [INFO] 
[comm.py:637:init_distributed] cdb=None +100.83.37.165: [2024-05-13 22:46:12,734] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.165: [2024-05-13 22:46:12,734] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.165: [2024-05-13 22:46:12,755] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.165: [2024-05-13 22:46:12,755] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.165: [2024-05-13 22:46:12,756] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.165: [2024-05-13 22:46:12,756] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.165: [2024-05-13 22:46:12,758] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.165: [2024-05-13 22:46:12,759] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.165: [2024-05-13 22:46:12,771] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented +100.83.37.165: [2024-05-13 22:46:12,771] [INFO] [comm.py:637:init_distributed] cdb=None +100.83.37.175: -------------------------------------------------- +100.83.37.175: DeepSpeed C++/CUDA extension op report +100.83.37.175: -------------------------------------------------- +100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at +100.83.37.175: runtime if needed. Op compatibility means that your system +100.83.37.175: meet the required dependencies to JIT install the op. +100.83.37.175: -------------------------------------------------- +100.83.37.175: JIT compiled ops requires ninja +100.83.37.175: ninja .................. [OKAY] +100.83.37.175: -------------------------------------------------- +100.83.37.175: op name ................ installed .. compatible +100.83.37.175: -------------------------------------------------- +100.83.37.175: cpu_adam ............... [NO] ....... [OKAY] +100.83.37.175: fused_adam ............. [NO] ....... [OKAY] +100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY] +100.83.37.175: transformer_inference .. [NO] ....... [OKAY] +100.83.37.175: -------------------------------------------------- +100.83.37.175: DeepSpeed general environment info: +100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] +100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6 +100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] +100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 +100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1 +100.83.37.175: shared memory (/dev/shm) size .... 
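The op report and environment blocks above are DeepSpeed's standard self-report, printed once per rank. If it helps to re-check a single node outside the launcher, a minimal sketch is below; it assumes the `ds_report` console script (part of the standard deepspeed distribution) is present in this 0.12.4+hpu.synapse.v1.14.0 build.

```bash
# Sketch: reproduce the per-node op/environment report shown above.
# Run inside the same container/venv that provides the deepspeed package.
ds_report
```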
+100.83.37.164: > initialized tensor model parallel with size 2
+100.83.37.164: > initialized pipeline model parallel with size 2
+100.83.37.164: > setting random seeds to 1234 ...
+100.83.37.164: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
+100.83.37.164: >fused kernel is only supported in cuda, skip loading fused kernel
+100.83.37.164: git root error: Cmd('git') failed due to: exit code(128)
+100.83.37.164: cmdline: git rev-parse --show-toplevel
+100.83.37.164: stderr: 'fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
+100.83.37.164: To add an exception for this directory, call:
+100.83.37.164:
+100.83.37.164: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
[... the fused-kernel notice and the same git-root error block repeat once per rank on 100.83.37.164, 100.83.37.165, 100.83.37.175 and 100.83.37.188 ...]
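All of the `exit code(128)` failures above come from git's dubious-ownership check on the bind-mounted checkout, and the log itself prints the remedy. A minimal sketch of applying it on every node before the next launch is below; the host list is taken from the log, while ssh reachability of each worker is an assumption of this sketch.

```bash
# Sketch: silence GitPython's "dubious ownership" errors on all workers.
# The safe.directory command is the one suggested in the log output.
for host in 100.83.37.164 100.83.37.165 100.83.37.175 100.83.37.188; do
  ssh "$host" \
    git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
done
```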
+100.83.37.164: ============================= HABANA PT BRIDGE CONFIGURATION ===========================
+100.83.37.164: PT_HPU_LAZY_MODE = 1
+100.83.37.164: PT_RECIPE_CACHE_PATH =
+100.83.37.164: PT_CACHE_FOLDER_DELETE = 0
+100.83.37.164: PT_HPU_RECIPE_CACHE_CONFIG =
+100.83.37.164: PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
+100.83.37.164: PT_HPU_LAZY_ACC_PAR_MODE = 0
+100.83.37.164: PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
+100.83.37.164: ---------------------------: System Configuration :---------------------------
+100.83.37.164: Num CPU Cores : 152
+100.83.37.164: CPU RAM : 1056440336 KB
+100.83.37.164: ------------------------------------------------------------------------------
+100.83.37.164: > compiling dataset index builder ...
+100.83.37.164: make: Entering directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
+100.83.37.164: make: Nothing to be done for 'default'.
+100.83.37.164: make: Leaving directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
+100.83.37.164: >>> done with dataset index builder. Compilation time: 0.017 seconds
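The "HABANA PT BRIDGE CONFIGURATION" banner above reports the effective values of a few Habana runtime knobs, which are read from the environment at process start. A minimal sketch of setting them explicitly before the launcher is below; the variable names and values are taken verbatim from the banner, and whether different values suit this workload is not something the log establishes.

```bash
# Sketch: the banner reflects environment variables read at process start,
# so exporting them before launch changes what is reported (and used).
export PT_HPU_LAZY_MODE=1
export PT_HPU_LAZY_ACC_PAR_MODE=0
export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0
```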
[... further repeated fused-kernel notices and git-root error blocks from all four nodes omitted; they are identical to the ones shown above ...]
+100.83.37.188: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin
+100.83.37.188: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
+100.83.37.188: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin
[... the same wandb login lines repeat for every rank on 100.83.37.164, 100.83.37.165, 100.83.37.175 and 100.83.37.188 ...]
+100.83.37.175: wandb: Tracking run with wandb version 0.17.0
+100.83.37.175: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224618-fp5kfn51
+100.83.37.175: wandb: Run `wandb offline` to turn off syncing.
+100.83.37.175: wandb: Syncing run amber-leaf-2452
+100.83.37.175: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs
+100.83.37.175: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/fp5kfn51
[... an equivalent per-rank block follows for each remaining rank; the additional runs synced to https://wandb.ai/bharatgpt/llama_runs are distinctive-river-2453 (rl26u8sm), light-rain-2455 (0z570p02), solar-planet-2454 (fos1pin6), morning-smoke-2456 (p3zm12bf), dazzling-water-2456 (puqor9zn), balmy-blaze-2458 (j732hq0y), spring-frog-2459 (wgnfvl09), young-brook-2460 (w4ot9y9c), honest-plant-2461 (gtsx02j7), snowy-snowball-2462 (rp4qghzm), lively-rain-2462 (sdw10vkb), winter-river-2466 (qz3pl2zi), sweet-rain-2468 (qfph0ohf), ruby-field-2472 (0wnhrd74), stellar-grass-2470 (c3b6xxja), confused-salad-2464 (l409wihp), stilted-mountain-2466 (pioyqxqf), stilted-hill-2464 (znk15o7l) and clear-butterfly-2464 (925ten3y) ...]
+100.83.37.165: wandb: Tracking run with wandb version 0.17.0
+100.83.37.165: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-q5b0ymee
+100.83.37.165: wandb: Run `wandb offline` to turn off syncing.
+100.83.37.165: wandb: Tracking run with wandb version 0.17.0
+100.83.37.165: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-mka06gnr
+100.83.37.165: wandb: Run `wandb offline` to turn off syncing.
+100.83.37.165: wandb: Syncing run rural-donkey-2472 +100.83.37.165: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.165: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/q5b0ymee +100.83.37.165: wandb: Syncing run rose-frog-2472 +100.83.37.165: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.165: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/mka06gnr +100.83.37.188: wandb: Tracking run with wandb version 0.17.0 +100.83.37.188: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-01m8fbfa +100.83.37.188: wandb: Run `wandb offline` to turn off syncing. +100.83.37.188: wandb: Syncing run whole-flower-2472 +100.83.37.188: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.188: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/01m8fbfa +100.83.37.165: wandb: Tracking run with wandb version 0.17.0 +100.83.37.165: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-dbon0u84 +100.83.37.165: wandb: Run `wandb offline` to turn off syncing. +100.83.37.165: wandb: Syncing run dauntless-snow-2471 +100.83.37.165: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.165: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/dbon0u84 +100.83.37.175: wandb: Tracking run with wandb version 0.17.0 +100.83.37.175: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-3k705vg6 +100.83.37.175: wandb: Run `wandb offline` to turn off syncing. +100.83.37.175: wandb: Syncing run ethereal-gorge-2478 +100.83.37.175: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.175: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/3k705vg6 +100.83.37.175: wandb: Tracking run with wandb version 0.17.0 +100.83.37.175: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-es1kzrgg +100.83.37.175: wandb: Run `wandb offline` to turn off syncing. +100.83.37.175: wandb: Syncing run celestial-sky-2476 +100.83.37.175: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.175: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/es1kzrgg +100.83.37.188: wandb: Tracking run with wandb version 0.17.0 +100.83.37.188: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-jfmjpuu6 +100.83.37.188: wandb: Run `wandb offline` to turn off syncing. +100.83.37.188: wandb: Tracking run with wandb version 0.17.0 +100.83.37.188: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-s1d2fbbr +100.83.37.188: wandb: Run `wandb offline` to turn off syncing. 
+100.83.37.188: wandb: Syncing run playful-shadow-2475 +100.83.37.188: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.188: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/jfmjpuu6 +100.83.37.188: wandb: Syncing run zany-armadillo-2479 +100.83.37.188: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.188: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/s1d2fbbr +100.83.37.188: wandb: Tracking run with wandb version 0.17.0 +100.83.37.175: wandb: Tracking run with wandb version 0.17.0 +100.83.37.175: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-okz7bazx +100.83.37.175: wandb: Run `wandb offline` to turn off syncing. +100.83.37.188: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-irc4ppie +100.83.37.188: wandb: Run `wandb offline` to turn off syncing. +100.83.37.175: wandb: Tracking run with wandb version 0.17.0 +100.83.37.175: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-hlnlapts +100.83.37.175: wandb: Run `wandb offline` to turn off syncing. +100.83.37.188: wandb: Syncing run brisk-sun-2479 +100.83.37.188: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.188: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/irc4ppie +100.83.37.175: wandb: Syncing run feasible-mountain-2479 +100.83.37.175: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.175: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/hlnlapts +100.83.37.175: wandb: Syncing run soft-terrain-2478 +100.83.37.175: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.175: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/okz7bazx +100.83.37.175: wandb: Tracking run with wandb version 0.17.0 +100.83.37.175: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240513_224619-lj7slcgc +100.83.37.175: wandb: Run `wandb offline` to turn off syncing. 
+100.83.37.175: wandb: Syncing run cerulean-disco-2483 +100.83.37.175: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs +100.83.37.175: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/lj7slcgc diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/ensemble_classifier.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/ensemble_classifier.py new file mode 100644 index 0000000000000000000000000000000000000000..c2333b70154b5761b47bcb7cdf50e11c3d500dda --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/ensemble_classifier.py @@ -0,0 +1,149 @@ +import os +import argparse +import collections + +import numpy as np +import torch + + +def process_files(args): + all_predictions = collections.OrderedDict() + all_labels = collections.OrderedDict() + all_uid = collections.OrderedDict() + for path in args.paths: + path = os.path.join(path, args.prediction_name) + try: + data = torch.load(path) + for dataset in data: + name, d = dataset + predictions, labels, uid = d + if name not in all_predictions: + all_predictions[name] = np.array(predictions) + if args.labels is None: + args.labels = [i for i in range(all_predictions[name].shape[1])] + if args.eval: + all_labels[name] = np.array(labels) + all_uid[name] = np.array(uid) + else: + all_predictions[name] += np.array(predictions) + assert np.allclose(all_uid[name], np.array(uid)) + except Exception as e: + print(e) + continue + return all_predictions, all_labels, all_uid + + +def get_threshold(all_predictions, all_labels, one_threshold=False): + if one_threshold: + all_predictons = {'combined': np.concatenate(list(all_predictions.values()))} + all_labels = {'combined': np.concatenate(list(all_predictions.labels()))} + out_thresh = [] + for dataset in all_predictions: + preds = all_predictions[dataset] + labels = all_labels[dataset] + out_thresh.append(calc_threshold(preds, labels)) + return out_thresh + + +def calc_threshold(p, l): + trials = [(i) * (1. / 100.) 
for i in range(100)] + best_acc = float('-inf') + best_thresh = 0 + for t in trials: + acc = ((apply_threshold(p, t).argmax(-1) == l).astype(float)).mean() + if acc > best_acc: + best_acc = acc + best_thresh = t + return best_thresh + + +def apply_threshold(preds, t): + assert (np.allclose(preds.sum(-1), np.ones(preds.shape[0]))) + prob = preds[:, -1] + thresholded = (prob >= t).astype(int) + preds = np.zeros_like(preds) + preds[np.arange(len(thresholded)), thresholded.reshape(-1)] = 1 + return preds + + +def threshold_predictions(all_predictions, threshold): + if len(threshold) != len(all_predictions): + threshold = [threshold[-1]] * (len(all_predictions) - len(threshold)) + for i, dataset in enumerate(all_predictions): + thresh = threshold[i] + preds = all_predictions[dataset] + all_predictions[dataset] = apply_threshold(preds, thresh) + return all_predictions + + +def postprocess_predictions(all_predictions, all_labels, args): + for d in all_predictions: + all_predictions[d] = all_predictions[d] / len(args.paths) + + if args.calc_threshold: + args.threshold = get_threshold(all_predictions, all_labels, args.one_threshold) + print('threshold', args.threshold) + + if args.threshold is not None: + all_predictions = threshold_predictions(all_predictions, args.threshold) + + return all_predictions, all_labels + + +def write_predictions(all_predictions, all_labels, all_uid, args): + all_correct = 0 + count = 0 + for dataset in all_predictions: + preds = all_predictions[dataset] + preds = np.argmax(preds, -1) + if args.eval: + correct = (preds == all_labels[dataset]).sum() + num = len(all_labels[dataset]) + accuracy = correct / num + count += num + all_correct += correct + accuracy = (preds == all_labels[dataset]).mean() + print(accuracy) + if not os.path.exists(os.path.join(args.outdir, dataset)): + os.makedirs(os.path.join(args.outdir, dataset)) + outpath = os.path.join( + args.outdir, dataset, os.path.splitext( + args.prediction_name)[0] + '.tsv') + with open(outpath, 'w') as f: + f.write('id\tlabel\n') + f.write('\n'.join(str(uid) + '\t' + str(args.labels[p]) + for uid, p in zip(all_uid[dataset], preds.tolist()))) + if args.eval: + print(all_correct / count) + + +def ensemble_predictions(args): + all_predictions, all_labels, all_uid = process_files(args) + all_predictions, all_labels = postprocess_predictions(all_predictions, all_labels, args) + write_predictions(all_predictions, all_labels, all_uid, args) + + +def main(): + parser = argparse.ArgumentParser() + parser.add_argument('--paths', required=True, nargs='+', + help='paths to checkpoint directories used in ensemble') + parser.add_argument('--eval', action='store_true', + help='compute accuracy metrics against labels (dev set)') + parser.add_argument('--outdir', + help='directory to place ensembled predictions in') + parser.add_argument('--prediction-name', default='test_predictions.pt', + help='name of predictions in checkpoint directories') + parser.add_argument('--calc-threshold', action='store_true', + help='calculate threshold classification') + parser.add_argument('--one-threshold', action='store_true', + help='use on threshold for all subdatasets') + parser.add_argument('--threshold', nargs='+', default=None, type=float, + help='user supplied threshold for classification') + parser.add_argument('--labels', nargs='+', default=None, + help='whitespace separated list of label names') + args = parser.parse_args() + ensemble_predictions(args) + + +if __name__ == '__main__': + main() diff --git 
a/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/eval_utils.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/eval_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..968776aae9f404756d88436d61165e84f7d06c10 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/eval_utils.py @@ -0,0 +1,247 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. + +"""Evaluation utilities.""" + +import os +import time +from functools import partial + +import torch + +from megatron import get_args +from megatron import print_rank_last, is_last_rank +from megatron.core import mpu +from megatron.schedules import get_forward_backward_func +from tasks.finetune_utils import build_data_loader +from tasks.finetune_utils import process_batch +from deepspeed.accelerator import get_accelerator + + +def accuracy_func_provider(single_dataset_provider): + """Provide function that calculates accuracies.""" + args = get_args() + + # Build dataloaders. + datapaths = args.valid_data + dataloaders = [] + for datapath in datapaths: + dataset = single_dataset_provider(datapath) + dataloader = build_data_loader( + dataset, args.orig_micro_batch_size, num_workers=args.num_workers, + drop_last=(mpu.get_data_parallel_world_size() > 1)) + dataloaders.append((dataset.dataset_name, dataloader)) + + def metrics_func(model, epoch, output_predictions=False): + print_rank_last('calculating metrics ...') + correct = 0 + total = 0 + if output_predictions: + assert mpu.get_data_parallel_world_size() == 1 + named_predictions = [] + names = 'predictions' + for name, dataloader in dataloaders: + output = calculate_correct_answers(name, model, dataloader, + epoch, output_predictions) + if not output_predictions: + correct_ans, total_count = output + else: + correct_ans, total_count, predictions = output + named_predictions.append((name, predictions)) + names += '_' + name + correct += correct_ans + total += total_count + if is_last_rank(): + percent = 0 + if total > 0: + percent = float(correct) * 100.0 / float(total) + print(' >> |epoch: {}| overall: correct / total = {} / {} = ' + '{:.4f} %'.format(epoch, correct, total, percent)) + + if output_predictions and is_last_rank(): + assert args.load is not None + filename = os.path.join(args.load, names + '.pt') + torch.save(named_predictions, filename) + + return metrics_func + + +def calculate_correct_answers(name, model, dataloader, + epoch, output_predictions): + """Calculate correct over total answers and return prediction if the + `output_predictions` is true.""" + args = get_args() + forward_backward_func = get_forward_backward_func() + start_time = time.time() + for m in model: + m.eval() + saved_micro_batch_size = args.micro_batch_size + saved_global_batch_size = args.global_batch_size + + ds = dataloader.dataset + if hasattr(ds, 'sample_multiplier'): + # If our dataset as a sample_multiplier attribute that means + # each "sample" from the dataset actually has multiple samples + # that will collapse into the batch dimension (for example in + # the RACE dataset that has several options), we need to + # account for that when setting the micro batch size. 
+ sample_multiplier = ds.sample_multiplier + else: + sample_multiplier = 1 + micro_batch_size_times_data_parallel = args.orig_micro_batch_size * args.data_parallel_size + num_micro_batches = args.orig_global_batch_size // micro_batch_size_times_data_parallel + + def loss_func(output_predictions, labels, output_tensor): + args = get_args() + logits = output_tensor + + loss_dict = {} + # Add output predictions. + if output_predictions: + assert False + loss_dict['softmaxes'] = torch.nn.Softmax(dim=-1)( + logits.float()).data.cpu().numpy().tolist() + loss_dict['labels'] = labels.data.cpu().numpy().tolist() + loss_dict['ids'] = batch['uid'].cpu().numpy().tolist() + # Compute the correct answers. + if args.finetune and args.task == 'CoLA': + predicted = torch.argmax(logits, dim=-1) + loss_dict['labels'] = labels.data.cpu().numpy().tolist() + loss_dict['predicted'] = predicted.data.cpu().numpy().tolist() + elif args.finetune and args.task == 'STS-B': + predicted = torch.squeeze(logits) + loss_dict['labels'] = labels.data.cpu().numpy().tolist() + loss_dict['predicted'] = predicted.data.cpu().numpy().tolist() + else: + predicted = torch.argmax(logits, dim=-1) + corrects = (predicted == labels) + # Add to the counters. + loss_dict['total'] = labels.size(0) + loss_dict['correct'] = corrects.sum().item() + + return 0, loss_dict + + # defined inside to capture output_predictions + def correct_answers_forward_step(batch, model): + try: + batch_ = next(batch) + except BaseException: + batch_ = batch + tokens, types, labels, attention_mask = process_batch(batch_) + + # Forward model. + args = get_args() + output_tensor = model(tokens, attention_mask, tokentype_ids=types) + + return output_tensor, partial(loss_func, output_predictions, labels) + + with torch.no_grad(): + # For all the batches in the dataset. + total = 0 + correct = 0 + labels = [] + predicted = [] + if output_predictions: + # This option is only possible when data parallel size is 1. + assert mpu.get_data_parallel_world_size() == 1 + softmaxes = [] + labels = [] + ids = [] + for _, batch in enumerate(dataloader): + # For evaluation only mode we use drop_last = False to get all the + # samples, which means we might not have a full batch, so we + # adjust batch_size here to actual batch size of data + actual_batch_size = len(batch['label']) + # ... applying sample_multiplier if necessary + args.micro_batch_size = actual_batch_size * sample_multiplier + args.global_batch_size = actual_batch_size * sample_multiplier * num_micro_batches + + loss_dicts = forward_backward_func(correct_answers_forward_step, batch, model, + optimizer=None, timers=None, forward_only=True) + + for loss_dict in loss_dicts: + if output_predictions: + softmaxes.extend(loss_dict['softmaxes']) + labels.extend(loss_dict['labels']) + ids.extend(loss_dict['ids']) + if args.finetune and args.task in ['CoLA', 'STS-B']: + labels.extend(loss_dict['labels']) + predicted.extend(loss_dict['predicted']) + else: + total += loss_dict['total'] + correct += loss_dict['correct'] + + + for m in model: + m.train() + args.micro_batch_size = saved_micro_batch_size + args.global_batch_size = saved_global_batch_size + + # Reduce. 
+ if mpu.is_pipeline_last_stage(): + if args.finetune and args.task in ['CoLA', 'STS-B']: + if args.task == 'CoLA': + labels = get_accelerator().LongTensor(labels) + predicted = get_accelerator().LongTensor(predicted) + labels_gather = [torch.zeros(len(labels), dtype=torch.long, + device=labels.device) for _ in range(mpu.get_data_parallel_world_size())] + predicted_gather = [torch.zeros(len(predicted), dtype=torch.long, + device=predicted.device) for _ in range(mpu.get_data_parallel_world_size())] + else: + labels = get_accelerator().FloatTensor(labels) + predicted = get_accelerator().FloatTensor(predicted) + labels_gather = [torch.zeros(len(labels), dtype=torch.float, + device=labels.device) for _ in range(mpu.get_data_parallel_world_size())] + predicted_gather = [torch.zeros(len(predicted), dtype=torch.float, + device=predicted.device) for _ in range(mpu.get_data_parallel_world_size())] + torch.distributed.all_gather(labels_gather, labels, + group=mpu.get_data_parallel_group()) + torch.distributed.all_gather(predicted_gather, predicted, + group=mpu.get_data_parallel_group()) + + labels_gather = sum([x.data.cpu().numpy().tolist() for x in labels_gather], []) + predicted_gather = sum([x.data.cpu().numpy().tolist() for x in predicted_gather], []) + + # Print on screen. + if args.task == 'CoLA': + from sklearn.metrics import matthews_corrcoef + mcc = matthews_corrcoef(labels_gather, predicted_gather) + elapsed_time = time.time() - start_time + print_rank_last(' > |epoch: {}| metrics for {}: mcc ' + '= {} , elapsed time (sec): {:.3f}'.format( + epoch, name, mcc, elapsed_time)) + else: + from scipy.stats import pearsonr, spearmanr + pearson_corr = pearsonr(predicted_gather, labels_gather)[0] + spearman_corr = spearmanr(predicted_gather, labels_gather)[0] + corr = (pearson_corr + spearman_corr) / 2 + elapsed_time = time.time() - start_time + print_rank_last(' > |epoch: {}| metrics for {}: pearson ' + '= {} spearmanr = {} corr = {} elapsed time (sec): {:.3f}'.format( + epoch, name, pearson_corr, spearman_corr, + corr, elapsed_time)) + + if output_predictions: + return 0, 0, () + return 0, 0 + else: + unreduced = get_accelerator().LongTensor([correct, total]) + torch.distributed.all_reduce(unreduced, + group=mpu.get_data_parallel_group()) + + # Print on screen. + + correct_ans = unreduced[0].item() + total_count = unreduced[1].item() + percent = float(correct_ans) * 100.0 / float(total_count) + elapsed_time = time.time() - start_time + print_rank_last(' > |epoch: {}| metrics for {}: correct / total ' + '= {} / {} = {:.4f} %, elapsed time (sec): {:.3f}'.format( + epoch, name, correct_ans, total_count, + percent, elapsed_time)) + + if output_predictions: + return correct_ans, total_count, (softmaxes, labels, ids) + return correct_ans, total_count + if output_predictions: + return 0, 0, () + return 0, 0 diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/finetune_utils.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/finetune_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..b73bfe93a16137101427680bb56fe042fa8503bc --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/finetune_utils.py @@ -0,0 +1,351 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. 
+ +"""Finetune utilities.""" + +from functools import partial +import sys +import torch + +from megatron import get_args, get_num_microbatches +from megatron import print_rank_0 +from megatron import get_timers +from megatron.core import mpu +from megatron.core.enums import ModelType +from megatron.checkpointing import load_checkpoint +from megatron.checkpointing import save_checkpoint +from megatron.training import evaluate_and_print_results +from megatron.training import setup_model_and_optimizer +from megatron.training import train_step +from megatron.training import training_log +from megatron.utils import average_losses_across_data_parallel_group +from megatron.utils import calc_params_l2_norm +from megatron.utils import check_adlr_autoresume_termination +from deepspeed.accelerator import get_accelerator + +def process_batch(batch): + """Process batch and produce inputs for the model.""" + args = get_args() + + tokens = batch['text'].long().to(get_accelerator().device_name()).contiguous() + types = batch['types'].long().to(get_accelerator().device_name()).contiguous() + labels = batch['label'].long().to(get_accelerator().device_name()).contiguous() + attention_mask = batch['padding_mask'].float().to(get_accelerator().device_name()).contiguous() + if args.fp16: + attention_mask = attention_mask.half() + + return tokens, types, labels, attention_mask + + +def cross_entropy_loss_func(labels, output_tensor): + logits = output_tensor + + # Cross-entropy loss. + loss_func = torch.nn.CrossEntropyLoss() + loss = loss_func(logits.contiguous().float(), labels) + + # Reduce loss for logging. + averaged_loss = average_losses_across_data_parallel_group([loss]) + + return loss, {'lm loss': averaged_loss[0]} + + +def _cross_entropy_forward_step(batch, model): + """Simple forward step with cross-entropy loss.""" + timers = get_timers() + + # Get the batch. + timers('batch-generator', log_level=2).start() + try: + batch_ = next(batch) + except BaseException: + batch_ = batch + tokens, types, labels, attention_mask = process_batch(batch_) + timers('batch-generator').stop() + + # Forward model. + output_tensor = model(tokens, attention_mask, tokentype_ids=types) + + return output_tensor, partial(cross_entropy_loss_func, labels) + +def process_batch_mse(batch): + """Process batch and produce inputs for the model.""" + args = get_args() + + tokens = batch['text'].long().to(get_accelerator().device_name()).contiguous() + types = batch['types'].long().to(get_accelerator().device_name()).contiguous() + labels = batch['label'].float().to(get_accelerator().device_name()).contiguous() + attention_mask = batch['padding_mask'].float().to(get_accelerator().device_name()).contiguous() + if args.fp16: + attention_mask = attention_mask.half() + + return tokens, types, labels, attention_mask + +def mse_loss_func(labels, output_tensor): + logits = output_tensor + + # Cross-entropy loss. + loss_func = torch.nn.MSELoss() + loss = loss_func(logits.contiguous().float().view(-1), labels.view(-1)) + + # Reduce loss for logging. + averaged_loss = average_losses_across_data_parallel_group([loss]) + + return loss, {'lm loss': averaged_loss[0]} + +def mse_forward_step(batch, model): + """Simple forward step with cross-entropy loss.""" + timers = get_timers() + + # Get the batch. + timers('batch-generator').start() + try: + batch_ = next(batch) + except BaseException: + batch_ = batch + tokens, types, labels, attention_mask = process_batch_mse(batch_) + timers('batch-generator').stop() + + # Forward model. 
+ output_tensor = model(tokens, attention_mask, tokentype_ids=types) + + return output_tensor, partial(mse_loss_func, labels) + +def build_data_loader(dataset, micro_batch_size, num_workers, drop_last, + task_collate_fn=None): + """Data loader. Note that batch-size is the local (per GPU) batch-size.""" + + # Sampler. + world_size = mpu.get_data_parallel_world_size() + rank = mpu.get_data_parallel_rank() + sampler = torch.utils.data.distributed.DistributedSampler( + dataset, num_replicas=world_size, rank=rank) + + # Data loader. Note that batch size is the per GPU batch size. + data_loader = torch.utils.data.DataLoader(dataset, + batch_size=micro_batch_size, + sampler=sampler, + shuffle=False, + num_workers=num_workers, + drop_last=drop_last, + pin_memory=True, + collate_fn=task_collate_fn) + + return data_loader + + +def _build_infinite_size_dataloader(dataloader): + """Build a looped dataloader with infinite size.""" + + iterator = dataloader.__iter__() + while True: + try: + yield iterator.__next__() + except StopIteration: + iterator = dataloader.__iter__() + + +def _build_train_valid_dataloaders(train_dataset, valid_dataset, + task_collate_fn=None): + """Traing and validation dataloaders.""" + args = get_args() + + print_rank_0('building train and validation dataloaders ...') + # Training dataset. + train_dataloader = build_data_loader(train_dataset, args.micro_batch_size, + args.num_workers, not args.keep_last, + task_collate_fn) + # Set the training iterations. + args.train_iters_per_epoch = len(train_dataloader) + args.train_iters = args.epochs * args.train_iters_per_epoch + # Validation dataset. For this dataset, we do not need to set up + # shuffling so we can just use a simple infinite loop. + valid_dataloader_ = build_data_loader(valid_dataset, args.micro_batch_size, + args.num_workers, not args.keep_last, + task_collate_fn) + valid_dataloader = _build_infinite_size_dataloader(valid_dataloader_) + + # Now that we've built the data loaders, set batch_size arguments + # to the actual batch size the model will see for this dataset. + # This is necessary so pipeline transfers know what size they are + # and the LR schedule, which is based on samples seen, gets set + # correctly. + args.orig_micro_batch_size = args.micro_batch_size + args.orig_global_batch_size = args.global_batch_size + if hasattr(train_dataset, 'sample_multiplier'): + # If our dataset as a sample_multiplier attribute that means + # each "sample" from the dataset actually has multiple samples + # that will collapse into the batch dimension (for example in + # the RACE dataset that has several options), we need to + # account for that when setting the micro batch size. + args.micro_batch_size *= train_dataset.sample_multiplier + args.global_batch_size *= train_dataset.sample_multiplier + + return train_dataloader, valid_dataloader + + +def _train(model, optimizer, opt_param_scheduler, forward_step, + train_dataloader, valid_dataloader, end_of_epoch_callback): + """Train the model.""" + args = get_args() + timers = get_timers() + + assert get_num_microbatches() == 1, "finetuning with gradient accumulation doesn't currently work" + + # Turn on training mode which enables dropout. + for m in model: + m.train() + + # Tracking loss. + losses_dict_sum = {} + + # Starting epoch and iteration + start_epoch = args.iteration // args.train_iters_per_epoch + start_iteration = args.iteration % args.train_iters_per_epoch + iteration = args.iteration + + # Memory reporting flag. 
+ report_memory_flag = True + + # For each remaining epoch + timers('interval-time', log_level=0).start(barrier=True) + for epoch in range(start_epoch, args.epochs): + print_rank_0('working on epoch {} ...'.format(epoch + 1)) + + # Set the data loader epoch to shuffle the index iterator. + train_dataloader.sampler.set_epoch(args.seed + epoch) + + # For all the batches in the dataset. + for iteration_, batch in enumerate(train_dataloader): + + # Ignore the iterations before starting value + if iteration_ < start_iteration: + continue + # Set to zero so the next epoch does not skip any batches. + start_iteration = 0 + + # Train for one step. + out = train_step(forward_step, batch, model, optimizer, opt_param_scheduler) + + losses_dict, skipped_iter, grad_norm, num_zeros_in_grad = out + iteration += 1 + + # Logging. + params_norm = None + if args.log_params_norm: + params_norm = calc_params_l2_norm(model) + if args.deepspeed: + loss_scale = model[0].optimizer.cur_scale + else: + loss_scale = optimizer.get_loss_scale().item() + report_memory_flag = training_log(losses_dict, losses_dict_sum, + optimizer.param_groups[0]['lr'], + iteration, loss_scale, + report_memory_flag, skipped_iter, + grad_norm, params_norm, num_zeros_in_grad) + + # Autoresume + if args.adlr_autoresume and \ + (iteration % args.adlr_autoresume_interval == 0): + check_adlr_autoresume_termination(iteration, model, + optimizer, opt_param_scheduler) + + # Checkpointing + saved_checkpoint = False + if args.save and args.save_interval and \ + iteration % args.save_interval == 0: + save_checkpoint(iteration, model, optimizer, opt_param_scheduler) + saved_checkpoint = True + + # Evaluation + if args.eval_interval and iteration % args.eval_interval == 0: + prefix = 'iteration {}'.format(iteration) + evaluate_and_print_results(prefix, forward_step, + valid_dataloader, model, + iteration, None, False) + + # Exiting based on iterations + if args.exit_interval and iteration % args.exit_interval == 0: + if not saved_checkpoint: + save_checkpoint(iteration, model, optimizer, opt_param_scheduler) + torch.distributed.barrier() + print_rank_0('exiting program at iteration {}'.format(iteration)) + sys.exit() + + # Checkpointing at the end of each epoch. + if args.save: + save_checkpoint(iteration, model, optimizer, opt_param_scheduler) + + # Callback at the end of each epoch. + if end_of_epoch_callback is not None: + end_of_epoch_callback(model, epoch) + + +def finetune(train_valid_datasets_provider, model_provider, + model_type=ModelType.encoder_or_decoder, + forward_step=_cross_entropy_forward_step, + end_of_epoch_callback_provider=None, + task_collate_fn=None): + """Main finetune function used across all tasks.""" + args = get_args() + timers = get_timers() + + assert args.rampup_batch_size is None, \ + 'batch size scaling is not supported for finetuning' + + # Train and validation data loaders. + timers('train/valid/test dataset/dataloder', log_level=0).start() + if args.epochs > 0: + train_dataset, valid_dataset = train_valid_datasets_provider() + train_dataloader, valid_dataloader = _build_train_valid_dataloaders( + train_dataset, valid_dataset, task_collate_fn) + else: + args.train_iters = 0 + timers('train/valid/test dataset/dataloder').stop() + + # Build calback function. 
+ timers('callback function', log_level=0).start() + end_of_epoch_callback = None + if end_of_epoch_callback_provider is not None: + end_of_epoch_callback = end_of_epoch_callback_provider() + timers('callback function').stop() + + # Build model, optimizer and learning rate scheduler. + timers('model and optimizer', log_level=0).start() + model, optimizer, opt_param_scheduler = setup_model_and_optimizer(model_provider, model_type) + timers('model and optimizer').stop() + + # If pretrained checkpoint is provided and we have not trained for + # any iteration (i.e., iteration is zero), then load the pretrained + # checkpoint. + timers('pretrained checkpoint', log_level=0).start(barrier=True) + if args.iteration == 0 and args.pretrained_checkpoint is not None: + original_load = args.load + args.load = args.pretrained_checkpoint + original_rng = args.no_load_rng + args.no_load_rng = True + _ = load_checkpoint(model, None, None) + args.load = original_load + args.no_load_rng = original_rng + # This is critical when only model is loaded. We should make sure + # main parameters are also updated. When DeepSpeed is enabled, + # DeepSpeed engine will handle this. + if not args.deepspeed: + optimizer.reload_model_params() + timers('pretrained checkpoint').stop() + + # Print setup timing. + print_rank_0('done with setups ...') + timers.log(['train/valid/test dataset/dataloder', 'callback function', + 'model and optimizer', 'pretrained checkpoint'], barrier=True) + print_rank_0('training ...') + + # Finetune the model. + if args.epochs > 0: + _train(model, optimizer, opt_param_scheduler, forward_step, + train_dataloader, valid_dataloader, end_of_epoch_callback) + # Or just evaluate. + else: + if end_of_epoch_callback is not None: + print_rank_0('evaluation only mode, setting epoch to -1') + end_of_epoch_callback(model, epoch=-1, output_predictions=True) + print_rank_0('done :-)') diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/main.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/main.py new file mode 100644 index 0000000000000000000000000000000000000000..9bc38f5fd28ed00f546dc8d10f9a47ab4d9afb9d --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/main.py @@ -0,0 +1,102 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. + +"""Main tasks functionality.""" + +import os +import sys +sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), + os.path.pardir))) + +from megatron import get_args +from megatron.initialize import initialize_megatron + + +def get_tasks_args(parser): + """Provide extra arguments required for tasks.""" + group = parser.add_argument_group(title='tasks') + + group.add_argument('--task', type=str, required=True, + help='Task name.') + group.add_argument('--epochs', type=int, default=None, + help='Number of finetunning epochs. 
Zero results in ' + 'evaluation only.') + group.add_argument('--pretrained-checkpoint', type=str, default=None, + help='Pretrained checkpoint used for finetunning.') + group.add_argument('--keep-last', action='store_true', + help='Keep the last batch (maybe incomplete) in' + 'the data loader') + group.add_argument('--train-data', nargs='+', default=None, + help='Whitespace separated paths or corpora names ' + 'for training.') + group.add_argument('--valid-data', nargs='*', default=None, + help='path(s) to the validation data.') + group.add_argument('--overlapping-eval', type=int, default=32, + help='Sliding window for overlapping evaluation.') + group.add_argument('--strict-lambada', action='store_true', + help='Use more difficult formulation of lambada.') + # Retriever args + group.add_argument('--qa-data-dev', type=str, default=None, + help='Path to the QA dataset dev file.') + group.add_argument('--qa-data-test', type=str, default=None, + help='Path to the QA dataset test file.') + + # Faiss arguments for retriever + group.add_argument('--faiss-use-gpu', action='store_true', + help='Whether create the FaissMIPSIndex on GPU') + group.add_argument('--faiss-match', type=str, default='string', \ + choices=['regex', 'string'], help="Answer matching '\ + 'logic type") + group.add_argument('--faiss-topk-retrievals', type=int, default=100, + help='Number of blocks to use as top-k during retrieval') + + # finetune for retriever + group.add_argument('--eval-micro-batch-size', type=int, default=None, + help='Eval Batch size per model instance (local batch ' + 'size). Global batch size is local batch size ' + 'times data parallel size.') + group.add_argument('--train-with-neg', action='store_true', + help='Whether to use negative examples during model ' + 'training') + group.add_argument('--train-hard-neg', type=int, default=0, + help='Number of hard negative exmaples to use during ' + 'training') + + + # parameters for Av.rank validation method + # Following options/arguments have been taken directly from DPR codebase + group.add_argument('--val-av-rank-hard-neg', type=int, default=30, + help='Av.rank validation: how many hard negatives to' + ' take from each question pool') + group.add_argument('--val-av-rank-other-neg', type=int, default=30, + help='Av.rank validation: how many other negatives to' + ' take from each question pool') + + + return parser + + +if __name__ == '__main__': + + initialize_megatron(extra_args_provider=get_tasks_args) + + args = get_args() + + if args.num_layers_per_virtual_pipeline_stage is not None: + print("Interleaved pipeline schedule is not yet supported for downstream tasks.") + exit() + + if args.task == 'RACE': + from race.finetune import main + elif args.task in ['MNLI', 'QQP', 'QNLI', 'SST-2', 'CoLA', 'STS-B', 'MRPC', 'RTE']: + from glue.finetune import main + elif args.task in ['LAMBADA', 'WIKITEXT103']: + from zeroshot_gpt.evaluate import main + elif args.task in ['ICT-ZEROSHOT-NQ', 'RETRIEVER-EVAL']: + from orqa.evaluate_orqa import main + elif args.task in ['RET-FINETUNE-NQ']: + from orqa.supervised.finetune import main + else: + raise NotImplementedError('Task {} is not implemented.'.format( + args.task)) + + main() diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/msdp/README.md b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/msdp/README.md new file mode 100644 index 0000000000000000000000000000000000000000..27c8728eca146aea44c627a99d5f80184b6fbf84 --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/msdp/README.md 
@@ -0,0 +1,19 @@ + +# Multi-Stage Prompting for Knowledgeable Dialogue Generation + +Below we present the steps to run our multi-stage dialogue prompting (MSDP) framework. + +## Multi-Stage Dialogue Prompting + +### Data Preparation +1. Dataset Download: [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) and [Wizard of Internet](https://parl.ai/projects/sea/) +2. Data Processing: We provide the script to run the [`data processing`](../../examples/msdp/data_processing.sh) of the datatsets. + +### Stage-1: Prompting for Knowledge Generation +1. We provide the script to perform the [`first-stage prompting`](../../examples/msdp/prompt_knwl_gen.sh) for the knowledge generation. +2. We provide the [`evaluation script`](../../examples/msdp/eval_knwl_generation.sh) for the automatic evaluation (i.e., F1, BLEU, METEOR, and ROUGE-L) of the knowledge generation. + +### Stage-2: Prompting for Response Generation +1. We provide the script to [`prepare the input file`](../../examples/msdp/prep_resp_gen.sh) for the response generation (based on the previously generated knowledge file). +2. We provide the script to perform the [`second-stage prompting`](../../examples/msdp/prompt_resp_gen.sh) for the response generation. +3. We provide the [`evaluation script`](../../examples/msdp/eval_resp_generation.sh) for the automatic evaluation (i.e., F1, KF1, BLEU, METEOR, and ROUGE-L) of the response generation. diff --git a/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/msdp/preprocessing.py b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/msdp/preprocessing.py new file mode 100644 index 0000000000000000000000000000000000000000..d904c9d0d51d32a3f05b0a62199f3db0403d281b --- /dev/null +++ b/docker/intel_code/llama13b/Megatron-DeepSpeed/tasks/msdp/preprocessing.py @@ -0,0 +1,582 @@ +# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. 
+ +"""Preprocessing for Wizard of Wikipedia and Wizard of Internet datasets""" + +import torch +import argparse +from nltk import word_tokenize +from tqdm import tqdm +import numpy as np +import json + +def get_args(): + parser = argparse.ArgumentParser(description="Preprocessing") + + parser.add_argument("--func", type=str, default=None, + help="choose to run which function") + parser.add_argument("--raw_file", type=str, default=None, + help="path of the input file") + parser.add_argument("--processed_file", type=str, default=None, + help="path of the output file") + parser.add_argument("--knwl_ref_file", type=str, default=None, + help="path of the knowledge reference file") + parser.add_argument("--resp_ref_file", type=str, default=None, + help="path of the knowledge reference file") + parser.add_argument("--knwl_gen_file", type=str, default=None, + help="path of the generated knowledge file") + parser.add_argument("--test_file", type=str, default=None, + help="path of the test file") + parser.add_argument("--train_file", type=str, default=None, + help="path of the train file") + parser.add_argument("--model_file", type=str, default=None, + help="path of the model file") + parser.add_argument("--data_type", type=str, default=None, + help="data types, choose one out of three types: \ + wow_seen, wow_unseen, and woi") + parser.add_argument("--seed", type=int, default=1234, + help="random seed") + + args = parser.parse_args() + return args + + +def process_wow_dataset(raw_file, processed_file, knwl_ref_file, resp_ref_file): + """ + This is a function used for processing the wizard of wikipedia (wow) dataset + Expected processed format: + topic \t dialogue context \t golden knowledge \t golden response + """ + + # loading the raw data + print("> Loading data from %s" % raw_file) + with open(raw_file, "r") as fr: + dialog_data = json.load(fr) + + print("> Processing data ...") + fproc = open(processed_file, "w") + fknwl = open(knwl_ref_file, "w") if knwl_ref_file else None + fresp = open(resp_ref_file, "w") if resp_ref_file else None + + for i, sample in enumerate(tqdm(dialog_data)): + # get all the dialog data for a single dialog sample + dialog = sample["dialog"] + + turn_list = [] # collect the dialog history + # processing for each single dialog sample + for j, turn in enumerate(dialog): + # text of each turn + text = turn["text"] + if not (text.endswith("?") or text.endswith(".") or text.endswith("!")): + text = text + "." 
+ + if j == 0: + # first turn + turn_list.append(text) + continue + + speaker = turn["speaker"].lower() + if "wizard" in speaker: + checked_sentence = list(turn["checked_sentence"].values()) # knowledge + checked_passage = list(turn["checked_passage"].values()) # topic + + assert len(checked_sentence) <= 1 + + # get the ground truth knowledge + if len(checked_sentence) > 0: + checked_sentence = checked_sentence[0] + else: + checked_sentence = "no_passages_used" + + if len(checked_passage) == 1: + checked_passage = checked_passage[0] + else: + checked_passage = "no_passages_used" + + # get the topic + if checked_passage != "no_passages_used": + topic = checked_passage + else: + topic = sample["chosen_topic"] + + dialog_context = " [SEP] ".join(turn_list) + knowledge = checked_sentence + response = text + # add the response into the dialog history + turn_list.append(response) + + # write to the output files + fproc.write(topic + "\t" + dialog_context + "\t" + \ + knowledge + "\t" + response + "\n") + + if fknwl: + fknwl.write(knowledge + "\n") + if fresp: + # tokenize for evaluation + response = " ".join(word_tokenize(response)) + fresp.write(response + "\n") + + else: + assert "apprentice" in speaker + turn_list.append(text) + + fproc.close() + if fknwl: + fknwl.close() + if fresp: + fresp.close() + + +def process_woi_dataset(raw_file, processed_file, knwl_ref_file, resp_ref_file): + """ + This is a function used for processing the wizard of internet (woi) dataset + Expected processed format: + topic \t dialogue context \t golden knowledge \t golden response + """ + + print("> Processing %s" % raw_file) + fproc = open(processed_file, "w") + fknwl = open(knwl_ref_file, "w") if knwl_ref_file else None + fresp = open(resp_ref_file, "w") if resp_ref_file else None + + with open(raw_file, "r") as fr: + for i, line in tqdm(enumerate(fr)): + # read line by line, each line uses json format + line = line.strip() + item_dict = json.loads(line) + + # item_dict is a dictionary + # its key is the data id, and its value contains all the data content + item_dict = item_dict.values() + item_dict = list(item_dict)[0] # len(item_dict) == 1 + + # get the whole dialog data for a single dialog sample + dialog_data = item_dict['dialog_history'] + length = len(dialog_data) + + turn_list = [] # collect the dialog history + search_text = "" + for i in range(length): + item = dialog_data[i] + action = item['action'] + + if action == "Wizard => SearchAgent": + search_text = item['text'] + + elif action == "Wizard => Apprentice": + if len(turn_list) == 0: + # first turn + turn = item['text'] + turn_list.append(turn) + continue + + # get the relevant content + contents = item["context"]["contents"] + selects = item["context"]["selected_contents"] + flag = selects[0][0] + selects = selects[1:] + assert len(selects) == len(contents) + + # get the topic + if flag: + # no knowledge sentence is used for the response + topic = "no_topic" + knwl_sent = "no_passages_used" + else: + # we consider the search text as the topic + topic = search_text + # get the knowledge sentence + knwl_sent = "" + for content, select in zip(contents, selects): + content = content['content'] + assert len(content) == len(select) + for c, s in zip(content, select): + if s: + knwl_sent = c + break + + if knwl_sent == "": + # no knowledge is used for the response + topic = "no_topic" + knwl_sent = "no_passages_used" + + # get dialogue context, knowledge, and response + dialog_context = " [SEP] ".join(turn_list) + response = item['text'] + + # 
processing + topic = topic.replace("\n", "").replace("\r", \ + "").replace("\t", "") + dialog_context = dialog_context.replace("\n", "").replace("\r", \ + "").replace("\t", "") + knwl_sent = knwl_sent.replace("\n", "").replace("\r", \ + "").replace("\t", "") + response = response.replace("\n", "").replace("\r", \ + "").replace("\t", "") + + if topic != "no_topic": + # write to the ouput files + fproc.write(topic + "\t" + dialog_context + "\t" + \ + knwl_sent + "\t" + response + "\n") + if fknwl: + fknwl.write(knwl_sent + "\n") + if fresp: + # tokenize for evaluation + response = " ".join(word_tokenize(response)) + fresp.write(response + "\n") + + turn_list.append(response) + + elif action == "Apprentice => Wizard": + turn = item['text'] + turn_list.append(turn) + + else: + assert action == "SearchAgent => Wizard", \ + "Please check whether you have used the correct data!" + + fproc.close() + if fknwl: + fknwl.close() + if fresp: + fresp.close() + + +def get_database(test_datapath, train_datapath, data_type): + """Get the database by topics""" + + assert data_type in ["wow_seen", "wow_unseen", "woi"], \ + "Please input a correct data type!!" + + # get test data topic dictionary + print("> reading test data from %s" % test_datapath) + test_topics = {} + with open(test_datapath, "r") as f: + for i, line in enumerate(f): + line = line.strip() + splits = line.split("\t") + topic = splits[0] + test_topics[topic] = True + + print("> reading data from %s" % train_datapath) + train_data_by_topic = {} + dialog_data_by_topic = {} + dialog_examples = [] + with open(train_datapath, "r") as f: + for i, line in enumerate(f): + line = line.strip() + splits = line.split("\t") + topic = splits[0] + turns = splits[1].split(" [SEP] ")[-3:] + knowledge = splits[2] + response = splits[3] + # filtering data samples + if knowledge == "no_passages_used": + # when no knowledge is used + continue + if data_type != "wow_seen" and ("(" in knowledge or ")" in knowledge): + # when bracket exists in the knowledge + continue + if data_type != "wow_seen" and topic not in knowledge: + # when topic does not exist in the knowledge + continue + + # get the instance + last_turn = turns[-1] + instance = "( " + last_turn + " ) " + topic + " => " + knowledge + + # construct dialog example + dialog_example = "" + if data_type != "wow_seen": + dialog_example += "( " + topic + " ) " + for i, turn in enumerate(turns): + if i != 0: + dialog_example += " " + dialog_example += turn + + # check overlaps + if topic in test_topics: + if topic not in train_data_by_topic: + train_data_by_topic[topic] = [instance] + else: + train_data_by_topic[topic].append(instance) + + if topic not in dialog_data_by_topic: + dialog_data_by_topic[topic] = [dialog_example] + else: + dialog_data_by_topic[topic].append(dialog_example) + + else: + # filtering data samples + if len(knowledge.split()) > 20: + # knowledge is too long + continue + if knowledge.startswith("It") or knowledge.startswith("it") or \ + knowledge.startswith("This") or knowledge.startswith("this"): + continue + + # append all the data into dialogue examples list + dialog_examples.append((topic, dialog_example, instance)) + + return train_data_by_topic, dialog_data_by_topic, dialog_examples + + +emb_dict = {} +def select_prompts_based_on_similarity( + query, dialog_list, prompt_list, topic, tokenizer, encoder, topk): + """Select samples based on the similarity""" + + with torch.no_grad(): + # get the query embeddings + query_ids = tokenizer.encode(query) + query_ids = 
torch.LongTensor([query_ids]).cuda() + query_emb = encoder(input_ids=query_ids).pooler_output + query_emb = query_emb[0] + + # calculate embeddings for the samples in the database + if topic in emb_dict: + example_embeddings = emb_dict[topic] + example_embeddings = example_embeddings.cuda() + else: + for idx, example in enumerate(dialog_list): + example_ids = tokenizer.encode(example) + example_ids = torch.LongTensor([example_ids]).cuda() + example_emb = encoder(input_ids=example_ids).pooler_output + if idx == 0: + example_embeddings = example_emb + else: + example_embeddings = torch.cat( + (example_embeddings, example_emb), dim=0) + emb_dict[topic] = example_embeddings.cpu() + + # compare the similarity and select the topk samples + similarity_list = example_embeddings.matmul(query_emb) + _, indices = torch.topk(similarity_list, k=topk) + + indices = indices.tolist() + indices = indices[::-1] # reverse the order + selected_prompts = [] + for index in indices: + # index = index.item() + selected_prompts.append(prompt_list[index]) + + return selected_prompts + + +def prompt_selection_for_knowledge_generation( + test_datapath, train_datapath, model_path, output_prompt_path, data_type): + """Selecting prompts for the knowledge generation""" + + print("> Selecting prompts for the knowledge generation") + + train_data_by_topic, dialog_data_by_topic, dialog_examples = \ + get_database(test_datapath, train_datapath, data_type) + + from transformers import DPRQuestionEncoderTokenizer + print("> loading tokenizer and encoder") + tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( + 'facebook/dpr-question_encoder-single-nq-base') + encoder = torch.load(model_path).cuda() + + print("> getting dialog embeddings") + with torch.no_grad(): + for idx, example in tqdm(enumerate(dialog_examples)): + dialog = example[1] + dialog_ids = tokenizer.encode(dialog) + dialog_ids = torch.LongTensor([dialog_ids]).cuda() + dialog_emb = encoder(input_ids=dialog_ids).pooler_output + + if idx == 0: + dialog_embeddings = dialog_emb + else: + dialog_embeddings = torch.cat((dialog_embeddings, dialog_emb), dim=0) + + print("> reading test data from %s" % test_datapath) + prompt_list_for_each_sample = [] + with open(test_datapath, "r") as f: + for i, line in tqdm(enumerate(f)): + line = line.strip() + + splits = line.split("\t") + topic = splits[0] + turns = splits[1].split(" [SEP] ")[-3:] + + # get the query sentence + query_sent = "" + if data_type != "seen": + query_sent += "( " + topic + " ) " + for i, turn in enumerate(turns): + if i != 0: + query_sent += " " + query_sent += turn + + if topic not in train_data_by_topic: + # get the query embedding + query_ids = tokenizer.encode(query_sent) + query_ids = torch.LongTensor([query_ids]).cuda() + query_emb = encoder(input_ids=query_ids).pooler_output + query_emb = query_emb[0] + + # calculate the similarity + similarity_list = dialog_embeddings.matmul(query_emb) + _, indices = torch.sort(similarity_list) + indices = indices.tolist() + selected_topics = {} + selected_prompts = [] + num_prompt = 0 + for index in indices: + example = dialog_examples[index] + topic_temp = example[0] + if topic_temp not in selected_topics: + selected_topics[topic_temp] = True + selected_prompts.append(example[2]) + num_prompt += 1 + if num_prompt == 10: + break + + # get the selected samples + example_list = selected_prompts[::-1] + key = topic + " " + turns[-1] + prompt_list_for_each_sample.append({key: example_list}) + + else: + num_data_sample = min(len(train_data_by_topic[topic]), 10) + 
total_example_list = train_data_by_topic[topic] + + dialog_list = dialog_data_by_topic[topic] + assert len(dialog_list) == len(train_data_by_topic[topic]) + + # calculate the similarity + example_list = select_prompts_based_on_similarity( + query_sent, dialog_list, total_example_list, + topic, tokenizer, encoder, topk=num_data_sample) + + key = topic + " " + turns[-1] + prompt_list_for_each_sample.append({key: example_list}) + + print("writing to %s" % output_prompt_path) + with open(output_prompt_path, "w") as f: + for instance in tqdm(prompt_list_for_each_sample): + json.dump(instance, f) + f.write("\n") + + +def prompt_selection_for_response_generation(input_path, output_path, seed): + """Selecting prompts for the response generation""" + + print("> Selecting prompts for the response generation") + print("> set random seed") + np.random.seed(seed) + + prompt_example_list = [] + print("> reading data from %s" % input_path) + with open(input_path, "r") as f: + for i, line in tqdm(enumerate(f)): + line = line.strip() + splits = line.split("\t") + + # get the topic, context, knowledge and response + topic = splits[0] + dialog_context = splits[1] + knowledge = splits[2] + response = splits[3] + turns = dialog_context.split(" [SEP] ")[-3:] + if knowledge == "no_passages_used": + continue + + # calculate the overlap ratio + from nltk import word_tokenize + knowledge_sent_token_list = word_tokenize(knowledge) + knowledge_sent_token_dict = {token: True for token in knowledge_sent_token_list} + knowledge_len = len(knowledge_sent_token_list) + response_token_list = word_tokenize(response) + response_len = len(response_token_list) + num_overlap_token = 0 + accumulator = 0 + for token in response_token_list: + if token in knowledge_sent_token_dict: + accumulator += 1 + else: + if accumulator >= 10: + num_overlap_token += accumulator + accumulator = 0 + if accumulator >= 10: + num_overlap_token += accumulator + + # filtering the data based on the ratio + if num_overlap_token > response_len * 0.9 or num_overlap_token < response_len * 0.6: + continue + if num_overlap_token < knowledge_len * 0.8: + continue + + last_turn = " ".join(word_tokenize(turns[-1])) + knowledge = " ".join(word_tokenize(knowledge)) + response = " ".join(word_tokenize(response)) + prompt_example = "" + # add dialog context + prompt_example += "Topic: " + topic + ". 
" + prompt_example += "User says: " + last_turn + " " + prompt_example += "We know that: " + knowledge + " " + prompt_example += "System replies: " + response + + prompt_example_list.append(prompt_example) + + # shuffle the prompt examples + np.random.shuffle(prompt_example_list) + + print("> writing to %s" % output_path) + with open(output_path, "w") as f: + # f.write("Generate the System's response based on the knowledge sentence:\n") + for i in tqdm(range(20)): + example = prompt_example_list[i] + f.write(example + "\n") + + +def prepare_input_for_response_generation(test_file, knwl_gen_file, processed_file): + """Preparing inputs for the response generation""" + + print("> Reading knowledge file from %s" % knwl_gen_file) + # get the knowledge list + with open(knwl_gen_file, "r") as f: + knowledge_list = f.readlines() + + print("> Processing ...") + with open(test_file, "r") as fr: + with open(processed_file, "w") as fw: + for line_num, line in enumerate(tqdm(fr)): + line = line.strip() + splits = line.split("\t") + # prepare topic, context, knowledge and response + topic = splits[0] + dialog_context = splits[1] + response = splits[3] + knowledge = knowledge_list[line_num] + knowledge = knowledge.strip() + if "<|endoftext|>" in knowledge: + knowledge = knowledge.replace("<|endoftext|>", "") + + # write to the output file + fw.write(topic + "\t" + dialog_context + "\t" \ + + knowledge + "\t" + response + "\n") + + +if __name__ == "__main__": + + args = get_args() + if args.func == "process_wow_dataset": + process_wow_dataset(args.raw_file, args.processed_file, args.knwl_ref_file, args.resp_ref_file) + + elif args.func == "process_woi_dataset": + process_woi_dataset(args.raw_file, args.processed_file, args.knwl_ref_file, args.resp_ref_file) + + elif args.func == "get_knwl_gen_prompts": + prompt_selection_for_knowledge_generation( + args.test_file, args.train_file, args.model_file, + args.processed_file, args.data_type) + + elif args.func == "get_resp_gen_prompts": + prompt_selection_for_response_generation( + args.train_file, args.processed_file, args.seed) + + elif args.func == "prepare_input": + prepare_input_for_response_generation( + args.test_file, args.knwl_gen_file, args.processed_file) diff --git a/docker/intel_code/llama13b/README.md b/docker/intel_code/llama13b/README.md new file mode 100644 index 0000000000000000000000000000000000000000..6b6e4d009ef5eea9913d1d24126b1cb34e9c6971 --- /dev/null +++ b/docker/intel_code/llama13b/README.md @@ -0,0 +1,48 @@ +# Docker Compose Example (GPT3 175B) + +A Docker compose example for GPT3 175B. To enable passwordless ssh, the `docker-compose.yml` mounts the `ssh` directory +that contains a pre-generated ssh rsa key. Copying this same key to all running containers in the cluster allows +passwordless ssh between containers. + +Example yaml configuration for GPT3 175B. It builds the docker image based off the dockerfile and mounts necessary +models and datasets. + +``` [yaml] +version: '3.3' + +services: + gpt3: + image: gpt3-pytorch-installer-1.13.1 + build: + context: . 
+ network: host + args: + BASE_IMAGE: "${BASE_IMAGE}" + container_name: gpt3_demo + runtime: habana + environment: + - HABANA_VISIBLE_DEVICES=all + - OMPI_MCA_btl_vader_single_copy_mechanism=none + - MODEL_GARDEN_ROOT=/model_garden/ + cap_add: + - SYS_NICE + ipc: host + network_mode: host + working_dir: /launch + volumes: + - ./ssh:/root/.ssh/ + - ./launch:/launch + - ./launch/.deepspeed_env:/root/.deepspeed_env + - /data/preprocessed_c4_spm:/software/data/datasets/c4_mlperf_19_12_2022/preprocessed_c4_spm + - /data/model_garden:/model_garden + tty: true + +``` + +## Example Launch Scripts + +I have constructed a few examples in the `launch` folder for hccl_demo, and PyTorch MNIST. To run these +scripts, download the necessary repos (hccl_demo, model_garden, deepspeed-fork, etc...) + + + diff --git a/docker/intel_code/llama13b/docker-compose-tensorboard.yml b/docker/intel_code/llama13b/docker-compose-tensorboard.yml new file mode 100644 index 0000000000000000000000000000000000000000..c1cffee8f2dbca00e482d447cad55b088432a2b7 --- /dev/null +++ b/docker/intel_code/llama13b/docker-compose-tensorboard.yml @@ -0,0 +1,18 @@ +version: '3.3' + +services: + gpt3: + image: vault.habana.ai/gaudi-docker/1.8.0/ubuntu20.04/habanalabs/pytorch-installer-1.13.0:latest + container_name: gpt3-pt-tensorboard + command: "tensorboard --logdir /tensorboard" + environment: + - HABANA_VISIBLE_DEVICES=all + - OMPI_MCA_btl_vader_single_copy_mechanism=none + cap_add: + - SYS_NICE + ipc: host + network_mode: host + volumes: + - /software/ltran/G2/docker/gpt3/model_garden/internal/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/scripts/qa/ds_z0_nl96_hs12288_nh96_gb1536_mb1_D1_T8_P8_DEVICES64/tensorboard:/tensorboard + + diff --git a/docker/intel_code/llama13b/docker-compose.yml b/docker/intel_code/llama13b/docker-compose.yml new file mode 100644 index 0000000000000000000000000000000000000000..40241e419ff545dee9cb88c1168c5592610e4236 --- /dev/null +++ b/docker/intel_code/llama13b/docker-compose.yml @@ -0,0 +1,32 @@ +version: '3.3' + +services: + bloom13b: + image: llama-1.15.0 + build: + context: . + network: host + args: + BASE_IMAGE: "${BASE_IMAGE}" + container_name: llama_demo + privileged: true + runtime: habana + environment: + - HABANA_VISIBLE_DEVICES=all + - OMPI_MCA_btl_vader_single_copy_mechanism=none + - PYTHON=/usr/bin/python3 + - MEGATRON_DEEPSPEED_ROOT=/Megatron-DeepSpeed + - PYTHONPATH=/Megatron-DeepSpeed:/usr/lib/habanalabs/:$PYTHONPATH + cap_add: + - SYS_NICE + ipc: host + network_mode: host + working_dir: /Megatron-DeepSpeed + volumes: + - ./ssh:/root/.ssh/ + - ./launch:/launch + - ./launch/.deepspeed_env:/root/.deepspeed_env + - /mnt/weka/peacock/:/data/ + - ./Megatron-DeepSpeed:/Megatron-DeepSpeed + - /etc/habanalabs/:/etc/habanalabs/ + tty: true diff --git a/docker/intel_code/llama13b/docker-compose_old.yml b/docker/intel_code/llama13b/docker-compose_old.yml new file mode 100644 index 0000000000000000000000000000000000000000..61bf832429db324666f6580584908fb8ca2a32ad --- /dev/null +++ b/docker/intel_code/llama13b/docker-compose_old.yml @@ -0,0 +1,32 @@ +version: '3.3' + +services: + bloom13b: + image: llama13b-pytorch-installer-1.14.0 + build: + context: . 
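+      # the image is built from the local build context; BASE_IMAGE below is
+      # forwarded to the dockerfile as a build argument, and network: host lets
+      # the build step use the host network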
+ network: host + args: + BASE_IMAGE: "${BASE_IMAGE}" + container_name: llama_demo + privileged: true + runtime: habana + environment: + - HABANA_VISIBLE_DEVICES=all + - OMPI_MCA_btl_vader_single_copy_mechanism=none + - PYTHON=/usr/bin/python3 + - MODEL_REFERENCES_ROOT=/Model-References + - PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH + cap_add: + - SYS_NICE + ipc: host + network_mode: host + working_dir: /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed + volumes: + - ./ssh:/root/.ssh/ + - ./launch:/launch + - ./launch/.deepspeed_env:/root/.deepspeed_env + - /mnt/weka/peacock/:/data/ + - ./Model-References:/Model-References + - /etc/habanalabs/:/etc/habanalabs/ + tty: true diff --git a/docker/intel_code/llama13b/dockerfile b/docker/intel_code/llama13b/dockerfile new file mode 100644 index 0000000000000000000000000000000000000000..684be9bdd755c240c054cdf59839dbedeff1e946 --- /dev/null +++ b/docker/intel_code/llama13b/dockerfile @@ -0,0 +1,33 @@ +ARG BASE_IMAGE +FROM ${BASE_IMAGE} + +RUN wget -q --output-document - https://raw.githubusercontent.com/HabanaAI/Megatron-DeepSpeed/main/megatron/core/requirements.txt | grep -v "^-e" > /tmp/requirements.txt + +RUN pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.15.1 +RUN apt-get update -y && \ + apt install screen -y && \ + pip install transformers && \ + pip install -r /tmp/requirements.txt && \ + pip install wandb && \ + pip install git+https://github.com/polisettyvarma/lm-evaluation-harness.git@lm_harness_fixes && \ + apt-get install pdsh tmux -y + +RUN mkdir ~/.ssh && \ +cd ~/.ssh && \ +sed -i 's/#Port 22/Port 3122/g' /etc/ssh/sshd_config && \ +sed -i 's/# Port 22/ Port 3122/g' /etc/ssh/ssh_config && \ +sed -i 's/3022/3122/g' ~/.bashrc && \ +echo "Host *" >> ~/.ssh/config && \ +echo "ForwardAgent yes" >> ~/.ssh/config && \ +echo "StrictHostKeyChecking no" >> ~/.ssh/config && \ +echo "UserKnownHostsFile /dev/null" >> ~/.ssh/config && \ +echo "LogLevel ERROR" >> ~/.ssh/config && \ +chmod 600 ~/.ssh/config + + + + + + + + diff --git a/docker/intel_code/llama13b/run_tokenizer.sh b/docker/intel_code/llama13b/run_tokenizer.sh new file mode 100644 index 0000000000000000000000000000000000000000..2c265fa2f26351dfe54ac0944b7f9ca72677323e --- /dev/null +++ b/docker/intel_code/llama13b/run_tokenizer.sh @@ -0,0 +1,68 @@ +DIR="/sml1/datasets/slimpj/hub/datasets--MBZUAI-LLM--SlimPajama-627B-DC/snapshots/fe5ace6d3edb8568b6a4f608a460d3f7aef7bc0b" +DATASET_NAME="RedPajamaArxiv" +TRAIN_DIR="$DIR/train/$DATASET_NAME" +TEST_DIR="$DIR/test/$DATASET_NAME" +OUTPUT_TRAIN_DIR="$DIR/train/$DATASET_NAME-copy" +OUTPUT_TEST_DIR="$DIR/test/$DATASET_NAME-copy" + +mkdir -p $OUTPUT_TEST_DIR +mkdir -p $OUTPUT_TRAIN_DIR + +cd $TRAIN_DIR +ls -lrt . | awk '{print $9,$11}' | while read a b; do cmd="cp $b $OUTPUT_TRAIN_DIR/$a"; eval $cmd;done; + +cd $TEST_DIR +ls -lrt . 
| awk '{print $9,$11}' | while read a b; do cmd="cp $b $OUTPUT_TEST_DIR/$a"; eval $cmd;done; +cd - + +FINAL_DIR="/sml1/datasets/$DATASET_NAME/" +mkdir -p $FINAL_DIR +mkdir -p $FINAL_DIR/train +mkdir -p $FINAL_DIR/test + +max_files=$(ls -lrt $TRAIN_DIR | wc -l) +mx=$(echo ${#max_files}) +REGEX="[0-9]" +for m in $(seq 1 $mx) +do + cmd="unzstd $OUTPUT_TRAIN_DIR/chunk_$REGEX.jsonl.zst --stdout > $FINAL_DIR/train/p_$m.jsonl" + eval $cmd + REGEX="[1-9]$REGEX" +done +final_cmd="cat $FINAL_DIR/train/p_[1-$mx].jsonl > $FINAL_DIR/train/final.jsonl" +eval $final_cmd + +max_files=$(ls -lrt $TEST_DIR | wc -l) +mx=$(echo ${#max_files}) +REGEX="[0-9]" +for m in $(seq 1 $mx) +do + cmd="unzstd $OUTPUT_TEST_DIR/chunk_$REGEX.jsonl.zst --stdout > $FINAL_DIR/test/p_$m.jsonl" + eval $cmd + REGEX="[1-9]$REGEX" +done + +final_cmd="cat $FINAL_DIR/test/p_[1-$mx].jsonl > $FINAL_DIR/test/final.jsonl" +eval $final_cmd + +cat $FINAL_DIR/*/final.jsonl > $FINAL_DIR/final.jsonl + +mkdir -p $FINAL_DIR/tokenizer/ +python3 /sml1/Megatron-LLaMA/tools/preprocess_data.py \ + --input $FINAL_DIR/final.jsonl \ + --output-prefix $FINAL_DIR/tokenizer/ \ + --vocab-file /sml1/datasets/gpt2/vocab.json \ + --merge-file /sml1/datasets/gpt2/merges.txt \ + --dataset-impl mmap --tokenizer-type GPT2BPETokenizer \ + --append-eod --workers 8 --chunk-size 50 >tokenizer.out 2>tokenizer.err + + + + + + + + + + + diff --git a/docker/intel_code/llama13b/ssh/authorized_keys b/docker/intel_code/llama13b/ssh/authorized_keys new file mode 100644 index 0000000000000000000000000000000000000000..92341e3d16c6f043fc14601c09e79bcbdd061c9a --- /dev/null +++ b/docker/intel_code/llama13b/ssh/authorized_keys @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdaIGX6WA8S8kOK2pS9AJNb/rbub2PbTJSD0eGDd9uXPst4cH0pOmwWAoPsaf/Etio0eqQq68FuS3/Zy3CzyTBHh20aEq+wPI+AAW1CEjk8PwuIeQdLqvdMTCKsX9ZBAJivkRQZeu3z/gCk1iytMa76yGgd3pq96ipdh8nsaJesfuKGHJeNY2oseWg2X+fCqhgtdcIR7WSHutUfe4Vier5lCj9ycnywrrcU5modKIuh9QQD6oUdl6Fm3f4swmQrIFZrcOy/oioawJ7+ruEAEaxnEVsBkGk3FGihGNqXeKNeze/GtcJUZTJjA49j2SeKvZ27p2squIMcZoB53vhDgDRuJSevJc962xO7+QLp+pChQSDc8YkjsVXGbA0JgBXsRKGUiVenrWZEHKD13WZYzMWOHuChdhrxOx+HmwkQ1c3HXZoPlVPANd/3Wkb7ujz/cWsHf0Ytsd6rOYjbm4MEx8CQCkfa89GL0+/3Z9xWxJuhAE+iXCXFXNqr3fc4+/c+VM= root@g2-srv90-c02l-idc diff --git a/docker/intel_code/llama13b/ssh/id_rsa b/docker/intel_code/llama13b/ssh/id_rsa new file mode 100644 index 0000000000000000000000000000000000000000..2a0e08072b3ddf944c608d226dc40453e3d8614b --- /dev/null +++ b/docker/intel_code/llama13b/ssh/id_rsa @@ -0,0 +1,38 @@ +-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn +NhAAAAAwEAAQAAAYEAnWiBl+lgPEvJDitqUvQCTW/627m9j20yUg9Hhg3fblz7LeHB9KTp +sFgKD7Gn/xLYqNHqkKuvBbkt/2ctws8kwR4dtGhKvsDyPgAFtQhI5PD8LiHkHS6r3TEwir +F/WQQCYr5EUGXrt8/4ApNYsrTGu+shoHd6aveoqXYfJ7GiXrH7ihhyXjWNqLHloNl/nwqo +YLXXCEe1kh7rVH3uFYnq+ZQo/cnJ8sK63FOZqHSiLofUEA+qFHZehZt3+LMJkKyBWa3Dsv +6IqGsCe/q7hABGsZxFbAZBpNxRooRjal3ijXs3vxrXCVGUyYwOPY9knir2du6drKriDHGa +Aed74Q4A0biUnryXPetsTu/kC6fqQoUEg3PGJI7FVxmwNCYAV7EShlIlXp61mRByg9d1mW +MzFjh7goXYa8Tsfh5sJENXNx12aD5VTwDXf91pG+7o8/3FrB39GLbHeqzmI25uDBMfAkAp +H2vPRi9Pv92fcVsSboQBPolwlxVzaq933OPv3PlTAAAFkJ9Z2omfWdqJAAAAB3NzaC1yc2 +EAAAGBAJ1ogZfpYDxLyQ4ralL0Ak1v+tu5vY9tMlIPR4YN325c+y3hwfSk6bBYCg+xp/8S +2KjR6pCrrwW5Lf9nLcLPJMEeHbRoSr7A8j4ABbUISOTw/C4h5B0uq90xMIqxf1kEAmK+RF +Bl67fP+AKTWLK0xrvrIaB3emr3qKl2Hyexol6x+4oYcl41jaix5aDZf58KqGC11whHtZIe +61R97hWJ6vmUKP3JyfLCutxTmah0oi6H1BAPqhR2XoWbd/izCZCsgVmtw7L+iKhrAnv6u4 
+QARrGcRWwGQaTcUaKEY2pd4o17N78a1wlRlMmMDj2PZJ4q9nbunayq4gxxmgHne+EOANG4 +lJ68lz3rbE7v5Aun6kKFBINzxiSOxVcZsDQmAFexEoZSJV6etZkQcoPXdZljMxY4e4KF2G +vE7H4ebCRDVzcddmg+VU8A13/daRvu6PP9xawd/Ri2x3qs5iNubgwTHwJAKR9rz0YvT7/d +n3FbEm6EAT6JcJcVc2qvd9zj79z5UwAAAAMBAAEAAAGAVVqC0y4AOhHaLu3J1LttuDHddG +IOcQSEQcz5Oq6xFjYjGakONCtscGv84K+z6fN9OmXBbLs7x723PIPlY3pRcspyzw2yYidb +89StQ5H/fO1TwWwtNsnE9ccjjEFdTZaH+KU1g+cQX3bNBBCECztNfD6u2EWRQwmSEnnzwO +FoqzKVtDc3ZPBjJTN50bO+qS3tSauws1O3GEndz84NWO6VVMpLQ/q0oAeJrclDS/4ap2KN +0ju8PSZGcOpxrpDewe1XyLzP1HmT6Kcno3nwBw2b3yr1eLQVGps3SCjhDFzTjMPoI9QWUr +BB+oxLI3FbSj/c74l4EiOtio+qDfAooUkPXeayfirBrAeZg4PKS5iD9BiZEIJ7CKJ9L6tN +naqxf7TLPFCiBJHjPJaWPhji3hQeuPl6QwCuHo/EHFKcIrZqBJzLT1LrJq7XgGpE7oSi1Q +2IMfLKIjq7Oxy4tjChKzfUBgGiYpFN+rPtzqHQzvDE6ajQorQm2FeCbsx6Gx6bZVJRAAAA +wQCvTGRI35pthr1psDllTvitJgX/1Ail0n2X3Ul/wg8SQYvPVMjMzIJQGiM3mtzEYJY0Sf +FKeuzNxJOHKmEYF2siZobh4PQPMS33ERqh/soJZ6R9T4dIGyU6KDSDwS7yPg3LkFzv6QZ9 ++07RQdeYNE8/j1mnLhmPtFb8RxemAHdH6zQJlZPfKN0zlN1KkY48njBGeTqM2ub/enDgxd +6eWvKCkv8z3/uhHcoaDXldZpBBLR/cx4gG8qcACUTultP7h30AAADBANCeZqWyUTP35RhX +DMBXaQ0nul3ytIWYkeyC2bu7AbOZwdWp8L5i5JwoKwrlss2Wg83lWqwBnjaEdnuMngUhh6 +4MzjY4aSy3WURP2EPLuegK3I8I3FpDv45Dj+5M2WqbQNXNeDvDTtbToFUJJMbjI5eUECUR +akL6Qcio0KEru4HFaOmV13WHkZbTrcMjzIS7krPCV9s4E4f8RP+GYd3qc9N7y60lcsLVKn +FEuIf/eSQbaPrTw0y/wNLVYXTH2FvpOwAAAMEAwSiZ81FqO4ZtDRFR/LV1YHXOctasdfH3 +JCSj5vEVPWIIspUvDzsLEYuynh4nCYRs5tANY0hP17REkkLfwB6T7i7APinjod4LDTzqnJ +/gA0AFRJO/WEZ/+xJpiuEwtLiNbQjOJkeNhqRkBJzRpbjFG6WR/s/ufEFYC4gs6uqB54FN +T+AUX/t8plq+KGirTxsHzUA0rJPvsJ4PNt8ZEaV6sgPKkx54YI8CrKyDwOln5gphw6oVBV +0IXbm1Eof4uu7JAAAAFnJvb3RAZzItc3J2OTAtYzAybC1pZGMBAgME +-----END OPENSSH PRIVATE KEY----- diff --git a/docker/intel_code/llama13b/ssh/id_rsa.pub b/docker/intel_code/llama13b/ssh/id_rsa.pub new file mode 100644 index 0000000000000000000000000000000000000000..92341e3d16c6f043fc14601c09e79bcbdd061c9a --- /dev/null +++ b/docker/intel_code/llama13b/ssh/id_rsa.pub @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdaIGX6WA8S8kOK2pS9AJNb/rbub2PbTJSD0eGDd9uXPst4cH0pOmwWAoPsaf/Etio0eqQq68FuS3/Zy3CzyTBHh20aEq+wPI+AAW1CEjk8PwuIeQdLqvdMTCKsX9ZBAJivkRQZeu3z/gCk1iytMa76yGgd3pq96ipdh8nsaJesfuKGHJeNY2oseWg2X+fCqhgtdcIR7WSHutUfe4Vier5lCj9ycnywrrcU5modKIuh9QQD6oUdl6Fm3f4swmQrIFZrcOy/oioawJ7+ruEAEaxnEVsBkGk3FGihGNqXeKNeze/GtcJUZTJjA49j2SeKvZ27p2squIMcZoB53vhDgDRuJSevJc962xO7+QLp+pChQSDc8YkjsVXGbA0JgBXsRKGUiVenrWZEHKD13WZYzMWOHuChdhrxOx+HmwkQ1c3HXZoPlVPANd/3Wkb7ujz/cWsHf0Ytsd6rOYjbm4MEx8CQCkfa89GL0+/3Z9xWxJuhAE+iXCXFXNqr3fc4+/c+VM= root@g2-srv90-c02l-idc diff --git a/docker/stable-diffusion/docker-compose.yml b/docker/stable-diffusion/docker-compose.yml new file mode 100644 index 0000000000000000000000000000000000000000..319082aaf3a340f4dbfc1db9f18a968df294e9d9 --- /dev/null +++ b/docker/stable-diffusion/docker-compose.yml @@ -0,0 +1,28 @@ +version: '3.3' + +services: + stable-diffusion: + image: stable-diffusion-pytorch-installer-1.13.1 + build: + context: . 
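+      # build context is this directory; BASE_IMAGE is passed through to the
+      # dockerfile's ARG BASE_IMAGE, and the build itself uses host networking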
+ network: host + args: + BASE_IMAGE: "${BASE_IMAGE}" + container_name: stable_diffusion_demo + runtime: habana + environment: + - HABANA_VISIBLE_DEVICES=all + - OMPI_MCA_btl_vader_single_copy_mechanism=none + - MODEL_GARDEN_ROOT=/model_garden/ + cap_add: + - SYS_ADMIN + ipc: host + network_mode: host + working_dir: /launch + volumes: + - ./ssh:/root/.ssh/ + - ./launch:/launch + - ${STABLE_DIFFUSION_HOST}:${STABLE_DIFFUSION_DIR}:ro + - ${MODEL_GARDEN_HOST}:/model_garden + - /etc/habanalabs/:/etc/habanalabs/:ro + tty: true diff --git a/docker/stable-diffusion/dockerfile b/docker/stable-diffusion/dockerfile new file mode 100644 index 0000000000000000000000000000000000000000..985d087535b211bb4a4bcda74b255f1a21af5cfd --- /dev/null +++ b/docker/stable-diffusion/dockerfile @@ -0,0 +1,35 @@ +ARG BASE_IMAGE +FROM ${BASE_IMAGE} + +# COPY deepspeed-fork /deepspeed-fork +# COPY /model_garden/PyTorch/generative_models/stable-diffusion-training/ /stable-diffusion-training/requirements.txt + +RUN wget -q --output-document - https://raw.githubusercontent.com/HabanaAI/Model-References/master/PyTorch/generative_models/stable-diffusion-training/requirements.txt | grep -v "^-e" > /tmp/requirements.txt + +RUN apt-get update -y && \ + pip install -r /tmp/requirements.txt && \ + apt-get install pdsh tmux -y + +RUN mkdir ~/.ssh && \ +cd ~/.ssh && \ +sed -i 's/#Port 22/Port 3022/g' /etc/ssh/sshd_config && \ +sed -i 's/# Port 22/ Port 3022/g' /etc/ssh/ssh_config && \ +echo "/etc/init.d/ssh start \"-p 3022\"" >> ~/.bashrc && \ +echo "Host *" >> ~/.ssh/config && \ +echo "ForwardAgent yes" >> ~/.ssh/config && \ +echo "StrictHostKeyChecking no" >> ~/.ssh/config && \ +echo "UserKnownHostsFile /dev/null" >> ~/.ssh/config && \ +echo "LogLevel ERROR" >> ~/.ssh/config && \ +chmod 600 ~/.ssh/config + + +# COPY launch /launch +# RUN mv /launch/.deepspeed_env ~/ + +RUN git config --global --add safe.directory /model_garden + + + + + + diff --git a/docker/stable-diffusion/launch/.deepspeed_env b/docker/stable-diffusion/launch/.deepspeed_env new file mode 100644 index 0000000000000000000000000000000000000000..634c2f6837ca104e819997cc8c5cd3057592247d --- /dev/null +++ b/docker/stable-diffusion/launch/.deepspeed_env @@ -0,0 +1,2 @@ +PYTHONPATH=/model_garden:$PYTHONPATH +MODEL_GARDEN_ROOT=/model_garden/ \ No newline at end of file diff --git a/docker/stable-diffusion/launch/hostsfile b/docker/stable-diffusion/launch/hostsfile new file mode 100644 index 0000000000000000000000000000000000000000..d2f9c95d26afe0cd97b8d99abc58620615a9bf15 --- /dev/null +++ b/docker/stable-diffusion/launch/hostsfile @@ -0,0 +1,8 @@ +g2-srv89-c02l-idc +g2-srv90-c02l-idc +g2-srv91-c02l-idc +g2-srv92-c02l-idc +g2h-srv93-c02l-idc +g2h-srv94-c02l-idc +g2h-srv95-c02l-idc +g2h-srv96-c02l-idc diff --git a/docker/stable-diffusion/launch/run_sdt.sh b/docker/stable-diffusion/launch/run_sdt.sh new file mode 100644 index 0000000000000000000000000000000000000000..0d195ecf31aedd179953f344d76a86fe2e59480c --- /dev/null +++ b/docker/stable-diffusion/launch/run_sdt.sh @@ -0,0 +1,22 @@ +#!/bin/bash + +RUN_SH=`pwd`/sdt.sh +# Set TRAIN_BATCHES=1000 to add arg "--limit_train_batches 1000" mentioned in +# https://github.com/HabanaAI/Model-References/tree/master/PyTorch/generative_models/stable-diffusion-training#multi-server-training-examples +# It will run full epoch if TRAIN_BATCHES is not set +MPI_MODEL_ENV_VARS=" -x TRAIN_BATCHES=1000 " +CMD="mpirun \ + --tag-output \ + --allow-run-as-root \ + --bind-to none \ + --report-bindings \ + --npernode 1 \ + --hostfile hostsfile 
\ + -x MASTER_ADDR=$(head -n 1 hostsfile) + -x LD_PRELOAD=${LD_PRELOAD} \ + -x MODEL_GARDEN_ROOT $MPI_MODEL_ENV_VARS \ + ${RUN_SH};" + +echo $CMD +eval $CMD + diff --git a/docker/stable-diffusion/launch/sdt.sh b/docker/stable-diffusion/launch/sdt.sh new file mode 100644 index 0000000000000000000000000000000000000000..8db197befc010fcc198cfc462cee3c9d87cae574 --- /dev/null +++ b/docker/stable-diffusion/launch/sdt.sh @@ -0,0 +1,55 @@ +#!/bin/bash + +cd ${MODEL_GARDEN_ROOT}/PyTorch/generative_models/stable-diffusion-training/ + +git config --global --add safe.directory `pwd`/src/taming-transformers +git config --global --add safe.directory `pwd`/src/clip +pip install -r requirements.txt + +export PYTHONPATH=`pwd`/src/taming-transformers:$PYTHONPATH +export CKPT_PATH=/software/lfs/data/pytorch/stable-diffusion/model.ckpt + +if [ -z ${OMPI_COMM_WORLD_SIZE} ]; then WORLD_SIZE=${WORLD_SIZE:-1}; else WORLD_SIZE=${OMPI_COMM_WORLD_SIZE}; fi +if [ -z ${OMPI_COMM_WORLD_RANK} ]; then NODE_RANK=${NODE_RANK:-0}; else NODE_RANK=${OMPI_COMM_WORLD_RANK}; fi + +hostname + +ulimit -n $(ulimit -aH|grep "open file" |tr -s ' '|cut -d ' ' -f 4) +echo ===SOFT LIMIT=== +ulimit -a +echo ===HARD LIMIT=== +ulimit -aH +echo ========== +export NODE_RANK=${NODE_RANK} +BATCH_SIZE=${BATCH_SIZE:-8} +HPU_GRAPH=${HPU_GRAPH:-True} +TRAIN_EPOCHS=${TRAIN_EPOCHS:-10} + +if [ ! -z ${TRAIN_BATCHES} ]; then + LIMIT_TRAIN_BATCHES="--limit_train_batches ${TRAIN_BATCHES}" +fi + +for v in $(printenv |grep OMPI | cut -d '=' -f 1); do + unset $v + echo unset $v +done + +CMD="python main.py \ + --base hpu_config_web_dataset.yaml \ + --train \ + --scale_lr False \ + --seed 0 \ + --hpus 8 \ + --batch_size ${BATCH_SIZE} \ + --use_lazy_mode True \ + --hmp \ + --no-test True \ + --max_epochs ${TRAIN_EPOCHS} \ + ${LIMIT_TRAIN_BATCHES} \ + --limit_val_batches 0 \ + --hpu_graph ${HPU_GRAPH} \ + --ckpt_path=${CKPT_PATH} \ + --num_nodes ${WORLD_SIZE}" + +echo $CMD +eval $CMD diff --git a/docker/stable-diffusion/ssh/authorized_keys b/docker/stable-diffusion/ssh/authorized_keys new file mode 100644 index 0000000000000000000000000000000000000000..92341e3d16c6f043fc14601c09e79bcbdd061c9a --- /dev/null +++ b/docker/stable-diffusion/ssh/authorized_keys @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdaIGX6WA8S8kOK2pS9AJNb/rbub2PbTJSD0eGDd9uXPst4cH0pOmwWAoPsaf/Etio0eqQq68FuS3/Zy3CzyTBHh20aEq+wPI+AAW1CEjk8PwuIeQdLqvdMTCKsX9ZBAJivkRQZeu3z/gCk1iytMa76yGgd3pq96ipdh8nsaJesfuKGHJeNY2oseWg2X+fCqhgtdcIR7WSHutUfe4Vier5lCj9ycnywrrcU5modKIuh9QQD6oUdl6Fm3f4swmQrIFZrcOy/oioawJ7+ruEAEaxnEVsBkGk3FGihGNqXeKNeze/GtcJUZTJjA49j2SeKvZ27p2squIMcZoB53vhDgDRuJSevJc962xO7+QLp+pChQSDc8YkjsVXGbA0JgBXsRKGUiVenrWZEHKD13WZYzMWOHuChdhrxOx+HmwkQ1c3HXZoPlVPANd/3Wkb7ujz/cWsHf0Ytsd6rOYjbm4MEx8CQCkfa89GL0+/3Z9xWxJuhAE+iXCXFXNqr3fc4+/c+VM= root@g2-srv90-c02l-idc diff --git a/docker/stable-diffusion/ssh/id_rsa b/docker/stable-diffusion/ssh/id_rsa new file mode 100644 index 0000000000000000000000000000000000000000..2a0e08072b3ddf944c608d226dc40453e3d8614b --- /dev/null +++ b/docker/stable-diffusion/ssh/id_rsa @@ -0,0 +1,38 @@ +-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn +NhAAAAAwEAAQAAAYEAnWiBl+lgPEvJDitqUvQCTW/627m9j20yUg9Hhg3fblz7LeHB9KTp +sFgKD7Gn/xLYqNHqkKuvBbkt/2ctws8kwR4dtGhKvsDyPgAFtQhI5PD8LiHkHS6r3TEwir +F/WQQCYr5EUGXrt8/4ApNYsrTGu+shoHd6aveoqXYfJ7GiXrH7ihhyXjWNqLHloNl/nwqo +YLXXCEe1kh7rVH3uFYnq+ZQo/cnJ8sK63FOZqHSiLofUEA+qFHZehZt3+LMJkKyBWa3Dsv +6IqGsCe/q7hABGsZxFbAZBpNxRooRjal3ijXs3vxrXCVGUyYwOPY9knir2du6drKriDHGa 
+Aed74Q4A0biUnryXPetsTu/kC6fqQoUEg3PGJI7FVxmwNCYAV7EShlIlXp61mRByg9d1mW +MzFjh7goXYa8Tsfh5sJENXNx12aD5VTwDXf91pG+7o8/3FrB39GLbHeqzmI25uDBMfAkAp +H2vPRi9Pv92fcVsSboQBPolwlxVzaq933OPv3PlTAAAFkJ9Z2omfWdqJAAAAB3NzaC1yc2 +EAAAGBAJ1ogZfpYDxLyQ4ralL0Ak1v+tu5vY9tMlIPR4YN325c+y3hwfSk6bBYCg+xp/8S +2KjR6pCrrwW5Lf9nLcLPJMEeHbRoSr7A8j4ABbUISOTw/C4h5B0uq90xMIqxf1kEAmK+RF +Bl67fP+AKTWLK0xrvrIaB3emr3qKl2Hyexol6x+4oYcl41jaix5aDZf58KqGC11whHtZIe +61R97hWJ6vmUKP3JyfLCutxTmah0oi6H1BAPqhR2XoWbd/izCZCsgVmtw7L+iKhrAnv6u4 +QARrGcRWwGQaTcUaKEY2pd4o17N78a1wlRlMmMDj2PZJ4q9nbunayq4gxxmgHne+EOANG4 +lJ68lz3rbE7v5Aun6kKFBINzxiSOxVcZsDQmAFexEoZSJV6etZkQcoPXdZljMxY4e4KF2G +vE7H4ebCRDVzcddmg+VU8A13/daRvu6PP9xawd/Ri2x3qs5iNubgwTHwJAKR9rz0YvT7/d +n3FbEm6EAT6JcJcVc2qvd9zj79z5UwAAAAMBAAEAAAGAVVqC0y4AOhHaLu3J1LttuDHddG +IOcQSEQcz5Oq6xFjYjGakONCtscGv84K+z6fN9OmXBbLs7x723PIPlY3pRcspyzw2yYidb +89StQ5H/fO1TwWwtNsnE9ccjjEFdTZaH+KU1g+cQX3bNBBCECztNfD6u2EWRQwmSEnnzwO +FoqzKVtDc3ZPBjJTN50bO+qS3tSauws1O3GEndz84NWO6VVMpLQ/q0oAeJrclDS/4ap2KN +0ju8PSZGcOpxrpDewe1XyLzP1HmT6Kcno3nwBw2b3yr1eLQVGps3SCjhDFzTjMPoI9QWUr +BB+oxLI3FbSj/c74l4EiOtio+qDfAooUkPXeayfirBrAeZg4PKS5iD9BiZEIJ7CKJ9L6tN +naqxf7TLPFCiBJHjPJaWPhji3hQeuPl6QwCuHo/EHFKcIrZqBJzLT1LrJq7XgGpE7oSi1Q +2IMfLKIjq7Oxy4tjChKzfUBgGiYpFN+rPtzqHQzvDE6ajQorQm2FeCbsx6Gx6bZVJRAAAA +wQCvTGRI35pthr1psDllTvitJgX/1Ail0n2X3Ul/wg8SQYvPVMjMzIJQGiM3mtzEYJY0Sf +FKeuzNxJOHKmEYF2siZobh4PQPMS33ERqh/soJZ6R9T4dIGyU6KDSDwS7yPg3LkFzv6QZ9 ++07RQdeYNE8/j1mnLhmPtFb8RxemAHdH6zQJlZPfKN0zlN1KkY48njBGeTqM2ub/enDgxd +6eWvKCkv8z3/uhHcoaDXldZpBBLR/cx4gG8qcACUTultP7h30AAADBANCeZqWyUTP35RhX +DMBXaQ0nul3ytIWYkeyC2bu7AbOZwdWp8L5i5JwoKwrlss2Wg83lWqwBnjaEdnuMngUhh6 +4MzjY4aSy3WURP2EPLuegK3I8I3FpDv45Dj+5M2WqbQNXNeDvDTtbToFUJJMbjI5eUECUR +akL6Qcio0KEru4HFaOmV13WHkZbTrcMjzIS7krPCV9s4E4f8RP+GYd3qc9N7y60lcsLVKn +FEuIf/eSQbaPrTw0y/wNLVYXTH2FvpOwAAAMEAwSiZ81FqO4ZtDRFR/LV1YHXOctasdfH3 +JCSj5vEVPWIIspUvDzsLEYuynh4nCYRs5tANY0hP17REkkLfwB6T7i7APinjod4LDTzqnJ +/gA0AFRJO/WEZ/+xJpiuEwtLiNbQjOJkeNhqRkBJzRpbjFG6WR/s/ufEFYC4gs6uqB54FN +T+AUX/t8plq+KGirTxsHzUA0rJPvsJ4PNt8ZEaV6sgPKkx54YI8CrKyDwOln5gphw6oVBV +0IXbm1Eof4uu7JAAAAFnJvb3RAZzItc3J2OTAtYzAybC1pZGMBAgME +-----END OPENSSH PRIVATE KEY----- diff --git a/docker/stable-diffusion/ssh/id_rsa.pub b/docker/stable-diffusion/ssh/id_rsa.pub new file mode 100644 index 0000000000000000000000000000000000000000..92341e3d16c6f043fc14601c09e79bcbdd061c9a --- /dev/null +++ b/docker/stable-diffusion/ssh/id_rsa.pub @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdaIGX6WA8S8kOK2pS9AJNb/rbub2PbTJSD0eGDd9uXPst4cH0pOmwWAoPsaf/Etio0eqQq68FuS3/Zy3CzyTBHh20aEq+wPI+AAW1CEjk8PwuIeQdLqvdMTCKsX9ZBAJivkRQZeu3z/gCk1iytMa76yGgd3pq96ipdh8nsaJesfuKGHJeNY2oseWg2X+fCqhgtdcIR7WSHutUfe4Vier5lCj9ycnywrrcU5modKIuh9QQD6oUdl6Fm3f4swmQrIFZrcOy/oioawJ7+ruEAEaxnEVsBkGk3FGihGNqXeKNeze/GtcJUZTJjA49j2SeKvZ27p2squIMcZoB53vhDgDRuJSevJc962xO7+QLp+pChQSDc8YkjsVXGbA0JgBXsRKGUiVenrWZEHKD13WZYzMWOHuChdhrxOx+HmwkQ1c3HXZoPlVPANd/3Wkb7ujz/cWsHf0Ytsd6rOYjbm4MEx8CQCkfa89GL0+/3Z9xWxJuhAE+iXCXFXNqr3fc4+/c+VM= root@g2-srv90-c02l-idc