Columns: Unnamed: 0 (int64, values 0 to 16k), text_prompt (string, lengths 110 to 62.1k), code_prompt (string, lengths 37 to 152k). The sample rows below are listed by index, each followed by its text_prompt and then its code_prompt.
Row 2,400
Given the following text description, write Python code to implement the functionality described below step by step Description: Using the PyTorch JIT Compiler with Pyro This tutorial shows how to use the PyTorch jit compiler in Pyro models. Summary Step1: Introduction PyTorch 1.0 includes a jit compiler to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode". Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference. The rest of this tutorial focuses on Pyro's jitted inference algorithms Step2: First let's run as usual with an SVI object and Trace_ELBO. Step3: Next to run with a jit compiled inference, we simply replace diff - elbo = Trace_ELBO() + elbo = JitTrace_ELBO() Also note that the AutoDiagonalNormal guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the guide(data) once to initialize, then run the compiled SVI, Step4: Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler. Step5: We can compile the potential energy computation in NUTS using the jit_compile=True argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using ignore_jit_warnings=True. Step6: We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structure Time series models often run on datasets of multiple time series with different lengths. To accomodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$ Non-tensor inputs should be passed as **kwargs to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed **kwargs. Tensor inputs should be passed as *args. These must not determine model structure. However len(args) may determine model structure (as is used e.g. in semisupervised models). To illustrate this with a time series model, we will pass in a sequence of observations as a tensor arg and the sequence length as a non-tensor kwarg Step7: Now lets' run SVI as usual. Step8: Again we'll simply swap in a Jit* implementation diff - elbo = TraceEnum_ELBO(max_plate_nesting=1) + elbo = JitTraceEnum_ELBO(max_plate_nesting=1) Note that we are manually specifying the max_plate_nesting arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
Python Code: import os import torch import pyro import pyro.distributions as dist from torch.distributions import constraints from pyro import poutine from pyro.distributions.util import broadcast_shape from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI from pyro.infer.mcmc import MCMC, NUTS from pyro.infer.autoguide import AutoDiagonalNormal from pyro.optim import Adam smoke_test = ('CI' in os.environ) assert pyro.__version__.startswith('1.7.0') Explanation: Using the PyTorch JIT Compiler with Pyro This tutorial shows how to use the PyTorch jit compiler in Pyro models. Summary: You can use compiled functions in Pyro models. You cannot use pyro primitives inside compiled functions. If your model has static structure, you can use a Jit* version of an ELBO algorithm, e.g. ```diff Trace_ELBO() JitTrace_ELBO() ``` The HMC and NUTS classes accept jit_compile=True kwarg. Models should input all tensors as *args and all non-tensors as **kwargs. Each different value of **kwargs triggers a separate compilation. Use **kwargs to specify all variation in structure (e.g. time series length). To ignore jit warnings in safe code blocks, use with pyro.util.ignore_jit_warnings():. To ignore all jit warnings in HMC or NUTS, pass ignore_jit_warnings=True. Table of contents Introduction A simple model Varying structure End of explanation def model(data): loc = pyro.sample("loc", dist.Normal(0., 10.)) scale = pyro.sample("scale", dist.LogNormal(0., 3.)) with pyro.plate("data", data.size(0)): pyro.sample("obs", dist.Normal(loc, scale), obs=data) guide = AutoDiagonalNormal(model) data = dist.Normal(0.5, 2.).sample((100,)) Explanation: Introduction PyTorch 1.0 includes a jit compiler to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode". Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference. The rest of this tutorial focuses on Pyro's jitted inference algorithms: JitTrace_ELBO, JitTraceGraph_ELBO, JitTraceEnum_ELBO, JitMeanField_ELBO, HMC(jit_compile=True), and NUTS(jit_compile=True). For further reading, see the examples/ directory, where most examples include a --jit option to run in compiled mode. A simple model Let's start with a simple Gaussian model and an autoguide. End of explanation %%time pyro.clear_param_store() elbo = Trace_ELBO() svi = SVI(model, guide, Adam({'lr': 0.01}), elbo) for i in range(2 if smoke_test else 1000): svi.step(data) Explanation: First let's run as usual with an SVI object and Trace_ELBO. End of explanation %%time pyro.clear_param_store() guide(data) # Do any lazy initialization before compiling. elbo = JitTrace_ELBO() svi = SVI(model, guide, Adam({'lr': 0.01}), elbo) for i in range(2 if smoke_test else 1000): svi.step(data) Explanation: Next to run with a jit compiled inference, we simply replace diff - elbo = Trace_ELBO() + elbo = JitTrace_ELBO() Also note that the AutoDiagonalNormal guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. 
Thus we call the guide(data) once to initialize, then run the compiled SVI, End of explanation %%time nuts_kernel = NUTS(model) pyro.set_rng_seed(1) mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data) Explanation: Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler. End of explanation %%time nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True) pyro.set_rng_seed(1) mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data) Explanation: We can compile the potential energy computation in NUTS using the jit_compile=True argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using ignore_jit_warnings=True. End of explanation def model(sequence, num_sequences, length, state_dim=16): # This is a Gaussian HMM model. with pyro.plate("states", state_dim): trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim))) emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.)) emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.)) # We're doing manual data subsampling, so we need to scale to actual data size. with poutine.scale(scale=num_sequences): # We'll use enumeration inference over the hidden x. x = 0 for t in pyro.markov(range(length)): x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]), infer={"enumerate": "parallel"}) pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale), obs=sequence[t]) guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"])) # This is fake data of different lengths. lengths = [24] * 50 + [48] * 20 + [72] * 5 sequences = [torch.randn(length) for length in lengths] Explanation: We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structure Time series models often run on datasets of multiple time series with different lengths. To accomodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$ Non-tensor inputs should be passed as **kwargs to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed **kwargs. Tensor inputs should be passed as *args. These must not determine model structure. However len(args) may determine model structure (as is used e.g. in semisupervised models). To illustrate this with a time series model, we will pass in a sequence of observations as a tensor arg and the sequence length as a non-tensor kwarg: End of explanation %%time pyro.clear_param_store() elbo = TraceEnum_ELBO(max_plate_nesting=1) svi = SVI(model, guide, Adam({'lr': 0.01}), elbo) for i in range(1 if smoke_test else 10): for sequence in sequences: svi.step(sequence, # tensor args num_sequences=len(sequences), length=len(sequence)) # non-tensor args Explanation: Now lets' run SVI as usual. End of explanation %%time pyro.clear_param_store() # Do any lazy initialization before compiling. 
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0])) elbo = JitTraceEnum_ELBO(max_plate_nesting=1) svi = SVI(model, guide, Adam({'lr': 0.01}), elbo) for i in range(1 if smoke_test else 10): for sequence in sequences: svi.step(sequence, # tensor args num_sequences=len(sequences), length=len(sequence)) # non-tensor args Explanation: Again we'll simply swap in a Jit* implementation diff - elbo = TraceEnum_ELBO(max_plate_nesting=1) + elbo = JitTraceEnum_ELBO(max_plate_nesting=1) Note that we are manually specifying the max_plate_nesting arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually. End of explanation
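The prompt above states that ordinary tensor functions (containing no Pyro primitives) can themselves be jit-compiled and called from inside a model, but neither the text nor the code of this row demonstrates that first usage. Below is a minimal sketch of the pattern; the `standardize` helper is a hypothetical example, not part of the original tutorial.

```python
import torch
import pyro
import pyro.distributions as dist

# A pure tensor-to-tensor helper: it contains no pyro.sample/pyro.param calls,
# so compiling it with the PyTorch jit is allowed.
@torch.jit.script
def standardize(x: torch.Tensor) -> torch.Tensor:
    return (x - x.mean()) / (x.std() + 1e-6)

def model(data):
    z = standardize(data)  # the compiled helper is called like any Python function
    loc = pyro.sample("loc", dist.Normal(0., 10.))
    scale = pyro.sample("scale", dist.LogNormal(0., 3.))
    with pyro.plate("data", z.size(0)):
        pyro.sample("obs", dist.Normal(loc, scale), obs=z)
```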
Row 2,401
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https Step1: Input Dataset The dataset that will be used for training is the TCIA CBIS-DDSM dataset. This dataset contains ~2500 mammography images in DICOM format. Each image is given a BI-RADS breast density score from 1 to 4. In this tutorial, we will build a binary classifier that distinguishes between breast density "2" (scattered density) and "3" (heterogeneously dense). These are the two most common and variably assigned scores. In the literature, this is said to be particularly difficult for radiologists to consistently distinguish. Step2: Next, we are going to transfer the DICOM instances to the Cloud Healthcare API. Note Step3: Explore the Cloud Healthcare DICOM dataset (optional) This is an optional section to explore the Cloud Healthcare DICOM dataset. In the following code, we simply just list the studies that we have loaded into the Cloud Healthcare API. You can modify the num_of_studies_to_print parameter to print as many studies as desired. Step4: Convert DICOM to JPEG The ML model that we will build requires that the dataset be in JPEG. We will leverage the Cloud Healthcare API to transcode DICOM to JPEG. First we will create a Google Cloud Storage bucket to hold the output JPEG files. Next, we will use the ExportDicomData API to transform the DICOMs to JPEGs. Step5: Next we will convert the DICOMs to JPEGs using the ExportDicomData. Step6: We will use the Operation name returned from the previous command to poll the status of ExportDicomData. We will poll for operation completeness, which should take a few minutes. When the operation is complete, the operation's done field will be set to true. Meanwhile, you should be able to observe the JPEG images being added to your Google Cloud Storage bucket. Training We will use Transfer Learning to retrain a generically trained trained model to perform breast density classification. Specifically, we will use an Inception V3 checkpoint as the starting point. The neural network we will use can roughly be split into two parts Step7: The following command will kick off a Cloud Dataflow pipeline that runs preprocessing. The script that has the relevant code is preprocess.py. You can check out how the pipeline is progressing here. When the operation is done, we will begin training the classification layers. Step8: Train the Classification Layers of Model using Cloud AI Platform In this step, we will train the classification layers of the model. This consists of just a dense and softmax layer. We will use the bottleneck values calculated at the previous step as the input to these layers. We will use Cloud AI Platform to train the model. The output of stage will be a trained model exported to GCS, which can be used for inference. There are various training parameters below that can be tuned. Step9: We'll invoke Cloud AI Platform with the above parameters. We use a GPU for training to speed up operations. The script that does the training is model.py Step10: You can monitor the status of the training job by running the following command. The job can take a few minutes to start-up. 
Step11: When the job has started, you can observe the logs for the training job by executing the below command (it will poll for new logs every 30 seconds). As training progresses, the logs will output the accuracy on the training set, validation set, as well as the cross entropy. You'll generally see that the accuracy goes up, while the cross entropy goes down as the number of training iterations increases. Finally, when the training is complete, the accuracy of the model on the held-out test set will be output to console. The job can take a few minutes to shut-down. Step12: Deployment and Getting Predictions Cloud AI Platform (CAIP) can also be used to serve the model for inference. The inference model is composed of the pre-trained Inception V3 checkpoint, along with the classification layers we trained above for breast density. First we set the inference model name/version and select a mammography image to test out. Step13: Let's run inference for the image and observe the results. We should see the returned label as well as the score. Step14: Getting Explanations There are limits and caveats when using the Explainable AI feature on CAIP. Read about them here. The Explainable AI feature of CAIP can be used to provide visibility as to why the model returned a prediction for a given input. In this codelab, we are going to use this feature to figure out which pixels in the example mammography image contributed the most to the prediction. This can be useful for debugging model performance and improving the confidence in the model. Read here for more details. See below for sample output. To get started, we will first deploy the model to CAIP that has explainable AI enabled. Step15: We'll create an Explainable AI configuration file. This will allow us to specify the input and the output tensor to correlate. See here for more details. Below we'll correlate the input image tensor with the output of the softmax layer. The Explanation AI configuration file is required to be stored in the model directory. Step16: Finally, let's deploy the model. Step17: Next, we'll ask for the annotated image that includes the Explainable AI overlay. Step18: Next, lets print the annotated image (with overlay). We can see green highlights for the pixels that give the biggest signal for the highest scoring class. Step19: Integration in the clinical workflow To allow medical imaging ML models to be easily integrated into clinical workflows, an inference module can be used. A standalone modality, a PACS system or a DICOM router can push DICOM instances into Cloud Healthcare DICOM stores, allowing ML models to be triggered for inference. This inference results can then be structured into various DICOM formats (e.g. DICOM structured reports) and stored in the Cloud Healthcare API, which can then be retrieved by the customer. The inference module is built as a Docker container and deployed using Kubernetes, allowing you to easily scale your deployment. The dataflow for inference can look as follows (see corresponding diagram below) Step20: Next, we will building the inference module using Cloud Build API. This will create a Docker container that will be stored in Google Container Registry. The inference module code is found in inference.py. The build script used to build the Docker container for this module is cloudbuild.yaml. Progress of build may be found on cloud build dashboard. Step21: Next, we will deploy the inference module to Kubernetes. 
Then we create a Kubernetes Cluster and a Deployment for the inference module. Step22: Next, we will store a mammography DICOM instance from the TCIA dataset to the DICOM store. This is the image that we will request inference for. Pushing this instance to the DICOM store will result in a Pubsub message, which will trigger the inference module. Step23: You should be able to observe the inference module's logs by running the following command. In the logs, you should observe that the inference module successfully received the Pubsub message and ran inference on the DICOM instance. The logs should also include the inference results. It can take a few minutes to start up the Kubernetes deployment, so you may have to run this a few times. Step24: You can also query the Cloud Healthcare DICOMWeb API (using QIDO-RS) to see that the DICOM structured report has been inserted for the study. The structured report contents can be found under tag "0040A730". You can optionally also use WADO-RS to retrieve the instance (e.g. for viewing).
Python Code: %%bash pip3 install git+https://github.com/GoogleCloudPlatform/healthcare.git#subdirectory=imaging/ml/toolkit pip3 install dicomweb-client pip3 install pydicom Explanation: Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This tutorial is for educational purposes purposes only and is not intended for use in clinical diagnosis or clinical decision-making or for any other clinical use. Training/Inference on Breast Density Classification Model on Cloud AI Platform The goal of this tutorial is to train, deploy and run inference on a breast density classification model. Breast density is thought to be a factor for an increase in the risk for breast cancer. This will emphasize using the Cloud Healthcare API in order to store, retreive and transcode medical images (in DICOM format) in a managed and scalable way. This tutorial will focus on using Cloud AI Platform to scalably train and serve the model. Note: This is the Cloud AI Platform version of the AutoML Codelab found here. Requirements A Google Cloud project. Project has Cloud Healthcare API enabled. Project has Cloud Machine Learning API enabled. Project has Cloud Dataflow API enabled. Project has Cloud Build API enabled. Project has Kubernetes engine API enabled. Project has Cloud Resource Manager API enabled. Notebook dependencies We will need to install the hcls_imaging_ml_toolkit package found here. This toolkit helps make working with DICOM objects and the Cloud Healthcare API easier. In addition, we will install dicomweb-client to help us interact with the DIOCOMWeb API and pydicom which is used to help up construct DICOM objects. End of explanation project_id = "MY_PROJECT" # @param location = "us-central1" dataset_id = "MY_DATASET" # @param dicom_store_id = "MY_DICOM_STORE" # @param # Input data used by Cloud ML must be in a bucket with the following format. cloud_bucket_name = "gs://" + project_id + "-vcm" %%bash -s {project_id} {location} {cloud_bucket_name} # Create bucket. gsutil -q mb -c regional -l $2 $3 # Allow Cloud Healthcare API to write to bucket. PROJECT_NUMBER=`gcloud projects describe $1 | grep projectNumber | sed 's/[^0-9]//g'` SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com" COMPUTE_ENGINE_SERVICE_ACCOUNT="${PROJECT_NUMBER}[email protected]" gsutil -q iam ch serviceAccount:${SERVICE_ACCOUNT}:objectAdmin $3 gsutil -q iam ch serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT}:objectAdmin $3 gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${SERVICE_ACCOUNT} --role=roles/pubsub.publisher gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/pubsub.admin # Allow compute service account to create datasets and dicomStores. 
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.dicomStoreAdmin gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.datasetAdmin import json import os import google.auth from google.auth.transport.requests import AuthorizedSession from hcls_imaging_ml_toolkit import dicom_path credentials, project = google.auth.default() authed_session = AuthorizedSession(credentials) # Path to Cloud Healthcare API. HEALTHCARE_API_URL = 'https://healthcare.googleapis.com/v1' # Create Cloud Healthcare API dataset. path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets?dataset_id=' + dataset_id) headers = {'Content-Type': 'application/json'} resp = authed_session.post(path, headers=headers) assert resp.status_code == 200, 'error creating Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text) print('Full response:\n{0}'.format(resp.text)) # Create Cloud Healthcare API DICOM store. path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets', dataset_id, 'dicomStores?dicom_store_id=' + dicom_store_id) resp = authed_session.post(path, headers=headers) assert resp.status_code == 200, 'error creating DICOM store, code: {0}, response: {1}'.format(resp.status_code, resp.text) print('Full response:\n{0}'.format(resp.text)) dicom_store_path = dicom_path.Path(project_id, location, dataset_id, dicom_store_id) Explanation: Input Dataset The dataset that will be used for training is the TCIA CBIS-DDSM dataset. This dataset contains ~2500 mammography images in DICOM format. Each image is given a BI-RADS breast density score from 1 to 4. In this tutorial, we will build a binary classifier that distinguishes between breast density "2" (scattered density) and "3" (heterogeneously dense). These are the two most common and variably assigned scores. In the literature, this is said to be particularly difficult for radiologists to consistently distinguish. End of explanation # Store DICOM instances in Cloud Healthcare API. path = 'https://healthcare.googleapis.com/v1/{}:import'.format(dicom_store_path) headers = {'Content-Type': 'application/json'} body = { 'gcsSource': { 'uri': 'gs://gcs-public-data--healthcare-tcia-cbis-ddsm/dicom/**' } } resp = authed_session.post(path, headers=headers, json=body) assert resp.status_code == 200, 'error creating Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text) print('Full response:\n{0}'.format(resp.text)) response = json.loads(resp.text) operation_name = response['name'] import time def wait_for_operation_completion(path, timeout, sleep_time=30): success = False while time.time() < timeout: print('Waiting for operation completion...') resp = authed_session.get(path) assert resp.status_code == 200, 'error polling for Operation results, code: {0}, response: {1}'.format(resp.status_code, resp.text) response = json.loads(resp.text) if 'done' in response: if response['done'] == True and 'error' not in response: success = True; break time.sleep(sleep_time) print('Full response:\n{0}'.format(resp.text)) assert success, "operation did not complete successfully in time limit" print('Success!') return response path = os.path.join(HEALTHCARE_API_URL, operation_name) timeout = time.time() + 40*60 # Wait up to 40 minutes. 
_ = wait_for_operation_completion(path, timeout) Explanation: Next, we are going to transfer the DICOM instances to the Cloud Healthcare API. Note: We are transfering >100GB of data so this will take some time to complete End of explanation num_of_studies_to_print = 2 # @param path = os.path.join(HEALTHCARE_API_URL, dicom_store_path.dicomweb_path_str, 'studies') resp = authed_session.get(path) assert resp.status_code == 200, 'error querying Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text) response = json.loads(resp.text) print(json.dumps(response[:num_of_studies_to_print], indent=2)) Explanation: Explore the Cloud Healthcare DICOM dataset (optional) This is an optional section to explore the Cloud Healthcare DICOM dataset. In the following code, we simply just list the studies that we have loaded into the Cloud Healthcare API. You can modify the num_of_studies_to_print parameter to print as many studies as desired. End of explanation jpeg_bucket = cloud_bucket_name + "/images/" Explanation: Convert DICOM to JPEG The ML model that we will build requires that the dataset be in JPEG. We will leverage the Cloud Healthcare API to transcode DICOM to JPEG. First we will create a Google Cloud Storage bucket to hold the output JPEG files. Next, we will use the ExportDicomData API to transform the DICOMs to JPEGs. End of explanation %%bash -s {jpeg_bucket} {project_id} {location} {dataset_id} {dicom_store_id} gcloud beta healthcare --project $2 dicom-stores export gcs $5 --location=$3 --dataset=$4 --mime-type="image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50" --gcs-uri-prefix=$1 Explanation: Next we will convert the DICOMs to JPEGs using the ExportDicomData. End of explanation # GCS Bucket to store output TFRecords. bottleneck_bucket = cloud_bucket_name + "/bottleneck" # @param # Percentage of dataset to allocate for validation and testing. validation_percentage = 10 # @param testing_percentage = 10 # @param # Number of Dataflow workers. This can be increased to improve throughput. dataflow_num_workers = 5 # @param # Staging bucket for training. staging_bucket = cloud_bucket_name # @param Explanation: We will use the Operation name returned from the previous command to poll the status of ExportDicomData. We will poll for operation completeness, which should take a few minutes. When the operation is complete, the operation's done field will be set to true. Meanwhile, you should be able to observe the JPEG images being added to your Google Cloud Storage bucket. Training We will use Transfer Learning to retrain a generically trained trained model to perform breast density classification. Specifically, we will use an Inception V3 checkpoint as the starting point. The neural network we will use can roughly be split into two parts: "feature extraction" and "classification". In transfer learning, we take advantage of a pre-trained (checkpoint) model to do the "feature extraction", and add a few layers to perform the "classification" relevant to the specific problem. In this case, we are adding aa dense layer with two neurons to do the classification and a softmax layer to normalize the classification score. The mammography images will be classified as either "2" (scattered density) or "3" (heterogeneously dense). See below for diagram of the training process: The "feature extraction" and the "classification" part will be done in the following steps, respectively. 
Preprocess Raw Images using Cloud Dataflow In this step, we will resize images to 300x300 (required for Inception V3) and will run each image through the checkpoint Inception V3 model to calculate the bottleneck values. This is the feature vector for the output of the feature extraction part of the model (the part that is already pre-trained). Since this process is resource intensive, we will utilize Cloud Dataflow in order to do this scalably. We extract the features and calculate the bottleneck values here for performance reasons - so that we don't have to recalculate them during training. The output of this process will be a collection of TFRecords storing the bottleneck value for each image in the input dataset. This TFRecord format is commonly used to store Tensors in binary format for storage. Finally, in this step, we will also split the input dataset into training, validation or testing. The percentage of each can be modified using the parameters below. End of explanation %%bash -s {project_id} {jpeg_bucket} {bottleneck_bucket} {validation_percentage} {testing_percentage} {dataflow_num_workers} {staging_bucket} # Install Python library dependencies. pip install virtualenv python3 -m virtualenv env source env/bin/activate pip install tensorflow==1.15.0 google-apitools apache_beam[gcp]==2.18.0 # Start job in Cloud Dataflow and wait for completion. python3 -m scripts.preprocess.preprocess \ --project $1 \ --input_path $2 \ --output_path "$3/record" \ --num_workers $6 \ --temp_location "$7/temp" \ --staging_location "$7/staging" \ --validation_percentage $4 \ --testing_percentage $5 Explanation: The following command will kick off a Cloud Dataflow pipeline that runs preprocessing. The script that has the relevant code is preprocess.py. You can check out how the pipeline is progressing here. When the operation is done, we will begin training the classification layers. End of explanation training_steps = 1000 # @param learning_rate = 0.01 # @param # Location of exported model. exported_model_bucket = cloud_bucket_name + "/models" # @param # Inference requires the exported model to be versioned (by default we choose version 1). exported_model_versioned_uri = exported_model_bucket + "/1" Explanation: Train the Classification Layers of Model using Cloud AI Platform In this step, we will train the classification layers of the model. This consists of just a dense and softmax layer. We will use the bottleneck values calculated at the previous step as the input to these layers. We will use Cloud AI Platform to train the model. The output of stage will be a trained model exported to GCS, which can be used for inference. There are various training parameters below that can be tuned. End of explanation %%bash -s {location} {bottleneck_bucket} {staging_bucket} {training_steps} {learning_rate} {exported_model_versioned_uri} # Start training on CAIP. gcloud ai-platform jobs submit training breast_density \ --python-version 3.7 \ --runtime-version 1.15 \ --scale-tier BASIC_GPU \ --module-name "scripts.trainer.model" \ --package-path scripts \ --staging-bucket $3 \ --region $1 \ -- \ --bottleneck_dir "$2/record" \ --training_steps $4 \ --learning_rate $5 \ --export_model_path $6 Explanation: We'll invoke Cloud AI Platform with the above parameters. We use a GPU for training to speed up operations. The script that does the training is model.py End of explanation !gcloud ai-platform jobs describe breast_density Explanation: You can monitor the status of the training job by running the following command. 
The job can take a few minutes to start-up. End of explanation !gcloud ai-platform jobs stream-logs breast_density --polling-interval=30 Explanation: When the job has started, you can observe the logs for the training job by executing the below command (it will poll for new logs every 30 seconds). As training progresses, the logs will output the accuracy on the training set, validation set, as well as the cross entropy. You'll generally see that the accuracy goes up, while the cross entropy goes down as the number of training iterations increases. Finally, when the training is complete, the accuracy of the model on the held-out test set will be output to console. The job can take a few minutes to shut-down. End of explanation model_name = "breast_density" # @param deployment_version = "deployment" # @param # The full name of the model. full_model_name = "projects/" + project_id + "/models/" + model_name + "/versions/" + deployment_version !gcloud ai-platform models create $model_name --regions $location !gcloud ai-platform versions create $deployment_version --model $model_name --origin $exported_model_versioned_uri --runtime-version 1.15 --python-version 3.7 # DICOM Study/Series UID of input mammography image that we'll test. input_mammo_study_uid = "1.3.6.1.4.1.9590.100.1.2.85935434310203356712688695661986996009" # @param input_mammo_series_uid = "1.3.6.1.4.1.9590.100.1.2.374115997511889073021386151921807063992" # @param input_mammo_instance_uid = "1.3.6.1.4.1.9590.100.1.2.289923739312470966435676008311959891294" # @param Explanation: Deployment and Getting Predictions Cloud AI Platform (CAIP) can also be used to serve the model for inference. The inference model is composed of the pre-trained Inception V3 checkpoint, along with the classification layers we trained above for breast density. First we set the inference model name/version and select a mammography image to test out. End of explanation from base64 import b64encode, b64decode import io from PIL import Image import tensorflow as tf _INCEPTION_V3_SIZE = 299 input_file_path = os.path.join(jpeg_bucket, input_mammo_study_uid, input_mammo_series_uid, input_mammo_instance_uid + ".jpg") with tf.io.gfile.GFile(input_file_path, 'rb') as example_img: # Resize the image to InceptionV3 input size. im = Image.open(example_img).resize((_INCEPTION_V3_SIZE,_INCEPTION_V3_SIZE)) imgByteArr = io.BytesIO() im.save(imgByteArr, format='JPEG') b64str = b64encode(imgByteArr.getvalue()).decode('utf-8') with open('input_image.json', 'a') as outfile: json.dump({'inputs': [{'b64': b64str}]}, outfile) outfile.write('\n') predictions = !gcloud ai-platform predict --model $model_name --version $deployment_version --json-instances='input_image.json' print(predictions) Explanation: Let's run inference for the image and observe the results. We should see the returned label as well as the score. End of explanation explainable_version = "explainable_ai" # @param Explanation: Getting Explanations There are limits and caveats when using the Explainable AI feature on CAIP. Read about them here. The Explainable AI feature of CAIP can be used to provide visibility as to why the model returned a prediction for a given input. In this codelab, we are going to use this feature to figure out which pixels in the example mammography image contributed the most to the prediction. This can be useful for debugging model performance and improving the confidence in the model. Read here for more details. See below for sample output. 
To get started, we will first deploy the model to CAIP that has explainable AI enabled. End of explanation import json import os import scripts.constants as constants explainable_metadata = { "outputs": { "probability": { "output_tensor_name": constants.OUTPUT_SOFTMAX_TENSOR_NAME + ":0", } }, "inputs": { "img_bytes": { "input_tensor_name": constants.INPUT_PIXELS_TENSOR_NAME + ":0", "input_tensor_type": "numeric", "modality": "image", } }, "framework": "tensorflow" } # The configuration file in the CAIP model directory. with tf.io.gfile.GFile(os.path.join(exported_model_versioned_uri, 'explanation_metadata.json'), 'w') as output_file: json.dump(explainable_metadata, output_file) Explanation: We'll create an Explainable AI configuration file. This will allow us to specify the input and the output tensor to correlate. See here for more details. Below we'll correlate the input image tensor with the output of the softmax layer. The Explanation AI configuration file is required to be stored in the model directory. End of explanation !gcloud beta ai-platform versions create $explainable_version \ --model $model_name\ --origin $exported_model_versioned_uri \ --runtime-version 1.15 \ --python-version 3.7 \ --machine-type n1-standard-4 \ --explanation-method integrated-gradients \ --num-integral-steps 25 Explanation: Finally, let's deploy the model. End of explanation explanations = !gcloud beta ai-platform explain --model $model_name --version $explainable_version --json-instances='input_image.json' response = json.loads(explanations.s) Explanation: Next, we'll ask for the annotated image that includes the Explainable AI overlay. End of explanation import base64 import io from PIL import Image assert len(response['explanations']) == 1 LABELS = ['2', '3'] prediction = response['explanations'][0] predicted_label = LABELS[prediction['attributions_by_label'][0]['label_index']] confidence = prediction['attributions_by_label'][0]['example_score'] print('Predicted class: ', predicted_label) print('Confidence: ', confidence) b64str = prediction['attributions_by_label'][0]['attributions']['img_bytes']['b64_jpeg'] display(Image.open(io.BytesIO(base64.b64decode(b64str)))) Explanation: Next, lets print the annotated image (with overlay). We can see green highlights for the pixels that give the biggest signal for the highest scoring class. End of explanation # Pubsub config. pubsub_topic_id = "MY_PUBSUB_TOPIC_ID" # @param pubsub_subscription_id = "MY_PUBSUB_SUBSRIPTION_ID" # @param # DICOM Store for store DICOM used for inference. inference_dicom_store_id = "MY_INFERENCE_DICOM_STORE" # @param pubsub_subscription_name = "projects/" + project_id + "/subscriptions/" + pubsub_subscription_id inference_dicom_store_path = dicom_path.FromPath(dicom_store_path, store_id=inference_dicom_store_id) %%bash -s {pubsub_topic_id} {pubsub_subscription_id} {project_id} {location} {dataset_id} {inference_dicom_store_id} # Create Pubsub channel. gcloud beta pubsub topics create $1 gcloud beta pubsub subscriptions create $2 --topic $1 # Create a Cloud Healthcare DICOM store that published on given Pubsub topic. 
TOKEN=`gcloud beta auth application-default print-access-token` NOTIFICATION_CONFIG="{notification_config: {pubsub_topic: \"projects/$3/topics/$1\"}}" curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${TOKEN}" -d "${NOTIFICATION_CONFIG}" https://healthcare.googleapis.com/v1/projects/$3/locations/$4/datasets/$5/dicomStores?dicom_store_id=$6 # Enable Cloud Healthcare API to publish on given Pubsub topic. PROJECT_NUMBER=`gcloud projects describe $3 | grep projectNumber | sed 's/[^0-9]//g'` SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com" gcloud beta pubsub topics add-iam-policy-binding $1 --member="serviceAccount:${SERVICE_ACCOUNT}" --role="roles/pubsub.publisher" Explanation: Integration in the clinical workflow To allow medical imaging ML models to be easily integrated into clinical workflows, an inference module can be used. A standalone modality, a PACS system or a DICOM router can push DICOM instances into Cloud Healthcare DICOM stores, allowing ML models to be triggered for inference. This inference results can then be structured into various DICOM formats (e.g. DICOM structured reports) and stored in the Cloud Healthcare API, which can then be retrieved by the customer. The inference module is built as a Docker container and deployed using Kubernetes, allowing you to easily scale your deployment. The dataflow for inference can look as follows (see corresponding diagram below): Client application uses STOW-RS to push a new DICOM instance to the Cloud Healthcare DICOMWeb API. The insertion of the DICOM instance triggers a Cloud Pubsub message to be published. The inference module will pull incoming Pubsub messages and will recieve a message for the previously inserted DICOM instance. The inference module will retrieve the instance in JPEG format from the Cloud Healthcare API using WADO-RS. The inference module will send the JPEG bytes to the model hosted on Cloud AI Platform. Cloud AI Platform will return the prediction back to the inference module. The inference module will package the prediction into a DICOM instance. This can potentially be a DICOM structured report, presentation state, or even burnt text on the image. In this codelab, we will focus on just DICOM structured reports, specifically Comprehensive Structured Reports. The structured report is then stored back in the Cloud Healthcare API using STOW-RS. The client application can query for (or retrieve) the structured report by using QIDO-RS or WADO-RS. Pubsub can also be used by the client application to poll for the newly created DICOM structured report instance. To begin, we will create a new DICOM store that will store our inference source (DICOM mammography instance) and results (DICOM structured report). In order to enable Pubsub notifications to be triggered on inserted instances, we will give the DICOM store a Pubsub channel to publish on. End of explanation %%bash -s {project_id} PROJECT_ID=$1 gcloud builds submit --config scripts/inference/cloudbuild.yaml --timeout 1h scripts/inference Explanation: Next, we will building the inference module using Cloud Build API. This will create a Docker container that will be stored in Google Container Registry. The inference module code is found in inference.py. The build script used to build the Docker container for this module is cloudbuild.yaml. Progress of build may be found on cloud build dashboard. 
End of explanation %%bash -s {project_id} {location} {pubsub_subscription_name} {full_model_name} {inference_dicom_store_path} gcloud container clusters create inference-module --region=$2 --scopes https://www.googleapis.com/auth/cloud-platform --num-nodes=1 PROJECT_ID=$1 SUBSCRIPTION_PATH=$3 MODEL_PATH=$4 INFERENCE_DICOM_STORE_PATH=$5 cat <<EOF | kubectl create -f - apiVersion: extensions/v1beta1 kind: Deployment metadata: name: inference-module namespace: default spec: replicas: 1 template: metadata: labels: app: inference-module spec: containers: - name: inference-module image: gcr.io/${PROJECT_ID}/inference-module:latest command: - "/opt/inference_module/bin/inference_module" - "--subscription_path=${SUBSCRIPTION_PATH}" - "--model_path=${MODEL_PATH}" - "--dicom_store_path=${INFERENCE_DICOM_STORE_PATH}" - "--prediction_service=CAIP" EOF Explanation: Next, we will deploy the inference module to Kubernetes. Then we create a Kubernetes Cluster and a Deployment for the inference module. End of explanation # DICOM Study/Series UID of input mammography image that we'll push for inference. input_mammo_study_uid = "1.3.6.1.4.1.9590.100.1.2.85935434310203356712688695661986996009" input_mammo_series_uid = "1.3.6.1.4.1.9590.100.1.2.374115997511889073021386151921807063992" input_mammo_instance_uid = "1.3.6.1.4.1.9590.100.1.2.289923739312470966435676008311959891294" from google.cloud import storage from dicomweb_client.api import DICOMwebClient from dicomweb_client import session_utils from pydicom storage_client = storage.Client() bucket = storage_client.bucket('gcs-public-data--healthcare-tcia-cbis-ddsm', user_project=project_id) blob = bucket.blob("dicom/{}/{}/{}.dcm".format(input_mammo_study_uid,input_mammo_series_uid,input_mammo_instance_uid)) blob.download_to_filename('example.dcm') dataset = pydicom.dcmread('example.dcm') session = session_utils.create_session_from_gcp_credentials() study_path = dicom_path.FromPath(inference_dicom_store_path, study_uid=input_mammo_study_uid) dicomweb_url = os.path.join(HEALTHCARE_API_URL, study_path.dicomweb_path_str) dcm_client = DICOMwebClient(dicomweb_url, session) dcm_client.store_instances(datasets=[dataset]) Explanation: Next, we will store a mammography DICOM instance from the TCIA dataset to the DICOM store. This is the image that we will request inference for. Pushing this instance to the DICOM store will result in a Pubsub message, which will trigger the inference module. End of explanation !kubectl logs -l app=inference-module Explanation: You should be able to observe the inference module's logs by running the following command. In the logs, you should observe that the inference module successfully recieved the the Pubsub message and ran inference on the DICOM instance. The logs should also include the inference results. It can take a few minutes to start-up the Kubernetes deployment, so you many have to run this a few times. End of explanation dcm_client.search_for_instances(study_path.study_uid, fields=['all']) Explanation: You can also query the Cloud Healthcare DICOMWeb API (using QIDO-RS) to see that the DICOM structured report has been inserted for the study. The structured report contents can be found under tag "0040A730". You can optionally also use WADO-RS to recieve the instance (e.g. for viewing). End of explanation
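The last step of this row checks for the structured report with QIDO-RS but stops short of reading it back. As a sketch only, and assuming the `dcm_client` and `study_path` objects from the row's code are still in scope, the report could be pulled with WADO-RS and its (0040,A730) Content Sequence inspected roughly as below; the tag handling follows the standard DICOM JSON model, and you may need to adjust for your dicomweb-client version.

```python
# Find SR instances in the study, retrieve each one, and print its content items.
instances = dcm_client.search_for_instances(study_path.study_uid, fields=['all'])
for inst in instances:
    modality = inst.get('00080060', {}).get('Value', [''])[0]   # (0008,0060) Modality
    if modality != 'SR':
        continue
    series_uid = inst['0020000E']['Value'][0]                   # (0020,000E) SeriesInstanceUID
    sop_uid = inst['00080018']['Value'][0]                      # (0008,0018) SOPInstanceUID
    sr = dcm_client.retrieve_instance(study_path.study_uid, series_uid, sop_uid)
    for item in getattr(sr, 'ContentSequence', []):             # (0040,A730) Content Sequence
        print(item.get('ValueType', ''), getattr(item, 'TextValue', ''))
```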
Row 2,402
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>The BurnMan Tutorial</h1> Part 2 Step1: After initialization, the "print" method can be used to directly print molar, weight or atomic amounts. Optional variables control the print precision and normalization of amounts. Step2: Let's do something a little more complicated. When we're making a starting mix for petrological experiments, we often have to add additional components. For example, we add iron as Fe2O3 even if we want a reduced oxide starting mix, because FeO is not a stable stoichiometric compound. Here we show how to use BurnMan to create such mixes. In this case, let's say we want to create a KLB-1 starting mix (Takahashi, 1986). We know the weight proportions of the various oxides (including only components in the NCFMAS system) Step3: However, this composition is not the composition we wish to make in the lab. We need to make the following changes Step4: Then we can change the component set to the oxidised, carbonated compounds and print the desired starting compositions, for 2 g total mass
Python Code: from burnman import Composition olivine_composition = Composition({'MgO': 1.8, 'FeO': 0.2, 'SiO2': 1.}, 'weight') Explanation: <h1>The BurnMan Tutorial</h1> Part 2: The Composition Class This file is part of BurnMan - a thermoelastic and thermodynamic toolkit for the Earth and Planetary Sciences Copyright (C) 2012 - 2021 by the BurnMan team, released under the GNU GPL v2 or later. Introduction This ipython notebook is the second in a series designed to introduce new users to the code structure and functionalities present in BurnMan. <b>Demonstrates</b> burnman.Composition: Defining Composition objects, converting between molar, weight and atomic amounts, changing component bases. and modifying compositions. Everything in BurnMan and in this tutorial is defined in SI units. The Composition class It is quite common in petrology to want to perform simple manipulations on chemical compositions. These manipulations might include: - converting between molar and weight percent of oxides or elements - changing from one compositional basis to another (e.g. 'FeO' and 'Fe2O3' to 'Fe' and 'O') - adding new chemical components to an existing composition in specific proportions with existing components. These operations are easy to perform in Excel (for example), but errors are surprisingly common, and are even present in published literature. BurnMan's Composition class is designed to make some of these common tasks easy and hopefully less error prone. Composition objects are initialised with a dictionary of component amounts (in any format), followed by a string that indicates whether that composition is given in "molar" amounts or "weight" (more technically mass, but weight is a more commonly used word in chemistry). End of explanation olivine_composition.print('molar', significant_figures=4, normalization_component='SiO2', normalization_amount=1.) olivine_composition.print('weight', significant_figures=4, normalization_component='total', normalization_amount=1.) olivine_composition.print('atomic', significant_figures=4, normalization_component='total', normalization_amount=7.) Explanation: After initialization, the "print" method can be used to directly print molar, weight or atomic amounts. Optional variables control the print precision and normalization of amounts. End of explanation KLB1 = Composition({'SiO2': 44.48, 'Al2O3': 3.59, 'FeO': 8.10, 'MgO': 39.22, 'CaO': 3.44, 'Na2O': 0.30}, 'weight') Explanation: Let's do something a little more complicated. When we're making a starting mix for petrological experiments, we often have to add additional components. For example, we add iron as Fe2O3 even if we want a reduced oxide starting mix, because FeO is not a stable stoichiometric compound. Here we show how to use BurnMan to create such mixes. In this case, let's say we want to create a KLB-1 starting mix (Takahashi, 1986). We know the weight proportions of the various oxides (including only components in the NCFMAS system): End of explanation CO2_molar = KLB1.molar_composition['CaO'] + KLB1.molar_composition['Na2O'] O_molar = KLB1.molar_composition['FeO']*0.5 KLB1.add_components(composition_dictionary = {'CO2': CO2_molar, 'O': O_molar}, unit_type = 'molar') Explanation: However, this composition is not the composition we wish to make in the lab. We need to make the following changes: - $\text{CaO}$ and $\text{Na}_2\text{O}$ should be added as $\text{CaCO}_3$ and $\text{Na}_2\text{CO}_3$. 
- $\text{FeO}$ should be added as $\text{Fe}_2\text{O}_3$ First, we change the bulk composition to satisfy these requirements. The molar amounts of the existing components are stored in a dictionary "molar_composition", and can be used to determine the amounts of CO2 and O to add to the bulk composition: End of explanation KLB1.change_component_set(['Na2CO3', 'CaCO3', 'Fe2O3', 'MgO', 'Al2O3', 'SiO2']) KLB1.print('weight', significant_figures=4, normalization_amount=2.) Explanation: Then we can change the component set to the oxidised, carbonated compounds and print the desired starting compositions, for 2 g total mass: End of explanation
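As a quick sanity check on what the Composition class is doing, the weight-to-molar conversion in the olivine example can be reproduced by hand. The sketch below is independent of BurnMan and uses rounded reference molar masses.

```python
# Convert the 1.8 g MgO / 0.2 g FeO / 1.0 g SiO2 mix to moles and
# normalise so that SiO2 = 1, matching the normalization used above.
molar_masses = {'MgO': 40.304, 'FeO': 71.844, 'SiO2': 60.084}  # g/mol (rounded)
weights = {'MgO': 1.8, 'FeO': 0.2, 'SiO2': 1.0}                # grams

moles = {ox: w / molar_masses[ox] for ox, w in weights.items()}
per_SiO2 = {ox: n / moles['SiO2'] for ox, n in moles.items()}
print(per_SiO2)  # approximately {'MgO': 2.68, 'FeO': 0.17, 'SiO2': 1.0}
```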
Row 2,403
Given the following text description, write Python code to implement the functionality described below step by step Description: Regression based on Iris dataset We'll use the Iris dataset in the regression setup - not use the target variable (typical classification case) - use petal width (cm) as dependent variable using the others as independent Step1: Regression with Tree Classifier Step2: Score of Regression Some evaluation metrics (like mean squared error) are naturally descending scores (the smallest score is best). <br/> This is important to note, because some scores that by definition can never be negative will be reported as negative values. <br/> In order to keep this clear Step3: Regression Performance over tree depth
Python Code: import sklearn.datasets as datasets import pandas as pd iris=datasets.load_iris() df = pd.DataFrame(iris.data, columns=iris.feature_names) df.head(2) Explanation: Regression based on Iris dataset We ll use the Iris dataset in the regression setup - not use the target variable (typicall classification case) - use petal width (cm) as dependent variable using others as independent End of explanation independent_vars = ['sepal length (cm)','sepal width (cm)', 'petal length (cm)'] dependent_var = 'petal width (cm)' X = df[independent_vars] y = df[dependent_var] from sklearn import tree model = tree.DecisionTreeRegressor() model.fit(X,y) # get feature importances importances = model.feature_importances_ pd.Series(importances, index=independent_vars) Explanation: Regression with Tree Classifier End of explanation from sklearn import model_selection results = model_selection.cross_val_score(tree.DecisionTreeRegressor(), X, y, cv=10, scoring='neg_mean_squared_error') print("MSE: %.3f (%.3f)") % (results.mean(), results.std()) Explanation: Score of Regression Some evaluation metrics (like mean squared error) are naturally descending scores (the smallest score is best) </br> This is important to note, because some scores will be reported as negative that by definition can never be negative. </br> In order to keep this clear: metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error which return the negated value of the metric. End of explanation import matplotlib.pyplot as plt from sklearn import model_selection scores = [] depths = [] for depth in range(1, 25): scores.append( neg_mean_squared_error = model_selection.cross_val_score(tree.DecisionTreeRegressor(max_depth = depth), X, y, cv=10, scoring='neg_mean_squared_error'), neg_median_absolute_error = model_selection.cross_val_score(tree.DecisionTreeRegressor(max_depth = depth), X, y, cv=10, scoring='neg_median_absolute_error') neg_median_absolute_error = model_selection.cross_val_score(tree.DecisionTreeRegressor(max_depth = depth), X, y, cv=10, scoring='neg_median_absolute_error') ) depths.append(depth) _ = pd.DataFrame(data = scores, index = depths, columns = ['score']).plot() # looks like a best depth around 5 is the best choice for regression Explanation: Regression Performance over tree depth End of explanation
2,404
Given the following text description, write Python code to implement the functionality described below step by step Description: Step 1 Step1: Each row represents a different person and each column is on of many physical measurments lke the position of their arm, or forearm and each person gets one of 5 labels (classes) like sitting, standing, jumping, running and jogging. Step2: Step 2 Step3: Step 3 Step4: Step 4 Step5: Step 5 Step6: t-distributed Stochastic Neighbor Embedding (t-SNE) visualization Step7: Scatter plot the sample points among 5 classes
Python Code: dataframe_all = pd.read_csv("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv") num_rows = dataframe_all.shape[0] print('No. of rows:', num_rows) dataframe_all.head() Explanation: Step 1: download the data End of explanation #List all fators from our response variable dataframe_all.classe.unique() Explanation: Each row represents a different person and each column is on of many physical measurments lke the position of their arm, or forearm and each person gets one of 5 labels (classes) like sitting, standing, jumping, running and jogging. End of explanation # count the number of missing elements (NaN) in each column counter_nan = dataframe_all.isnull().sum() counter_without_nan = counter_nan[counter_nan==0] print('Columns without Nan:', counter_without_nan ) # remove the columns with missing elements dataframe_all = dataframe_all[counter_without_nan.keys()] # remove the first 7 columns which contain no discriminative information dataframe_all = dataframe_all.iloc[:,7:] # the list of columns (the last column is the class label) columns = dataframe_all.columns print (columns) Explanation: Step 2: remove useless data End of explanation # get x and convert it to numpy array x = dataframe_all.iloc[:,:-1].values standard_scaler = StandardScaler() x_std = standard_scaler.fit_transform(x) Explanation: Step 3: get features (x) and scale the features End of explanation # get class label data y = dataframe_all.iloc[:,-1].values # encode the class label class_labels = np.unique(y) label_encoder = LabelEncoder() y = label_encoder.fit_transform(y) Explanation: Step 4: get class labels y and then encode it into number End of explanation test_percentage = 0.3 x_train, x_test, y_train, y_test = train_test_split(x_std, y, test_size = test_percentage, random_state = 0) Explanation: Step 5: split the data into training set and test set End of explanation from sklearn.manifold import TSNE tsne = TSNE(n_components=2, random_state=0) x_test_2d = tsne.fit_transform(x_test) Explanation: t-distributed Stochastic Neighbor Embedding (t-SNE) visualization End of explanation markers=('s', 'd', 'o', '^', 'v') color_map = {0:'red', 1:'blue', 2:'lightgreen', 3:'purple', 4:'cyan'} plt.figure() plt.figure(figsize=(10,10)) for idx, cl in enumerate(np.unique(y_test)): plt.scatter(x=x_test_2d[y_test==cl,0], y=x_test_2d[y_test==cl,1], c=color_map[idx], marker=markers[idx], label=cl) plt.xlabel('X in t-SNE') plt.ylabel('Y in t-SNE') plt.legend(loc='upper right') plt.title('t-SNE visualization of test data') plt.show() Explanation: Scatter plot the sample points among 5 classes End of explanation
2,405
Given the following text description, write Python code to implement the functionality described below step by step Description: How to select a classifier This document will guide you through the process of selecting a classifier for your problem. Note that there is no established, scientifically proven rule-set for selecting a classifier to solve a general multi-label classification problem. Succesful approaches often come from mixing intuitions about which classifiers are worth considering, decomposition in to subproblems, and experimental model selection. There are two things you need to consider before choosing a classifier Step1: Usually classifier's performance depends on three elements Step2: We can use numpy and the list of rows with non-zero values in output matrices to get the number of unique label combinations. Step3: Number of features can be found in the shape of the input matrix Step4: Intutions Generalization quality measures There are several ways to measure a classifier's generalization quality Step5: Performance Scikit-multilearn provides 11 classifiers that allow a strong variety of classification scenarios through label partitioning and ensemble classification, let's look at the important factors influencing performance. $ g(x) $ denotes the performance of the base classifier in some of the classifiers. <dl> <dt>[BRkNNaClassifier](api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNaClassifier), [BRkNNbClassifier](api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNbClassifier)</dt> <dd> **Parameter estimation needed** Step6: These values can be then used directly with the classifier. Estimating hyper-parameter k for embedded classifiers In problem transformation classifiers we often need to estimate not only a hyper parameter, but also the parameter of the base classifier, and also - maybe even the problem transformation method. Let's take a look at this on a three-layer construction of ensemble of problem transformation classifiers using label space partitioning, the parameters include
Python Code: from skmultilearn.dataset import load_dataset X_train, y_train, feature_names, label_names = load_dataset('emotions', 'train') X_test, y_test, _, _ =load_dataset('emotions', 'test') Explanation: How to select a classifier This document will guide you through the process of selecting a classifier for your problem. Note that there is no established, scientifically proven rule-set for selecting a classifier to solve a general multi-label classification problem. Succesful approaches often come from mixing intuitions about which classifiers are worth considering, decomposition in to subproblems, and experimental model selection. There are two things you need to consider before choosing a classifier: performance, i.e. generalization quality, how well will the model understand the relationship between features and labels, note that there for different use cases you might want to measure the quality using different measures, we'll talk about the measures in a moment efficiency, i.e. how fast the classifier will perform, does it scale, is it usable in your problem based on number of labels, samples or label combinations There are two ways to make the choice: - intuition based on asymptotic performance and results from empirical studies - data-driven model selection using cross-validated parameter search Let's load up a data set to see have some thing to work on first. End of explanation y_train.shape, y_test.shape Explanation: Usually classifier's performance depends on three elements: number of samples number of labels number of unique label classes number of features We can obtain the first two from the shape of our output space matrices: End of explanation import numpy as np np.unique(y_train.rows).shape, np.unique(y_test.rows).shape Explanation: We can use numpy and the list of rows with non-zero values in output matrices to get the number of unique label combinations. End of explanation X_train.shape[1] Explanation: Number of features can be found in the shape of the input matrix: End of explanation from skmultilearn.adapt import MLkNN classifier = MLkNN(k=3) prediction = classifier.fit(X_train, y_train).predict(X_test) import sklearn.metrics as metrics metrics.hamming_loss(y_test, prediction) Explanation: Intutions Generalization quality measures There are several ways to measure a classifier's generalization quality: Hamming loss measures how well the classifier predicts each of the labels, averaged over samples, then over labels accuracy score measures how well the classifier predicts label combinations, averaged over samples jaccard similarity measures the proportion of predicted labels for a sample to its correct assignment, averaged over samples precision measures how many samples with , recall measures how many samples , F1 score measures a weighted average of precision and recall, where both have the same impact on the score These measures are conveniently provided by sklearn: End of explanation from skmultilearn.adapt import MLkNN from sklearn.model_selection import GridSearchCV parameters = {'k': range(1,3), 's': [0.5, 0.7, 1.0]} clf = GridSearchCV(MLkNN(), parameters, scoring='f1_macro') clf.fit(X_train, y_train) print (clf.best_params_, clf.best_score_) Explanation: Performance Scikit-multilearn provides 11 classifiers that allow a strong variety of classification scenarios through label partitioning and ensemble classification, let's look at the important factors influencing performance. 
$ g(x) $ denotes the performance of the base classifier in some of the classifiers. <dl> <dt>[BRkNNaClassifier](api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNaClassifier), [BRkNNbClassifier](api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNbClassifier)</dt> <dd> **Parameter estimation needed**: Yes, 1 parameter **Complexity**: ``O(n_{labels} * n_{samples} * n_{features} * k)`` BRkNN classifiers train a k Nearest Neighbor per label and use infer label assignment in one of the two variants. **Strong sides**: - takes some label relations into account while estimating single-label classifers - works when distance between samples is a good predictor for label assignment. Often used in biosciences. **Weak sides**: - trains a classifier per label - less suitable for large label space - requires parameter estimation. </dd> <dt>[MLTSVN](api/skmultilearn.adapt.mltsvn.html)</dt> <dd> **Parameter estimation needed**: Yes, 2 parameters **Complexity**: ``O((n_{samples} * n_{features} + n_{labels}) * k)`` MLkNN builds uses k-NearestNeighbors find nearest examples to a test class and uses Bayesian inference to select assigned labels. **Strong sides**: - estimates one multi-label SVM subclassifier without any one-vs-all or one-vs-rest comparisons, O(1) classifiers instead of O(l^2). - works when distance between samples is a good predictor for label assignment **Weak sides**: - requires parameter estimation </dd> <dt>[MLkNN](api/skmultilearn.adapt.mlknn.html#multilabel-k-nearest-neighbours)</dt> <dd> **Parameter estimation needed**: Yes, 2 parameters **Complexity**: ``O((n_{samples} * n_{features} + n_{labels}) * k)`` MLkNN builds uses k-NearestNeighbors find nearest examples to a test class and uses Bayesian inference to select assigned labels. **Strong sides**: - estimates one multi-class subclassifier - works when distance between samples is a good predictor for label assignment - often used in biosciences. **Weak sides**: - requires parameter estimation </dd> <dt>[MLARAM](api/skmultilearn.adapt.mlaram.html)</dt> <dd> **Parameter estimation needed**: Yes, 2 parameters **Complexity**: ``O(n_{samples})`` An ART classifier which uses clustering of learned prototypes into large clusters improve performance. **Strong sides**: - linear in number of samples, scales well **Weak sides**: - requires parameter estimation - ART techniques have had generalization limits in the past </dd> <dt>[BinaryRelevance](api/skmultilearn.problem_transform.br.html#skmultilearn.problem_transform.BinaryRelevance)</dt> <dd> **Parameter estimation needed**: Only for base classifier **Complexity**: ``O(n_{labels} * base_single_class_classifier_complexity)`` Transforms a multi-label classification problem with L labels into L single-label separate binary classification problems. **Strong sides**: - estimates single-label classifiers - can generalize beyond avialable label combinations **Weak sides**: - not suitable for large number of labels - ignores label relations </dd> <dt>[ClassifierChain](api/skmultilearn.problem_transform.cc.html#skmultilearn.problem_transform.ClassifierChain)</dt> <dd> **Parameter estimation needed**: Yes, 1 + parameters for base classifier **Complexity**: ``O(n_{labels} * base_single_class_classifier_complexity)`` Transforms multi-label problem to a multi-class problem where each label combination is a separate class. 
**Strong sides**: - estimates single-label classifiers - can generalize beyond avialable label combinations - takes label relations into account **Weak sides**: - not suitable for large number of labels - quality strongly depends on the label ordering in chain. </dd> <dt>[LabelPowerset](api/skmultilearn.problem_transform.lp.html#skmultilearn.problem_transform.LabelPowerset)</dt> <dd> **Parameter estimation needed**: Only for base classifier **Complexity**: ``O(base_multi_class_classifier_complexity(n_classes = n_label_combinations))`` Transforms multi-label problem to a multi-class problem where each label combination is a separate class and uses a multi-class classifier to solve the problem. **Strong sides**: - estimates label dependencies, with only one classifier - often best solution for subset accuracy if training data contains all relevant label combinations **Weak sides**: - requires all label combinations predictable by the classifier to be present in the training data - very prone to underfitting with large label spaces </dd> <dt>[RakelD](api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelD)</dt> <dd> **Parameter estimation needed**: Yes, 1 + base classifier's parameters **Complexity**: ``O(n_{partitions} * base_multi_class_classifier_complexity(n_classes = n_label_combinations_per_partition))`` Randomly partitions label space and trains a Label Powerset classifier per partition with a base multi-class classifier. **Strong sides**: - may use less classifiers than Binary Relevance and still generalize label relations while not underfitting like LabelPowerset **Weak sides**: - using random approach is not very probable to draw an optimal label space division </dd> <dt>[RakelO](api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelO)</dt> <dd> **Parameter estimation needed**: Yes, 2 + base classifier's parameters **Complexity**: ``O(n_{partitions} * base_multi_class_classifier_complexity(n_classes = n_label_combinations_per_cluster))`` Randomly draw label subspaces (possibly overlapping) and trains a Label Powerset classifier per partition with a base multi-class classifier, labels are assigned based on voting. **Strong sides**: - may provide better results with overlapping models **Weak sides**: - takes large number of classifiers to generate improvement, not scalable - random subspaces may not be optimal </dd> <dt>[LabelSpacePartitioningClassifier](api/skmultilearn.ensemble.partition.html#skmultilearn.ensemble.LabelSpacePartitioningClassifier)</dt> <dd> **Parameter estimation needed**: Only base classifier **Complexity**: ``O(n_{partitions} * base_classifier_complexity(n_classes = n_label_combinations_per_partition))`` Uses clustering methods to divide the label space into subspaces and trains a base classifier per partition with a base multi-class classifier. 
**Strong sides**: - accomodates to different types of problems - infers when to divide into subproblems or not and decide when to use less classifiers than Binary Relevance - scalable to data sets with large numbers of labels - generalizes label relations well while not underfitting like LabelPowerset - does not require parameter estimation **Weak sides**: - requires label relationships present in training data to be representable of the problem - partitioning may prevent certain label combinations from being correctly classified, depends on base classifier </dd> <dt>[MajorityVotingClassifier](api/skmultilearn.ensemble.voting.html#skmultilearn.ensemble.MajorityVotingClassifier)</dt> <dd> **Parameter estimation needed**: Only base classifier **Complexity**: ``O(n_{clusters} * base_classifier_complexity(n_classes = n_label_combinations_per_cluster))`` Uses clustering methods to divide the label space into subspaces (possibly overlapping) and trains a base classifier per partition with a base multi-class classifier, labels are assigned based on voting. **Strong sides**: - accomodates to different types of problems - infers when to divide into subproblems or not and decide when to use less classifiers than Binary Relevance - scalable to data sets with large numbers of labels - generalizes label relations well while not underfitting like LabelPowerset - does not require parameter estimation **Weak sides**: - requires label relationships present in training data to be representable of the problem </dd> <dt>[EmbeddingClassifier](api/skmultilearn.embedding.partition.html#skmultilearn.ensemble.LabelSpacePartitioningClassifier)</dt> <dd> **Parameter estimation needed**: Only for embedder **Complexity**: depends on the selection of embedder, regressor and classifier Embedds the label space, trains a regressor (or many) for unseen samples to predict their embeddings, and a classifier to correct the regression error **Strong sides**: - improves discriminability and joint label probability distributions - good results with low-complexity linear embeddings and weak regressors/classifiers - **Weak sides**: - requires some parameter estimation while rule-of-thumb ideas exist in papers </dd> </dl> Data-driven model selection Scikit-multilearn allows estimating parameters to select best models for multi-label classification using scikit-learn's model selection GridSearchCV API. In the simplest version it can look for the best parameter of a scikit-multilearn's classifier, which we'll show on the example case of estimating parameters for MLkNN, and in the more complicated cases of problem transformation methods it can estimate both the method's hyper parameters and the base classifiers parameter. Estimating hyper-parameter k for MLkNN In the case of estimating the hyperparameter of a multi-label classifier, we first import the relevant classifier and scikit-learn's GridSearchCV class. Then we define the values of parameters we want to evaluate. We are interested in which combination of k - the number of neighbours, s - the smoothing parameter works best. We also need to select a measure which we want to optimize - we've chosen the F1 macro score. After selecting the parameters we intialize and _run the cross validation grid search and print the best hyper parameters. 
End of explanation from skmultilearn.problem_transform import ClassifierChain, LabelPowerset from sklearn.model_selection import GridSearchCV from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from skmultilearn.cluster import NetworkXLabelGraphClusterer from skmultilearn.cluster import LabelCooccurrenceGraphBuilder from skmultilearn.ensemble import LabelSpacePartitioningClassifier from sklearn.svm import SVC parameters = { 'classifier': [LabelPowerset(), ClassifierChain()], 'classifier__classifier': [RandomForestClassifier()], 'classifier__classifier__n_estimators': [10, 20, 50], 'clusterer' : [ NetworkXLabelGraphClusterer(LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False), 'louvain'), NetworkXLabelGraphClusterer(LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False), 'lpa') ] } clf = GridSearchCV(LabelSpacePartitioningClassifier(), parameters, scoring = 'f1_macro') clf.fit(X_train, y_train) print (clf.best_params_, clf.best_score_) Explanation: These values can be then used directly with the classifier. Estimating hyper-parameter k for embedded classifiers In problem transformation classifiers we often need to estimate not only a hyper parameter, but also the parameter of the base classifier, and also - maybe even the problem transformation method. Let's take a look at this on a three-layer construction of ensemble of problem transformation classifiers using label space partitioning, the parameters include: classifier: which takes a parameter - a classifier for transforming multi-label classification problem to a single-label classification, we will decide between the Label Powerset and Classifier Chains classifier__classifier: which is the base classifier for the transformation strategy, we will use random forests here classifier__classifier__n_estimators: the number of trees to be used in the forest, will be passed to the random forest object clusterer: a label space partitioning class, we will decide between two approaches provided by the NetworkX library. End of explanation
2,406
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: how to save a trained machine learning or deep learning model
Python Code:: model.save('filename')
2,407
Given the following text description, write Python code to implement the functionality described below step by step Description: Exponentials, Radicals, and Logs Up to this point, all of our equations have included standard arithmetic operations, such as division, multiplication, addition, and subtraction. Many real-world calculations involve exponential values in which numbers are raised by a specific power. Exponentials A simple case of of using an exponential is squaring a number; in other words, multipying a number by itself. For example, 2 squared is 2 times 2, which is 4. This is written like this Step1: Multiplying a number by itself twice or three times to calculate the square or cube of a number is a common operation, but you can raise a number by any exponential power. For example, the following notation shows 4 to the power of 7 (or 4 x 4 x 4 x 4 x 4 x 4 x 4), which has the value Step2: The code used in Python to calculate roots other than the square root reveals something about the relationship between roots and exponentials. The exponential root of a number is the same as that number raised to the power of 1 divided by the exponential. For example, consider the following statement Step3: Logarithms Another consideration for exponential values is the requirement occassionally to determine the exponent for a given number and base. In other words, how many times do I need to multiply a base number by itself to get the given result. This kind of calculation is known as the logarithm. For example, consider the following expression Step4: The final thing you need to know about exponentials and logarithms is that there are some special logarithms Step5: Solving Equations with Exponentials OK, so now that you have a basic understanding of exponentials, roots, and logarithms; let's take a look at some equations that involve exponential calculations. Let's start with what might at first glance look like a complicated example, but don't worry - we'll solve it step-by-step and learn a few tricks along the way Step6: Note that the line is curved. This is symptomatic of an exponential equation Step7: Note that when the exponential is a negative number, Python reports the result as 0. Actually, it's a very small fractional number, but because the base is positive the exponential number will always positive. Also, note the rate at which y increases as x increases - exponential growth can be be pretty dramatic. So what's the practical application of this? Well, let's suppose you deposit $100 in a bank account that earns 5&#37; interest per year. What would the balance of the account be in twenty years, assuming you don't deposit or withdraw any additional funds? To work this out, you could calculate the balance for each year
Python Code: x = 5**3 print(x) Explanation: Exponentials, Radicals, and Logs Up to this point, all of our equations have included standard arithmetic operations, such as division, multiplication, addition, and subtraction. Many real-world calculations involve exponential values in which numbers are raised by a specific power. Exponentials A simple case of of using an exponential is squaring a number; in other words, multipying a number by itself. For example, 2 squared is 2 times 2, which is 4. This is written like this: \begin{equation}2^{2} = 2 \cdot 2 = 4\end{equation} Similarly, 2 cubed is 2 times 2 times 2 (which is of course 8): \begin{equation}2^{3} = 2 \cdot 2 \cdot 2 = 8\end{equation} In Python, you use the &ast;&ast; operator, like this example in which x is assigned the value of 5 raised to the power of 3 (in other words, 5 x 5 x 5, or 5-cubed): End of explanation import math # Calculate square root of 25 x = math.sqrt(25) print (x) # Calculate cube root of 64 cr = round(64 ** (1. / 3)) print(cr) Explanation: Multiplying a number by itself twice or three times to calculate the square or cube of a number is a common operation, but you can raise a number by any exponential power. For example, the following notation shows 4 to the power of 7 (or 4 x 4 x 4 x 4 x 4 x 4 x 4), which has the value: \begin{equation}4^{7} = 16384 \end{equation} In mathematical terminology, 4 is the base, and 7 is the power or exponent in this expression. Radicals (Roots) While it's common to need to calculate the solution for a given base and exponential, sometimes you'll need to calculate one or other of the elements themselves. For example, consider the following expression: \begin{equation}?^{2} = 9 \end{equation} This expression is asking, given a number (9) and an exponent (2), what's the base? In other words, which number multipled by itself results in 9? This type of operation is referred to as calculating the root, and in this particular case it's the square root (the base for a specified number given the exponential 2). In this case, the answer is 3, because 3 x 3 = 9. We show this with a &radic; symbol, like this: \begin{equation}\sqrt{9} = 3 \end{equation} Other common roots include the cube root (the base for a specified number given the exponential 3). For example, the cube root of 64 is 4 (because 4 x 4 x 4 = 64). To show that this is the cube root, we include the exponent 3 in the &radic; symbol, like this: \begin{equation}\sqrt[3]{64} = 4 \end{equation} We can calculate any root of any non-negative number, indicating the exponent in the &radic; symbol. The math package in Python includes a sqrt function that calculates the square root of a number. To calculate other roots, you need to reverse the exponential calculation by raising the given number to the power of 1 divided by the given exponent: End of explanation import math print (9**0.5) print (math.sqrt(9)) Explanation: The code used in Python to calculate roots other than the square root reveals something about the relationship between roots and exponentials. The exponential root of a number is the same as that number raised to the power of 1 divided by the exponential. For example, consider the following statement: \begin{equation} 8^{\frac{1}{3}} = \sqrt[3]{8} = 2 \end{equation} Note that a number to the power of 1/3 is the same as the cube root of that number. 
Based on the same arithmetic, a number to the power of 1/2 is the same as the square root of the number: \begin{equation} 9^{\frac{1}{2}} = \sqrt{9} = 3 \end{equation} You can see this for yourself with the following Python code: End of explanation import math x = math.log(16, 4) print(x) Explanation: Logarithms Another consideration for exponential values is the requirement occassionally to determine the exponent for a given number and base. In other words, how many times do I need to multiply a base number by itself to get the given result. This kind of calculation is known as the logarithm. For example, consider the following expression: \begin{equation}4^{?} = 16 \end{equation} In other words, to what power must you raise 4 to produce the result 16? The answer to this is 2, because 4 x 4 (or 4 to the power of 2) = 16. The notation looks like this: \begin{equation}log_{4}(16) = 2 \end{equation} In Python, you can calculate the logarithm of a number using the log function in the math package, indicating the number and the base: End of explanation import math # Natural log of 29 print (math.log(29)) # Common log of 100 print(math.log10(100)) Explanation: The final thing you need to know about exponentials and logarithms is that there are some special logarithms: The common logarithm of a number is its exponential for the base 10. You'll occassionally see this written using the usual log notation with the base omitted: \begin{equation}log(1000) = 3 \end{equation} Another special logarithm is something called the natural log, which is a exponential of a number for base e, where e is a constant with the approximate value 2.718. This number occurs naturally in a lot of scenarios, and you'll see it often as you work with data in many analytical contexts. For the time being, just be aware that the natural log is sometimes written as ln: \begin{equation}log_{e}(64) = ln(64) = 4.1589 \end{equation} The math.log function in Python returns the natural log (base e) when no base is specified. Note that this can be confusing, as the mathematical notation log with no base usually refers to the common log (base 10). To return the common log in Python, use the math.log10 function: End of explanation import pandas as pd # Create a dataframe with an x column containing values from -10 to 10 df = pd.DataFrame ({'x': range(-10, 11)}) # Add a y column by applying the slope-intercept equation to x df['y'] = 3*df['x']**3 #Display the dataframe print(df) # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="magenta") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() Explanation: Solving Equations with Exponentials OK, so now that you have a basic understanding of exponentials, roots, and logarithms; let's take a look at some equations that involve exponential calculations. Let's start with what might at first glance look like a complicated example, but don't worry - we'll solve it step-by-step and learn a few tricks along the way: \begin{equation}2y = 2x^{4} ( \frac{x^{2} + 2x^{2}}{x^{3}} ) \end{equation} First, let's deal with the fraction on the right side. The numerator of this fraction is x<sup>2</sup> + 2x<sup>2</sup> - so we're adding two exponential terms. When the terms you're adding (or subtracting) have the same exponential, you can simply add (or subtract) the coefficients. 
In this case, x<sup>2</sup> is the same as 1x<sup>2</sup>, which when added to 2x<sup>2</sup> gives us the result 3x<sup>2</sup>, so our equation now looks like this: \begin{equation}2y = 2x^{4} ( \frac{3x^{2}}{x^{3}} ) \end{equation} Now that we've condolidated the numerator, let's simplify the entire fraction by dividing the numerator by the denominator. When you divide exponential terms with the same variable, you simply divide the coefficients as you usually would and subtract the exponential of the denominator from the exponential of the numerator. In this case, we're dividing 3x<sup>2</sup> by 1x<sup>3</sup>: The coefficient 3 divided by 1 is 3, and the exponential 2 minus 3 is -1, so the result is 3x<sup>-1</sup>, making our equation: \begin{equation}2y = 2x^{4} ( 3x^{-1} ) \end{equation} So now we've got rid of the fraction on the right side, let's deal with the remaining multiplication. We need to multiply 3x<sup>-1</sup> by 2x<sup>4</sup>. Multiplication, is the opposite of division, so this time we'll multipy the coefficients and add the exponentials: 3 multiplied by 2 is 6, and -1 + 4 is 3, so the result is 6x<sup>3</sup>: \begin{equation}2y = 6x^{3} \end{equation} We're in the home stretch now, we just need to isolate y on the left side, and we can do that by dividing both sides by 2. Note that we're not dividing by an exponential, we simply need to divide the whole 6x<sup>3</sup> term by two; and half of 6 times x<sup>3</sup> is just 3 times x<sup>3</sup>: \begin{equation}y = 3x^{3} \end{equation} Now we have a solution that defines y in terms of x. We can use Python to plot the line created by this equation for a set of arbitrary x and y values: End of explanation import pandas as pd # Create a dataframe with an x column containing values from -10 to 10 df = pd.DataFrame ({'x': range(-10, 11)}) # Add a y column by applying the slope-intercept equation to x df['y'] = 2.0**df['x'] #Display the dataframe print(df) # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="magenta") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() Explanation: Note that the line is curved. This is symptomatic of an exponential equation: as values on one axis increase or decrease, the values on the other axis scale exponentially rather than linearly. Let's look at an example in which x is the exponential, not the base: \begin{equation}y = 2^{x} \end{equation} We can still plot this as a line: End of explanation import pandas as pd # Create a dataframe with 20 years df = pd.DataFrame ({'Year': range(1, 21)}) # Calculate the balance for each year based on the exponential growth from interest df['Balance'] = 100 * (1.05**df['Year']) #Display the dataframe print(df) # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.Year, df.Balance, color="green") plt.xlabel('Year') plt.ylabel('Balance') plt.show() Explanation: Note that when the exponential is a negative number, Python reports the result as 0. Actually, it's a very small fractional number, but because the base is positive the exponential number will always positive. Also, note the rate at which y increases as x increases - exponential growth can be be pretty dramatic. So what's the practical application of this? Well, let's suppose you deposit $100 in a bank account that earns 5&#37; interest per year. What would the balance of the account be in twenty years, assuming you don't deposit or withdraw any additional funds? 
To work this out, you could calculate the balance for each year: After the first year, the balance will be the initial deposit ($100) plus 5&#37; of that amount: \begin{equation}y1 = 100 + (100 \cdot 0.05) \end{equation} Another way of saying this is: \begin{equation}y1 = 100 \cdot 1.05 \end{equation} At the end of year two, the balance will be the year one balance plus 5&#37;: \begin{equation}y2 = 100 \cdot 1.05 \cdot 1.05 \end{equation} Note that the interest for year two, is the interest for year one multiplied by itself - in other words, squared. So another way of saying this is: \begin{equation}y2 = 100 \cdot 1.05^{2} \end{equation} It turns out, if we just use the year as the exponent, we can easily calculate the growth after twenty years like this: \begin{equation}y20 = 100 \cdot 1.05^{20} \end{equation} Let's apply this logic in Python to see how the account balance would grow over twenty years: End of explanation
2,408
Given the following text description, write Python code to implement the functionality described below step by step Description: Let's look at a traditional logistic regression model for some mildly complicated data. Step1: Another pair of metrics Step2: F1 is the harmonic mean of precision and recall
Python Code: # synthetic data X, y = make_classification(n_samples=10000, n_features=50, n_informative=12, n_redundant=2, n_classes=2, random_state=0) # statsmodels uses logit, not logistic lm = sm.Logit(y, X).fit() results = lm.summary() print(results) # hard problem lm = sm.Logit(y, X).fit(maxiter=1000) results = lm.summary() print(results) Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0) # 'null' prediction print(np.sum(yte) / len(yte)) null_preds = np.ones(len(yte)) print('{:.3f}'.format(accuracy_score(yte, null_preds))) # linear model - logistic regression lm = LogisticRegression().fit(Xtr, ytr) lm.coef_ figsize(12, 6) plt.scatter(range(len(lm.coef_[0])), lm.coef_[0]) plt.xlabel('predictor') plt.ylabel('coefficient'); preds = lm.predict(Xte) prob_preds = lm.predict_proba(Xte) print(preds[:5]) print(prob_preds[:5]) accuracy_score(yte, preds) import pandas as pd pd.DataFrame(confusion_matrix(yte, preds)).apply(lambda x: x / sum(x), axis=1) def plot_roc(actual, predicted): fpr, tpr, thr = roc_curve(actual, predicted) roc_auc = auc(fpr, tpr) # exercise: add code to color curve by threshold value figsize(12, 8) plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC (Class 1)') plt.legend(loc="lower right"); return plot_roc(yte, null_preds) plot_roc(yte, prob_preds[:,1]) # more synthetic data Xub, yub = make_classification(n_samples=10000, n_features=50, n_informative=12, n_redundant=2, n_classes=2, weights=(0.99, 0.01), random_state=0) np.sum(yub) Xtrub, Xteub, ytrub, yteub = train_test_split(Xub, yub, test_size=0.2, random_state=0) lm = LogisticRegression().fit(Xtrub, ytrub) accuracy_score(yteub, lm.predict(Xteub)) plot_roc(yteub, lm.predict_proba(Xteub)[:,1]) Explanation: Let's look at a traditional logistic regression model for some mildly complicated data. End of explanation # using data from balanced classes prec, rec, thresh = precision_recall_curve(yte, prob_preds[:,1]) figsize(12, 6) plt.plot(rec, prec, label='AUC={0:0.2f}'.format(average_precision_score(yte, prob_preds[:,1]))) plt.title('Precision-Recall') plt.xlabel('Recall') plt.ylabel('Precision') plt.legend(loc='best'); # classification_report print(classification_report(yte, preds)) Explanation: Another pair of metrics: Precision and Recall: These are sometimes also plotted against each other: End of explanation # if time - l1 vs l2 penalty lm = LogisticRegression(penalty='l1').fit(Xtr, ytr) plt.scatter(range(len(lm.coef_[0])), lm.coef_[0]); Explanation: F1 is the harmonic mean of precision and recall: $$F1 = \frac{2\cdot precision\cdot recall}{precision + recall}.$$ End of explanation
2,409
Given the following text description, write Python code to implement the functionality described below step by step Description: How does the current Game2048 class work? In this short notebook, we introduce how the class method should be called in our experiment code. Step1: A game round demo Step2: Let's check whether Game2048.moves_available() works well We expect that moves is [0, 1, 2, 3] here. Step3: We expect g.moves_available() return [1, 2, 3].
Python Code: from game import Game2048 Explanation: How does the current Game2048 class work? In this short notebook, we introduce how the class method should be called in our experiment code. End of explanation g = Game2048(game_mode=False) # False means AI mode g.print_game() g.active_player moves = g.moves_available() moves # We can perform an action for the "Agent" g.perform_move(moves[0]) g.print_game() g.active_player # switched # Computer's turn, no need for a move input g.perform_move() # Return whether the board changed g.print_game() # Again, we can perform an action for the "Agent" moves = g.moves_available() g.perform_move(moves[0]) g.print_game() g.active_player g.perform_move() g.print_game() g.active_player Explanation: A game round demo End of explanation g = Game2048('', game_mode=False) g.board = [ [2, 0, 0, 0], [2, 0, 0, 0], [2, 0, 0, 0], [2, 0, 0, 0] ] g.print_game() g.active_player Explanation: Let's check whether Game2048.moves_available() works well We expect that moves is [0, 1, 2, 3] here. End of explanation g.moves_available() Explanation: We expect g.moves_available() return [1, 2, 3]. End of explanation
2,410
Given the following text description, write Python code to implement the functionality described below step by step Description: En pandas tenemos varias posibilidades para leer datos y similares posibilidades para escribirlos. Leamos unos datos de viento En la carpeta Datos tenemos un fichero que se llama mast.txt con el siguiente formato Step1: <div class="alert alert-danger"> <p>Dependiendo de tu sistema operativo puede que las fechas sean las correctas o no. Ahora no te preocupes de ellos. Más adelante lidiaremos con ello</p> </div> Step2: Con unas pocas líneas de código hemos conseguido leer un fichero de datos separado por espacios, hemos conseguido leer dos columnas y transformarlas a fechas (de forma mágica), hemos conseguido indicar que esas fechas se consideren el índice (solo puede haber un registro en cada momento),... ¡¡Warning!! índices repetidos <br> <div class="alert alert-danger"> <h3>Nota Step3: ¡¡Warning!! cuando hagamos conversión de fechas desde strings <br> <div class="alert alert-danger"> <h3>Nota Step6: Para evitar lo anterior podemos crear nuestro propio parser de fechas a, por ejemplo, pd.read_csv Step7: Vamos a salvar el resultado en formato csv Step8: ... o en formato json Step9: ... o en una tabla HTML Step10: ... o en formato xlsx Seguramente debáis instalar xlsxwriter, openpyxl, wlrd/xlwt,...
Python Code: # primero hacemos los imports de turno import os import datetime as dt import pandas as pd import numpy as np import matplotlib.pyplot as plt from IPython.display import display np.random.seed(19760812) %matplotlib inline ipath = os.path.join('Datos', 'mast.txt') wind = pd.read_csv(ipath) wind.head(3) wind = pd.read_csv(ipath, sep = "\s*") # Cuando trabajamos con texto separado por espacios podemos usar la keyword delim_whitespace: # wind = pd.read_csv(path, delim_whitespace = True) wind.head(3) cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir', 'x1', 'x2', 'x3', 'x4', 'x5', 'wspd_std'] wind = pd.read_csv(ipath, sep = "\s*", names = cols) wind.head(3) cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir', 'x1', 'x2', 'x3', 'x4', 'x5', 'wspd_std'] wind = pd.read_csv(ipath, sep = "\s*", names = cols, parse_dates = [[0, 1]]) wind.head(3) Explanation: En pandas tenemos varias posibilidades para leer datos y similares posibilidades para escribirlos. Leamos unos datos de viento En la carpeta Datos tenemos un fichero que se llama mast.txt con el siguiente formato: 130904 0000 2.21 2.58 113.5 999.99 999.99 99.99 9999.99 9999.99 0.11 130904 0010 1.69 2.31 99.9 999.99 999.99 99.99 9999.99 9999.99 0.35 130904 0020 1.28 1.50 96.0 999.99 999.99 99.99 9999.99 9999.99 0.08 130904 0030 1.94 2.39 99.2 999.99 999.99 99.99 9999.99 9999.99 0.26 130904 0040 2.17 2.67 108.4 999.99 999.99 99.99 9999.99 9999.99 0.23 130904 0050 2.25 2.89 105.0 999.99 999.99 99.99 9999.99 9999.99 0.35 ... Lo podemos leer de la siguiente forma: End of explanation cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir', 'x1', 'x2', 'x3', 'x4', 'x5', 'wspd_std'] wind = pd.read_csv(ipath, sep = "\s*", names = cols, parse_dates = [[0, 1]], index_col = 0) wind.head(3) cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir', 'x1', 'x2', 'x3', 'x4', 'x5', 'wspd_std'] wind = pd.read_csv(ipath, sep = "\s*", names = cols, parse_dates = {'timestamp': [0, 1]}, index_col = 0) wind.head(3) # The previous code is equivalent to cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir', 'x1', 'x2', 'x3', 'x4', 'x5', 'wspd_std'] wind = pd.read_csv(ipath, sep = "\s*", names = cols, parse_dates = [[0, 1]], index_col = 0) wind.index.name = 'Timestamp' wind.head(3) # En la anterior celda de código podemos cambiar los 0's y 1's de # parse_dates e index_col por los nombres de las columnas # Probadlo!!! help(pd.read_csv) Explanation: <div class="alert alert-danger"> <p>Dependiendo de tu sistema operativo puede que las fechas sean las correctas o no. Ahora no te preocupes de ellos. Más adelante lidiaremos con ello</p> </div> End of explanation tmp = pd.DataFrame([1,10,100, 1000], index = [1,1,2,2], columns = ['values']) tmp print(tmp['values'][1], tmp['values'][2], sep = '\n') Explanation: Con unas pocas líneas de código hemos conseguido leer un fichero de datos separado por espacios, hemos conseguido leer dos columnas y transformarlas a fechas (de forma mágica), hemos conseguido indicar que esas fechas se consideren el índice (solo puede haber un registro en cada momento),... ¡¡Warning!! índices repetidos <br> <div class="alert alert-danger"> <h3>Nota:</h3> <p>Nada impide tener dos índices repetidos. 
Tened cuidado con esto ya que puede ser una fuente de errores.</p> </div> End of explanation # Ejemplo de error en fechas: index = [ '01/01/2015 00:00', '02/01/2015 00:00', '03/01/2015 00:00', '04/01/2015 00:00', '05/01/2015 00:00', '06/01/2015 00:00', '07/01/2015 00:00', '08/01/2015 00:00', '09/01/2015 00:00', '10/01/2015 00:00', '11/01/2015 00:00', '12/01/2015 00:00', '13/01/2015 00:00', '14/01/2015 00:00', '15/01/2015 00:00' ] values = np.random.randn(len(index)) tmp = pd.DataFrame(values, index = pd.to_datetime(index), columns = ['col1']) display(tmp) tmp.plot.line(figsize = (12, 6)) Explanation: ¡¡Warning!! cuando hagamos conversión de fechas desde strings <br> <div class="alert alert-danger"> <h3>Nota:</h3> <p>Si dejáis que pandas *parsee* las fechas escribid tests para ello pues puede haber errores en la conversión <b>automágica</b>.</p> </div> End of explanation import datetime as dt import io def dateparser(date): date, time = date.split() DD, MM, YY = date.split('/') hh, mm = time.split(':') return dt.datetime(int(YY), int(MM), int(DD), int(hh), int(mm)) virtual_file = io.StringIO(01/01/2015 00:00, 1 02/01/2015 00:00, 2 03/01/2015 00:00, 3 04/01/2015 00:00, 4 05/01/2015 00:00, 5 06/01/2015 00:00, 6 07/01/2015 00:00, 7 08/01/2015 00:00, 8 09/01/2015 00:00, 9 10/01/2015 00:00, 10 11/01/2015 00:00, 11 12/01/2015 00:00, 12 13/01/2015 00:00, 13 14/01/2015 00:00, 14 15/01/2015 00:00, 15 ) tmp_wrong = pd.read_csv(virtual_file, parse_dates = [0], index_col = 0, names = ['Date', 'values']) virtual_file = io.StringIO(01/01/2015 00:00, 1 02/01/2015 00:00, 2 03/01/2015 00:00, 3 04/01/2015 00:00, 4 05/01/2015 00:00, 5 06/01/2015 00:00, 6 07/01/2015 00:00, 7 08/01/2015 00:00, 8 09/01/2015 00:00, 9 10/01/2015 00:00, 10 11/01/2015 00:00, 11 12/01/2015 00:00, 12 13/01/2015 00:00, 13 14/01/2015 00:00, 14 15/01/2015 00:00, 15 ) tmp_right = pd.read_csv(virtual_file, parse_dates = True, index_col = 0, names = ['Date', 'values'], date_parser = dateparser) display(tmp_wrong) display(tmp_right) Explanation: Para evitar lo anterior podemos crear nuestro propio parser de fechas a, por ejemplo, pd.read_csv: End of explanation opath = os.path.join('Datos', 'mast_2.csv') #wind.to_csv(opath) wind.iloc[0:100].to_csv(opath) Explanation: Vamos a salvar el resultado en formato csv End of explanation #wind.to_json(opath.replace('csv', 'json')) wind.iloc[0:100].to_json(opath.replace('csv', 'json')) Explanation: ... o en formato json End of explanation # Si son muchos datos no os lo recomiendo, es lento #wind.to_html(opath.replace('csv', 'html')) wind.iloc[0:100].to_html(opath.replace('csv', 'html')) Explanation: ... o en una tabla HTML End of explanation writer = pd.ExcelWriter(opath.replace('csv', 'xlsx')) #wind.to_excel(writer, sheet_name= "Mi hoja 1") wind.iloc[0:100].to_excel(writer, sheet_name= "Mi hoja 1") writer.save() # Ahora que tenemos los ficheros en formato json, html, xlsx,..., podéis practicar a abrirlos con las # funciones pd.read_* Explanation: ... o en formato xlsx Seguramente debáis instalar xlsxwriter, openpyxl, wlrd/xlwt,... End of explanation
2,411
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Python for System Administrator Author Step2: Basic Arithmetic Step3: Variable assignment Step4: Formatting numbers Step5: Formatting Step6: Formatting with names
Python Code: # Importing_new_features # ..is easy. Features are collected # in packages or modules. Just import telnetlib # to use a telnetlib.Telnet # client # We can even import single classes # from a module, like from telnetlib import Telnet # And read the module or class docs help(telnetlib) help(Telnet) # you can print with the print() function print("Hello world!") # concatenate string with a + sign # and using hex notation print("Hello" + " " + "World\x21") print("Ciao") # prefixing a string with 'r' disables the # interpretation of the string content print('Hello' * 2 + r'World\x21') # the chr() function returns the corresponding # character of an integer. While \n and \t are # just the usual notation for linefeed and tab print(chr(72) + "ello\n\tWorld!") # triple-quoting allows multi-line strings # %s works like in the C printf() function # but operates on strings # ord() is just the inverse of chr() print(The answer is %s % ord('*')) Explanation: Python for System Administrator Author: [email protected] Introducing Python Python is an interpreted, object oriented language with a lot of built in features. This is a fast-track course for sysadmin with knowledge of languages like Perl, PHP, C and Java Agenda Importing features Getting help Printing Basic Arithmetic Variable assignment Formatting End of explanation # This is a comment, while a = 1 # is an integer variable b = 0x10 # is another integer in hex notation # c = 011 # ...another one in C-style oct on python 2... c = 0o11 # ...in python 2 and 3 # I can sum, multiply, and modulus print(a + b, 5 % 2) print(2 * c) Explanation: Basic Arithmetic End of explanation # variable_assignment # I can assign more than one variable on the same line a, b, c = 1, 2, 3 d, stringa_a, stringa_b = a + b, "pippo", "pluto" # ...swap them... (a, b) = (b, a) # but if right-side values are not defined, I get an exception e, f = c, e + d # We should respect reserved words and functions, like print, ord... print(("ord:\x20", ord)) ord = 4 ord('*') # ...ooops! del ord # fix it up! ord('*') # ...ooops! Explanation: Variable assignment End of explanation ## def formatting_numbers(): # bin() and hex() returns a string representation # of a number a, b1 = hex(10), bin(1) # while the format() function can be more flexible # 10 = 8ciphers + 2chars for the '0b' header binary_with_leading_zeroes = format(1, '#010b') # and reversible with b1 == int(binary_with_leading_zeroes, base=2) Explanation: Formatting numbers End of explanation #def new_formatting(): # The new str.format function just replaces # %s or %d with {}. s_a = "is a string " s_a += "that can {} extended".format("be") # Further formatting is done using ":", eg. # %.6s -> {:.6} # %3.2d -> {:3.2} s_a = "{} even with {:.6} formatting.\n".format(s_a, "positional") # Alignment identifiers are simpler: < left , ^ center, > right s_a = "Align {:>10}% python!".format(100) print(s_a) print("just prints a string") Explanation: Formatting End of explanation # you can name variables to get # a better formatting experience ;) fmt_a = "{name:<.3} {nick:^.8} {sn:>30}" print(fmt_a.format(name="-"*10, nick="*"*15, sn="-"*40)) print(fmt_a.format(name="Roberto", nick="ioggstream", sn="Polli")) Explanation: Formatting with names End of explanation
2,412
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: EC-EARTH-CONSORTIUM Source ID: SANDBOX-3 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:00 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
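Each cell above leaves the actual answer as a TODO placeholder. As a minimal, purely illustrative sketch of how a completed cell looks (reusing the photolysis-method property and one of the valid choices listed directly above; the particular choice is an assumption for demonstration, not a real model description), the pattern is simply to follow the fixed DOC.set_id line with a DOC.set_value call:

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Illustrative value only: pick whichever valid choice matches the documented model.
DOC.set_value("Offline (with clouds)")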
2,413
Given the following text description, write Python code to implement the functionality described below step by step Description: Python for Bioinformatics This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics Chapter 18 Step1: Listing 18.1 Step2: Listing 18.2
Python Code: !pip install biopython !curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2 !mkdir samples !tar xvfj samples.tar.bz2 -C samples Explanation: Python for Bioinformatics This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics Chapter 18: Calculating Melting Temperature from a Set of Primers Note: Before opening the file, this file should be accessible from this Jupyter notebook. In order to do so, the following commands will download these files from GitHub and extract them into a directory called samples. End of explanation from Bio.SeqUtils import MeltingTemp as MT PRIMER_FILE = 'samples/primers.txt' for line in open(PRIMER_FILE): # prm stores the primer, without 5'- and -3' prm = line[3:len(line)-4].replace(' ','') # .2f is used to print up to 2 decimals. print('{0},{1:.2f}'.format(prm, MT.Tm_staluc(prm))) Explanation: Listing 18.1: fromtxt.py: Primer Tm calculation End of explanation from Bio.SeqUtils import MeltingTemp as MT import xlwt PRIMER_FILE = 'samples/primers.txt' # w is the name of a newly created workbook. w = xlwt.Workbook() # ws is the name of a new sheet in this workbook. ws = w.add_sheet('Result') # These two lines write the titles of the columns. ws.write(0, 0, 'Primer Sequence') ws.write(0, 1, 'Tm') for index, line in enumerate(open(PRIMER_FILE)): # For each line in the input file, write the primer # sequence and the Tm prm = line[3:len(line)-4].replace(' ','') ws.write(index+1, 0, prm) ws.write(index+1, 1, '{0:.2f}'.format(MT.Tm_staluc(prm))) # Save the spreadsheet into a file. w.save('primerout.xls') Explanation: Listing 18.2: toexcel.py: Primer Tm calculation, Excel output End of explanation
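Two library notes on the listings above: Tm_staluc is deprecated in recent Biopython releases (MeltingTemp.Tm_NN is the suggested nearest-neighbour replacement), and xlwt writes only the legacy .xls format. Below is a minimal sketch of the same primer-to-spreadsheet loop using Tm_NN and openpyxl; the file paths are kept from the listings, but the adaptation itself is illustrative rather than the book's own code.

from Bio.SeqUtils import MeltingTemp as MT
from openpyxl import Workbook

PRIMER_FILE = 'samples/primers.txt'

wb = Workbook()
ws = wb.active
ws.title = 'Result'
# Header row, then one row per primer.
ws.append(['Primer Sequence', 'Tm'])
with open(PRIMER_FILE) as fh:
    for line in fh:
        # Same slicing as Listing 18.1: drop the 5'- and -3' decorations and spaces.
        prm = line[3:len(line) - 4].replace(' ', '')
        # Tm_NN is the nearest-neighbour replacement for the deprecated Tm_staluc.
        ws.append([prm, round(MT.Tm_NN(prm), 2)])
wb.save('primerout.xlsx')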
2,414
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: INM Source ID: INM-CM5-0 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:04 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
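# Illustrative example for numeric properties (hypothetical numbers, not the actual
# INM-CM5-0 values): FLOAT properties such as 2.6 (Ocean Specific Heat) and 2.7
# (Ocean Reference Density) take a bare Python number rather than a quoted string.
# For example, in the 2.6 cell a value of the order typical for seawater could be entered as
# DOC.set_value(3990.0)   # J/(kg K)
# and in the 2.7 cell a Boussinesq reference density as
# DOC.set_value(1025.0)   # kg/m3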
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT !
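# Illustrative example for BOOLEAN and optional properties (hypothetical values): BOOLEAN
# properties such as 6.6 (Is Adaptive Grid) above take a bare True or False rather than a
# quoted string, e.g.
# DOC.set_value(False)
# Properties marked "Is Required: FALSE" with Cardinality 0.1 or 0.N (e.g. 7.2 and 7.3)
# can be left unset if the information is not available.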
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
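# Illustrative example for the "Other: [Please specify]" choice (hypothetical text; the exact
# string format ES-DOC expects for an "Other" entry is an assumption, not confirmed by this
# template): when none of the listed options fits, the "Other" choice is selected and the
# bracketed part replaced with a short description, e.g. for 22.2 (Order) above:
# DOC.set_value("Other: fourth-order operator")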
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
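# Illustrative example for the free-text "Background" properties (hypothetical text, not the
# actual INM-CM5-0 setting): the description of 30.4 above asks for both the scheme and the
# value in m2/s in a single string, e.g.
# DOC.set_value("Constant background vertical diffusivity of 1.0e-5 m2/s")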
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
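# Note on conditional fields (an interpretation of the property descriptions, with hypothetical
# values): several groups follow a "Type" plus dependent-field pattern. For boundary layer
# mixing of momentum, 31.3 (Constant) only applies when 31.1 Type is "Constant value", while a
# turbulent closure fills 31.2 (Closure Order) instead, e.g.
# DOC.set_value(2.5)   # 31.2, if a Mellor-Yamada level 2.5 closure were used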
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
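For reference, a completed cell follows exactly the same DOC.set_id / DOC.set_value pattern as the TODO stubs above. The snippet below is only an illustrative sketch: the property id and the "Non-linear" choice are taken from the Valid Choices listed earlier in this section, and the value you actually record must describe the model being documented.
# Hypothetical example of filling in one of the ENUM properties above.
# "Non-linear" is one of the Valid Choices listed for this property;
# replace it with whichever choice applies to your model.
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
DOC.set_value("Non-linear")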
2,415
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: Introduction to the TensorFlow Models NLP library <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Import Tensorflow and other libraries Step3: BERT pretraining model BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding) introduced the method of pre-training language representations on a large text corpus and then using that model for downstream NLP tasks. In this section, we will learn how to build a model to pretrain BERT on the masked language modeling task and next sentence prediction task. For simplicity, we only show the minimum example and use dummy data. Build a BertPretrainer model wrapping TransformerEncoder The TransformerEncoder implements the Transformer-based encoder as described in BERT paper. It includes the embedding lookups and transformer layers, but not the masked language model or classification task networks. The BertPretrainer allows a user to pass in a transformer stack, and instantiates the masked language model and classification networks that are used to create the training objectives. Step4: Inspecting the encoder, we see it contains few embedding layers, stacked Transformer layers and are connected to three input layers Step5: Inspecting the bert_pretrainer, we see it wraps the encoder with additional MaskedLM and Classification heads. Step6: Compute loss Next, we can use lm_output and sentence_output to compute loss. Step7: With the loss, you can optimize the model. After training, we can save the weights of TransformerEncoder for the downstream fine-tuning tasks. Please see run_pretraining.py for the full example. Span labeling model Span labeling is the task to assign labels to a span of the text, for example, label a span of text as the answer of a given question. In this section, we will learn how to build a span labeling model. Again, we use dummy data for simplicity. Build a BertSpanLabeler wrapping TransformerEncoder BertSpanLabeler implements a simple single-span start-end predictor (that is, a model that predicts two values Step8: Inspecting the bert_span_labeler, we see it wraps the encoder with additional SpanLabeling that outputs start_position and end_postion. Step9: Compute loss With start_logits and end_logits, we can compute loss Step10: With the loss, you can optimize the model. Please see run_squad.py for the full example. Classification model In the last section, we show how to build a text classification model. Build a BertClassifier model wrapping TransformerEncoder BertClassifier implements a simple token classification model containing a single classification head using the TokenClassification network. Step11: Inspecting the bert_classifier, we see it wraps the encoder with additional Classification head. Step12: Compute loss With logits, we can compute loss
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation !pip install -q tf-nightly !pip install -q tf-models-nightly Explanation: Introduction to the TensorFlow Models NLP library <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/official_models/nlp/nlp_modeling_library_intro"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/official/colab/nlp/nlp_modeling_library_intro.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/models/blob/master/official/colab/nlp/nlp_modeling_library_intro.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/models/official/colab/nlp/nlp_modeling_library_intro.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Learning objectives In this Colab notebook, you will learn how to build transformer-based models for common NLP tasks including pretraining, span labelling and classification using the building blocks from NLP modeling library. Install and import Install the TensorFlow Model Garden pip package tf-models-nightly is the nightly Model Garden package created daily automatically. pip will install all models and dependencies automatically. End of explanation import numpy as np import tensorflow as tf from official.nlp import modeling from official.nlp.modeling import layers, losses, models, networks Explanation: Import Tensorflow and other libraries End of explanation # Build a small transformer network. vocab_size = 100 sequence_length = 16 network = modeling.networks.TransformerEncoder( vocab_size=vocab_size, num_layers=2, sequence_length=16) Explanation: BERT pretraining model BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding) introduced the method of pre-training language representations on a large text corpus and then using that model for downstream NLP tasks. In this section, we will learn how to build a model to pretrain BERT on the masked language modeling task and next sentence prediction task. For simplicity, we only show the minimum example and use dummy data. Build a BertPretrainer model wrapping TransformerEncoder The TransformerEncoder implements the Transformer-based encoder as described in BERT paper. It includes the embedding lookups and transformer layers, but not the masked language model or classification task networks. The BertPretrainer allows a user to pass in a transformer stack, and instantiates the masked language model and classification networks that are used to create the training objectives. 
End of explanation tf.keras.utils.plot_model(network, show_shapes=True, dpi=48) # Create a BERT pretrainer with the created network. num_token_predictions = 8 bert_pretrainer = modeling.models.BertPretrainer( network, num_classes=2, num_token_predictions=num_token_predictions, output='predictions') Explanation: Inspecting the encoder, we see it contains few embedding layers, stacked Transformer layers and are connected to three input layers: input_word_ids, input_type_ids and input_mask. End of explanation tf.keras.utils.plot_model(bert_pretrainer, show_shapes=True, dpi=48) # We can feed some dummy data to get masked language model and sentence output. batch_size = 2 word_id_data = np.random.randint(vocab_size, size=(batch_size, sequence_length)) mask_data = np.random.randint(2, size=(batch_size, sequence_length)) type_id_data = np.random.randint(2, size=(batch_size, sequence_length)) masked_lm_positions_data = np.random.randint(2, size=(batch_size, num_token_predictions)) outputs = bert_pretrainer( [word_id_data, mask_data, type_id_data, masked_lm_positions_data]) lm_output = outputs["masked_lm"] sentence_output = outputs["classification"] print(lm_output) print(sentence_output) Explanation: Inspecting the bert_pretrainer, we see it wraps the encoder with additional MaskedLM and Classification heads. End of explanation masked_lm_ids_data = np.random.randint(vocab_size, size=(batch_size, num_token_predictions)) masked_lm_weights_data = np.random.randint(2, size=(batch_size, num_token_predictions)) next_sentence_labels_data = np.random.randint(2, size=(batch_size)) mlm_loss = modeling.losses.weighted_sparse_categorical_crossentropy_loss( labels=masked_lm_ids_data, predictions=lm_output, weights=masked_lm_weights_data) sentence_loss = modeling.losses.weighted_sparse_categorical_crossentropy_loss( labels=next_sentence_labels_data, predictions=sentence_output) loss = mlm_loss + sentence_loss print(loss) Explanation: Compute loss Next, we can use lm_output and sentence_output to compute loss. End of explanation network = modeling.networks.TransformerEncoder( vocab_size=vocab_size, num_layers=2, sequence_length=sequence_length) # Create a BERT trainer with the created network. bert_span_labeler = modeling.models.BertSpanLabeler(network) Explanation: With the loss, you can optimize the model. After training, we can save the weights of TransformerEncoder for the downstream fine-tuning tasks. Please see run_pretraining.py for the full example. Span labeling model Span labeling is the task to assign labels to a span of the text, for example, label a span of text as the answer of a given question. In this section, we will learn how to build a span labeling model. Again, we use dummy data for simplicity. Build a BertSpanLabeler wrapping TransformerEncoder BertSpanLabeler implements a simple single-span start-end predictor (that is, a model that predicts two values: a start token index and an end token index), suitable for SQuAD-style tasks. Note that BertSpanLabeler wraps a TransformerEncoder, the weights of which can be restored from the above pretraining model. End of explanation tf.keras.utils.plot_model(bert_span_labeler, show_shapes=True, dpi=48) # Create a set of 2-dimensional data tensors to feed into the model. word_id_data = np.random.randint(vocab_size, size=(batch_size, sequence_length)) mask_data = np.random.randint(2, size=(batch_size, sequence_length)) type_id_data = np.random.randint(2, size=(batch_size, sequence_length)) # Feed the data to the model. 
start_logits, end_logits = bert_span_labeler([word_id_data, mask_data, type_id_data]) print(start_logits) print(end_logits) Explanation: Inspecting the bert_span_labeler, we see it wraps the encoder with an additional SpanLabeling network that outputs start_position and end_position. End of explanation start_positions = np.random.randint(sequence_length, size=(batch_size)) end_positions = np.random.randint(sequence_length, size=(batch_size)) start_loss = tf.keras.losses.sparse_categorical_crossentropy( start_positions, start_logits, from_logits=True) end_loss = tf.keras.losses.sparse_categorical_crossentropy( end_positions, end_logits, from_logits=True) total_loss = (tf.reduce_mean(start_loss) + tf.reduce_mean(end_loss)) / 2 print(total_loss) Explanation: Compute loss With start_logits and end_logits, we can compute loss: End of explanation network = modeling.networks.TransformerEncoder( vocab_size=vocab_size, num_layers=2, sequence_length=sequence_length) # Create a BERT trainer with the created network. num_classes = 2 bert_classifier = modeling.models.BertClassifier( network, num_classes=num_classes) Explanation: With the loss, you can optimize the model. Please see run_squad.py for the full example. Classification model In this last section, we show how to build a text classification model. Build a BertClassifier model wrapping TransformerEncoder BertClassifier implements a simple token classification model containing a single classification head using the TokenClassification network. End of explanation tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48) # Create a set of 2-dimensional data tensors to feed into the model. word_id_data = np.random.randint(vocab_size, size=(batch_size, sequence_length)) mask_data = np.random.randint(2, size=(batch_size, sequence_length)) type_id_data = np.random.randint(2, size=(batch_size, sequence_length)) # Feed the data to the model. logits = bert_classifier([word_id_data, mask_data, type_id_data]) print(logits) Explanation: Inspecting the bert_classifier, we see it wraps the encoder with an additional Classification head. End of explanation labels = np.random.randint(num_classes, size=(batch_size)) loss = modeling.losses.weighted_sparse_categorical_crossentropy_loss( labels=labels, predictions=tf.nn.log_softmax(logits, axis=-1)) print(loss) Explanation: Compute loss With logits, we can compute loss: End of explanation
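The cells above stop at computing a loss and simply note that "with the loss, you can optimize the model." A minimal, illustrative training step is sketched below; it reuses the bert_classifier, the dummy inputs, and the loss function already defined above, while the optimizer choice and learning rate are arbitrary placeholders rather than part of the official example.
# Sketch of a single gradient update for the classifier defined above
# (assumed setup for illustration, not taken from the official notebook).
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)  # placeholder learning rate

with tf.GradientTape() as tape:
    logits = bert_classifier([word_id_data, mask_data, type_id_data], training=True)
    loss = modeling.losses.weighted_sparse_categorical_crossentropy_loss(
        labels=labels, predictions=tf.nn.log_softmax(logits, axis=-1))

# Backpropagate through the encoder and classification head, then apply the update.
grads = tape.gradient(loss, bert_classifier.trainable_variables)
optimizer.apply_gradients(zip(grads, bert_classifier.trainable_variables))
print(loss)
In a real pretraining or fine-tuning run you would wrap this step in a loop over batches (or use the provided run_pretraining.py / run_squad.py scripts), but the shape of the update is the same.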
2,416
Given the following text description, write Python code to implement the functionality described below step by step Description: 本章将讨论继承和子类化,重点是说明对 Python 而言尤为重要的两个细节: 子类化内置类型的缺点 多重继承的方法和解析顺序 我们将通过两个重要的 Python 项目探讨多重继承,这两个项目是 GUI 工具包 Tkinter 和 Web 框架 Django 我们将首先分析子类化内置类型的问题,然后讨论多重继承,通过案例讨论类层次结构方面好的做法和不好的 子类化内置类型很麻烦 在 Python 2.2 之前内置类型(如 list 和 dict)不能子类化,之后可以了,但是有个重要事项:内置类型(使用 C 语言编写)不会调用用户定义的类覆盖的特殊方法 至于内置类型的子类覆盖的方法会不会隐式调用,CPython 没有官方规定,基本上,内置类型的方法不会调用子类覆盖的方法。例如,dict 的子类覆盖 __getitem__() 方法不会被内置类型的 get() 方法调用,下面说明了这个问题: 内置类型的 dict 的 __init__ 和 __update__ 方法会忽略我们覆盖的 __setitem__ 方法 Step1: 原生类型的这种行为违背了面向对象编程的一个基本原则:始终应该从实例(self)所属的类开始搜索方法,即使在超类实现的类中调用也是如此。在这种糟糕的局面中,__missing__ 却能按照预期工作(3.4 节),但这是特例 不止实例内部有这个问题(self.get() 不调用 self.__getitem__()),内置类型的方法调用其他类的方法,如果被覆盖了,也不会被调用。下面是个例子,改编自 PyPy 文档 dict.update 方法会忽略 AnswerDict.__getitem__ 方法 Step2: 直接子类化内置类型(如 dict,list,str)容易出错,因为内置类型的方法通常忽略用户覆盖的方法,不要子类化内置类型,用户自己定义的类应该继承 collections 模块中的类,例如 UserDict, UserList, UserString,这些类,这些类做了特殊设计,因此易于扩展 如果子类化的是 collections.UserDict,上面暴露的问题就迎刃而解了,如下: Step3: 综上,本节所述的问题只是针对与 C 语言实现的内置类型内部的方法委托上,而且只影响直接继承内置类型的用户自定义类。如果子类化使用 Python 编写的类,如 UserDict 和 MutableMapping,就不会受此影响 多重继承和方法解析顺序 任何实现多重继承的语言都要处理潜在的命名冲突,这种冲突由不相关的祖先类实现同命方法引起,这种冲突称为菱形问题。 Step4: B 和 C 都实现了 pong 方法,唯一区别就是打印不一样。在 D 上调用 d.pong 运行的是哪个 pong 方法呢? C++ 中,必须使用类名限定方法调用来避免歧义。Python 也可以,如下: Step5: Python 能区分 d.pong() 调用的是哪个方法,因为 Python 会按照特定的顺序遍历继承图,这个顺序叫顺序解析(Method Resolution Order,MRO)。类都有一个名为 __mro__ 的属性,它的值是一个元组,按照方法解析顺序列出各个超类。从当前类一直向上,直到 object 类。D 类的 __mro__ 属性如下: Step6: 若想把方法调用委托给超类,推荐的方法是使用内置的 super() 函数。在 Python 3 中,这种方式变得更容易了,如上面的 D 类中的 pingpong 方法所示。然而,有时可能幸亏绕过方法解析顺序,直接调用某个类的超方法 -- 这样有时更加方便。,例如,D.ping 方法可以这样写 Step7: 注意,直接在类上调用实例方法时,必须显式传入 self 参数,因为这样访问的是未绑定方法(unbound method) 然而,使用 super() 最安全,也不易过时,调用框架或不受自己控制的类层次结构中的方法时,尤其适合用 super()。使用 super() 调用方法时,会遵循方法解析顺序,如下所示: Step8: 下面看看 D 在实例上调用 pingpong 方法得到的结果,如下所示: Step9: 方法解析顺序不仅考虑继承图,还考虑子类声明中列出超类的顺序。也就是说,如果声明 D 类时把 D 声明为 class D(C, B),那么 D 类的 __mro__ 就会不一样,先搜索 C 类,再 搜索 B 类 分析类时,我们需要经常查看 __mro__ 属性,下面是一些常用类的方法搜索顺序 Step10: 结束方法解析之前,我们再看看 Tkinter 复杂的多重继承:
Python Code: class DoppelDict(dict): def __setitem__(self, key, value): super().__setitem__(key, [value] * 2) dd = DoppelDict(one=1) dd # 继承 dict 的 __init__ 方法忽略了我们覆盖的 __setitem__方法,'one' 值没有重复 dd['two'] = 2 # `[]` 运算符会调用我们覆盖的 __setitem__ 方法 dd dd.update(three=3) #继承自 dict 的 update 方法也不会调用我们覆盖的 __setitem__ 方法 dd Explanation: 本章将讨论继承和子类化,重点是说明对 Python 而言尤为重要的两个细节: 子类化内置类型的缺点 多重继承的方法和解析顺序 我们将通过两个重要的 Python 项目探讨多重继承,这两个项目是 GUI 工具包 Tkinter 和 Web 框架 Django 我们将首先分析子类化内置类型的问题,然后讨论多重继承,通过案例讨论类层次结构方面好的做法和不好的 子类化内置类型很麻烦 在 Python 2.2 之前内置类型(如 list 和 dict)不能子类化,之后可以了,但是有个重要事项:内置类型(使用 C 语言编写)不会调用用户定义的类覆盖的特殊方法 至于内置类型的子类覆盖的方法会不会隐式调用,CPython 没有官方规定,基本上,内置类型的方法不会调用子类覆盖的方法。例如,dict 的子类覆盖 __getitem__() 方法不会被内置类型的 get() 方法调用,下面说明了这个问题: 内置类型的 dict 的 __init__ 和 __update__ 方法会忽略我们覆盖的 __setitem__ 方法 End of explanation class AnswerDict(dict): def __getitem__(self, key): return 42 ad = AnswerDict(a='foo') ad['a'] # 返回 42,与预期相符 d = {} d.update(ad) # d 是 dict 的实例,使用 ad 中的值更新 d d['a'] #dict.update 方法忽略了 AnswerDict.__getitem__ 方法 Explanation: 原生类型的这种行为违背了面向对象编程的一个基本原则:始终应该从实例(self)所属的类开始搜索方法,即使在超类实现的类中调用也是如此。在这种糟糕的局面中,__missing__ 却能按照预期工作(3.4 节),但这是特例 不止实例内部有这个问题(self.get() 不调用 self.__getitem__()),内置类型的方法调用其他类的方法,如果被覆盖了,也不会被调用。下面是个例子,改编自 PyPy 文档 dict.update 方法会忽略 AnswerDict.__getitem__ 方法 End of explanation import collections class DoppelDict2(collections.UserDict): def __setitem__(self, key, value): super().__setitem__(key, [value] * 2) dd = DoppelDict2(one=1) dd dd['two'] = 2 dd dd.update(three=3) dd class AnswerDict2(collections.UserDict): def __getitem__(self, key): return 42 ad = AnswerDict2(a='foo') ad['a'] d = {} d.update(ad) d['a'] d ad # 这里是自己加的,感觉还是有点问题,但是调用时候结果符合预期 Explanation: 直接子类化内置类型(如 dict,list,str)容易出错,因为内置类型的方法通常忽略用户覆盖的方法,不要子类化内置类型,用户自己定义的类应该继承 collections 模块中的类,例如 UserDict, UserList, UserString,这些类,这些类做了特殊设计,因此易于扩展 如果子类化的是 collections.UserDict,上面暴露的问题就迎刃而解了,如下: End of explanation class A: def ping(self): print('ping', self) class B(A): def pong(self): print('pong', self) class C(A): def pong(self): print('PONG', self) class D(B, C): def ping(self): super().ping() print('post-ping:', self) def pingpong(self): self.ping() super().ping() self.pong() super().pong C.pong(self) Explanation: 综上,本节所述的问题只是针对与 C 语言实现的内置类型内部的方法委托上,而且只影响直接继承内置类型的用户自定义类。如果子类化使用 Python 编写的类,如 UserDict 和 MutableMapping,就不会受此影响 多重继承和方法解析顺序 任何实现多重继承的语言都要处理潜在的命名冲突,这种冲突由不相关的祖先类实现同命方法引起,这种冲突称为菱形问题。 End of explanation d = D() d.pong() # 直接调用 d.pong() 是调用的 B 类中的版本 C.pong(d) #超类中的方法都可以直接调用,此时要把实例作为显式参数传入 Explanation: B 和 C 都实现了 pong 方法,唯一区别就是打印不一样。在 D 上调用 d.pong 运行的是哪个 pong 方法呢? 
C++ 中,必须使用类名限定方法调用来避免歧义。Python 也可以,如下: End of explanation D.__mro__ Explanation: Python 能区分 d.pong() 调用的是哪个方法,因为 Python 会按照特定的顺序遍历继承图,这个顺序叫顺序解析(Method Resolution Order,MRO)。类都有一个名为 __mro__ 的属性,它的值是一个元组,按照方法解析顺序列出各个超类。从当前类一直向上,直到 object 类。D 类的 __mro__ 属性如下: End of explanation def ping(self): A.ping(self) # 而不是 super().ping() print('post-ping', self) Explanation: 若想把方法调用委托给超类,推荐的方法是使用内置的 super() 函数。在 Python 3 中,这种方式变得更容易了,如上面的 D 类中的 pingpong 方法所示。然而,有时可能幸亏绕过方法解析顺序,直接调用某个类的超方法 -- 这样有时更加方便。,例如,D.ping 方法可以这样写 End of explanation d = D() d.ping() # 输出了两行,第一行是 super() A 类输出,第二行是 D 类输出 Explanation: 注意,直接在类上调用实例方法时,必须显式传入 self 参数,因为这样访问的是未绑定方法(unbound method) 然而,使用 super() 最安全,也不易过时,调用框架或不受自己控制的类层次结构中的方法时,尤其适合用 super()。使用 super() 调用方法时,会遵循方法解析顺序,如下所示: End of explanation d.pingpong() #最后一个是直接找到 C 类实现 pong 方法,忽略 mro Explanation: 下面看看 D 在实例上调用 pingpong 方法得到的结果,如下所示: End of explanation bool.__mro__ def print_mro(cls): print(', '.join(c.__name__ for c in cls.__mro__)) print_mro(bool) import numbers print_mro(numbers.Integral) import io print_mro(io.BytesIO) print_mro(io.TextIOWrapper) Explanation: 方法解析顺序不仅考虑继承图,还考虑子类声明中列出超类的顺序。也就是说,如果声明 D 类时把 D 声明为 class D(C, B),那么 D 类的 __mro__ 就会不一样,先搜索 C 类,再 搜索 B 类 分析类时,我们需要经常查看 __mro__ 属性,下面是一些常用类的方法搜索顺序: End of explanation import tkinter print_mro(tkinter.Text) Explanation: 结束方法解析之前,我们再看看 Tkinter 复杂的多重继承: End of explanation
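The note above states that declaring the subclass as class D(C, B) instead of class D(B, C) changes the method resolution order, but does not show it. A minimal sketch, reusing the A, B, C classes and the print_mro helper defined earlier in this chapter:
# Swapping the order of the base classes flips the search order in the MRO.
class D2(C, B):
    pass

print_mro(D2)   # D2, C, B, A, object -- C is now searched before B
D2().pong()     # prints 'PONG ...', i.e. C's version is found first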
2,417
Given the following text description, write Python code to implement the functionality described below step by step Description: Kaggle Titanic Competition This Jupyter Notebook examines how to use Python's scikit-learn module to create and train Decision Tree machine learning models. It does so specifically within the context of the Kaggle Titanic Competition. kaggle.com is a website which hosts machine learning competitions where experts can win cash prizes and those new to machine learning can learn the ropes in a practical manner by exploring using various techniques to solve the same problem. Kaggle's Titanic Step1: Pandas pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. One things pandas excels at is reading data from and writing data to pretty much any common format including CSV files, SQL databases, JSON, Excel files, MATLAB, etc. The primary data structure provided by pandas is the DataFrame, which is sort of like a hybrid between an Excel spreadsheet and a SQL database table. It is VERY powerful in terms of the capabilities provided by this class. But there is a bit of a learning curve for newcomers. Get the data with Pandas Let's start with loading in the training and testing set into your Python environment. You will use the training set to build your model, and the test set to validate it. The data is stored on the web as csv files; their URLs are already available as character strings in the sample code. You can load this data with the read_csv() method from the Pandas library. Step2: Understanding Your Data Before starting with the actual analysis, it's important to understand the structure of your data. Both test and train are DataFrame objects, the way pandas represent datasets. You can easily explore a DataFrame using the .describe() method. .describe() summarizes the columns/features of the DataFrame, including the count of observations, mean, max and so on. Another useful trick is to look at the dimensions of the DataFrame. This is done by requesting the .shape attribute of your DataFrame object. (ex. your_data.shape) The training and test set are already available in the workspace, as train and test. Apply .describe() method and print the .shape attribute of the training set. Step3: Female vs Male How many people in your training set survived the disaster with the Titanic? To see this, you can use the value_counts() method in combination with standard bracket notation to select a single column of a DataFrame Step4: Does age play a role? Another variable that could influence survival is age; since it's probable that children were saved first. You can test this by creating a new column with a categorical variable Child. Child will take the value 1 in cases where age is less than 18, and a value of 0 in cases where age is greater than or equal to 18. To add this new variable you need to do two things (i) create a new column, and (ii) provide the values for each observation (i.e., row) based on the age of the passenger. Adding a new column with Pandas in Python is easy and can be done via the following syntax Step5: Intro to Decision Trees In the previous sections, you did all the slicing and dicing yourself to find subsets that have a higher chance of surviving. A decision tree automates this process for you and outputs a classification model or classifier. 
Conceptually, the decision tree algorithm starts with all the data at the root node and scans all the variables for the best one to split on. Once a variable is chosen, you do the split and go down one level (or one node) and repeat. The final nodes at the bottom of the decision tree are known as terminal nodes, and the majority vote of the observations in that node determine how to predict for new observations that end up in that terminal node. First, let's import the necessary libraries ... Step6: Cleaning and Formatting Your Data Before you can begin constructing your trees you need to get your hands dirty and clean the data so that you can use all the features available to you. In the first chapter, we saw that the Age variable had some missing value. Missingness is a whole subject with and in itself, but we will use a simple imputation technique where we substitute each missing value with the median of the all present values. train["Age"] = train["Age"].fillna(train["Age"].median()) Another problem is that the Sex and Embarked variables are categorical but in a non-numeric format. Thus, we will need to assign each class a unique integer so that Python can handle the information. Embarked also has some missing values which you should impute witht the most common class of embarkation, which is "S". Step7: Creating your first decision tree You will use the scikit-learn and numpy libraries to build your first decision tree. scikit-learn can be used to create tree objects from the DecisionTreeClassifier class. The methods that we will use take numpy arrays as inputs and therefore we will need to create those from the DataFrame that we already have. We will need the following to build a decision tree * target Step8: Interpreting your decision tree The feature_importances_ attribute make it simple to interpret the significance of the predictors you include. Based on your decision tree, what variable plays the most important role in determining whether or not a passenger survived? Based on this decision tree, the Fare variable plays the most important role, but it is nearly tied with Age in importance. Based on the score, we can see that our decision tree fit based on the training set predicts approximately 98% of the values in the training set correctly. Predict and submit to Kaggle To send a submission to Kaggle you need to predict the survival rates for the observations in the test set. Luckily, with our decision tree, we can make use of some simple functions to "generate" our answer without having to manually perform subsetting. First, you make use of the .predict() method. You provide it the model (my_tree_one), the values of features from the dataset for which predictions need to be made (test). To extract the features we will need to create a numpy array in the same way as we did when training the model. However, we need to take care of a small but important problem first. There is a missing value in the Fare feature that needs to be imputed. Next, you need to make sure your output is in line with the submission requirements of Kaggle Step9: How well does that first decision tree do on the test set? This basic solution achieves a score of 0.75120 on the test set. So it predicts approximately 75% of the values in the test set correctly. But from earlier we saw that this decision tree predicted approximately 98% of the values in the training set correctly? So why did we do so much better predicting values in the training set as compared to the test set? 
Overfitting Overfitting and how to control it When you created your first decision tree the default arguments for max_depth and min_samples_split were set to None. This means that no limit on the depth of your tree was set. That's a good thing right? Not so fast. We are likely overfitting. This means that while your model describes the training data extremely well, it doesn't generalize to new data, which is frankly the point of prediction. Just look at the Kaggle submission results for the simple model based on Gender and the complex decision tree. Which one does better? * A gender-only based model achieves a score of 0.765, which is better than our first decision tree, ouch! Maybe we can improve the overfit model by making a less complex model? In DecisionTreeRegressor, the depth of our model is defined by two parameters Step10: How well did our attempts at preventing overfitting help? This second solution achieves a higher score of 0.76555 on the test set, even though it had a lower score of 0.906 on the training set. So it predicts approximately 76.6% of the values in the test set correctly. This is a little bit better than before we attempted to prevent overfitting. But it is still only par with a pure gender-based model. So there is still a lot of room for improvement. How can we do better? One way is to spend a little bit of time on feature engineering ... Feature-engineering for our Titanic data set Data Science is an art that benefits from a human element. Enter feature engineering
Python Code: # import os and urllib import os # For Python 3.x the import should be urllib.request, but for Python 2.x it should just be urllib try: from urllib.request import urlretrieve except ImportError: from urllib import urlretrieve # Make sure data directory exists data_dir = 'data/kaggle/titanic' if not os.path.isdir(data_dir): os.makedirs(data_dir) # Filenames for the train and test datasets train_csv = os.path.join(data_dir, 'train.csv') test_csv = os.path.join(data_dir, 'test.csv') # If the data doesn't already exist locally, then download it using urlretrieve if not os.path.isfile(train_csv): train_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/train.csv" urlretrieve (train_url, train_csv) if not os.path.isfile(test_csv): test_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/test.csv" urlretrieve (test_url, test_csv) Explanation: Kaggle Titanic Competition This Jupyter Notebook examines how to use Python's scikit-learn module to create and train Decision Tree machine learning models. It does so specifically within the context of the Kaggle Titanic Competition. kaggle.com is a website which hosts machine learning competitions where experts can win cash prizes and those new to machine learning can learn the ropes in a practical manner by exploring using various techniques to solve the same problem. Kaggle's Titanic: Machine Learning from Disaster competition is their most popular competition for beginners and those new to data science and machine learning. It contains numerous tutorials and examples. The particular solution presented here is based heavily on the excellent free Kaggle Python tutorial from DataCamp. When the Titanic sank, 1502 of the 2224 passengers and crew were killed. One of the main reasons for this high level of casualties was the lack of lifeboats on this self-proclaimed "unsinkable" ship. Those that have seen the movie know that some individuals were more likely to survive the sinking (lucky Rose) than others (poor Jack). In this example, you will learn how to apply machine learning techniques to predict a passenger's chance of surviving using Python. Downloading the Datasets First we can use Python's builtin os package to determine if the training and test set CSV files already exist locally. If they do not exist, then we can use Python's urllib package to download them from DataCamp. End of explanation # Import the Pandas library import pandas as pd # Load the train and test datasets from the local CSV files to create two DataFrames train = pd.read_csv(train_csv) test = pd.read_csv(test_csv) # Inspect the first few rows of the training dataset train.head() # Inspect the first few rows of the test dataset test.head() Explanation: Pandas pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. One things pandas excels at is reading data from and writing data to pretty much any common format including CSV files, SQL databases, JSON, Excel files, MATLAB, etc. The primary data structure provided by pandas is the DataFrame, which is sort of like a hybrid between an Excel spreadsheet and a SQL database table. It is VERY powerful in terms of the capabilities provided by this class. But there is a bit of a learning curve for newcomers. Get the data with Pandas Let's start with loading in the training and testing set into your Python environment. You will use the training set to build your model, and the test set to validate it. 
The data is stored on the web as csv files; their URLs are already available as character strings in the sample code. You can load this data with the read_csv() method from the Pandas library. End of explanation # DataFrame.describe() generates various various summary statistics, automatically excluding NaN or missing values train.describe() # This means the training set has 891 observations with 12 variables each train.shape Explanation: Understanding Your Data Before starting with the actual analysis, it's important to understand the structure of your data. Both test and train are DataFrame objects, the way pandas represent datasets. You can easily explore a DataFrame using the .describe() method. .describe() summarizes the columns/features of the DataFrame, including the count of observations, mean, max and so on. Another useful trick is to look at the dimensions of the DataFrame. This is done by requesting the .shape attribute of your DataFrame object. (ex. your_data.shape) The training and test set are already available in the workspace, as train and test. Apply .describe() method and print the .shape attribute of the training set. End of explanation # Passengers that survived vs passengers that passed away print(train["Survived"].value_counts()) # As proportions print(train["Survived"].value_counts(normalize=True)) # Males that survived vs males that passed away print(train["Survived"][train["Sex"] == 'male'].value_counts()) # Females that survived vs Females that passed away print(train["Survived"][train["Sex"] == 'female'].value_counts()) # Normalized male survival print("\nMale survival rates:\n{}".format(train["Survived"][train["Sex"] == 'male'].value_counts(normalize=True))) # Normalized female survival print("\nFemale survival rates:\n{}".format(train["Survived"][train["Sex"] == 'female'].value_counts(normalize=True))) Explanation: Female vs Male How many people in your training set survived the disaster with the Titanic? To see this, you can use the value_counts() method in combination with standard bracket notation to select a single column of a DataFrame: # absolute numbers train["Survived"].value_counts() # percentages train["Survived"].value_counts(normalize = True) If you run these commands in the console, you'll see that 549 individuals died (62%) and 342 survived (38%). A simple way to predict heuristically could be: "majority wins". This would mean that you will predict every unseen observation to not survive. To dive in a little deeper we can perform similar counts and percentage calculations on subsets of the Survived column. For example, maybe gender could play a role as well? You can explore this using the .value_counts() method for a two-way comparison on the number of males and females that survived, with this syntax: train["Survived"][train["Sex"] == 'male'].value_counts() train["Survived"][train["Sex"] == 'female'].value_counts() To get proportions, you can again pass in the argument normalize = True to the .value_counts() method. The results below show that 81% of the men died, but only 26% of the women died. So gender matters tremendously. 
End of explanation # Added a new column called Child in the train data frame that initially takes the value 0 for all observations train["Child"] = float(0) # Assign 1 to passengers under 18 train.loc[train["Age"] < 18, "Child"] = 1 # Print normalized Survival Rates for passengers under 18 print("Survival rates for children:\n{}".format(train["Survived"][train["Child"] == 1].value_counts(normalize = True))) # Print normalized Survival Rates for passengers 18 or older print("\nSurvival rates for adults:\n{}".format(train["Survived"][train["Child"] == 0].value_counts(normalize = True))) Explanation: Does age play a role? Another variable that could influence survival is age; since it's probable that children were saved first. You can test this by creating a new column with a categorical variable Child. Child will take the value 1 in cases where age is less than 18, and a value of 0 in cases where age is greater than or equal to 18. To add this new variable you need to do two things (i) create a new column, and (ii) provide the values for each observation (i.e., row) based on the age of the passenger. Adding a new column with Pandas in Python is easy and can be done via the following syntax: your_data["new_var"] = 0 This code would create a new column in the train DataFrame titled new_var with 0 for each observation. To set the values based on the age of the passenger, you make use of a boolean test inside the square bracket operator. With the []-operator you create a subset of rows and assign a value to a certain variable of that subset of observations. For example, train["new_var"][train["Fare"] &gt; 10] = 1 would give a value of 1 to the variable new_var for the subset of passengers whose fares greater than 10. Remember that new_var has a value of 0 for all other values (including missing values). The data below shows that 54% of children survived, but only 38% of adults survived. So yes, age does play a role. End of explanation # Import the Numpy library import numpy as np # Import 'tree' from scikit-learn library from sklearn import tree Explanation: Intro to Decision Trees In the previous sections, you did all the slicing and dicing yourself to find subsets that have a higher chance of surviving. A decision tree automates this process for you and outputs a classification model or classifier. Conceptually, the decision tree algorithm starts with all the data at the root node and scans all the variables for the best one to split on. Once a variable is chosen, you do the split and go down one level (or one node) and repeat. The final nodes at the bottom of the decision tree are known as terminal nodes, and the majority vote of the observations in that node determine how to predict for new observations that end up in that terminal node. First, let's import the necessary libraries ... 
End of explanation # Fill missing Age values with median age from training set train["Age"] = train["Age"].fillna(train["Age"].median()) test["Age"] = test["Age"].fillna(train["Age"].median()) # Convert the male and female groups to integer form by replacing "male" with 0 and "female" with 1 train.loc[train["Sex"] == "male", "Sex"] = 0 train.loc[train["Sex"] == "female", "Sex"] = 1 test.loc[test["Sex"] == "male", "Sex"] = 0 test.loc[test["Sex"] == "female", "Sex"] = 1 # Print value counts for the Sex and Embarked columns print(train["Sex"].value_counts()) # Impute the Embarked variable train["Embarked"] = train["Embarked"].fillna("S") test["Embarked"] = test["Embarked"].fillna("S") # Convert the Embarked classes to integer form train.loc[train["Embarked"] == "S", "Embarked"] = 0 train.loc[train["Embarked"] == "C", "Embarked"] = 1 train.loc[train["Embarked"] == "Q", "Embarked"] = 2 test.loc[test["Embarked"] == "S", "Embarked"] = 0 test.loc[test["Embarked"] == "C", "Embarked"] = 1 test.loc[test["Embarked"] == "Q", "Embarked"] = 2 print(train["Embarked"].value_counts()) # Impute any missing values in Fare train["Fare"] = train["Fare"].fillna(train["Fare"].median()) test["Fare"] = test["Fare"].fillna(train["Fare"].median()) Explanation: Cleaning and Formatting Your Data Before you can begin constructing your trees you need to get your hands dirty and clean the data so that you can use all the features available to you. In the first chapter, we saw that the Age variable had some missing value. Missingness is a whole subject with and in itself, but we will use a simple imputation technique where we substitute each missing value with the median of the all present values. train["Age"] = train["Age"].fillna(train["Age"].median()) Another problem is that the Sex and Embarked variables are categorical but in a non-numeric format. Thus, we will need to assign each class a unique integer so that Python can handle the information. Embarked also has some missing values which you should impute witht the most common class of embarkation, which is "S". End of explanation # Create the target and features numpy arrays: target, features_one target = train["Survived"].values columns_one = ["Pclass", "Sex", "Age", "Fare"] features_one = train[columns_one].values features_one # Fit your first decision tree: my_tree_one my_tree_one = tree.DecisionTreeClassifier() my_tree_one = my_tree_one.fit(features_one, target) # Look at the importance and score of the included features print(my_tree_one.feature_importances_) print(my_tree_one.score(features_one, target)) Explanation: Creating your first decision tree You will use the scikit-learn and numpy libraries to build your first decision tree. scikit-learn can be used to create tree objects from the DecisionTreeClassifier class. The methods that we will use take numpy arrays as inputs and therefore we will need to create those from the DataFrame that we already have. We will need the following to build a decision tree * target: A one-dimensional numpy array containing the target/response from the train data. (Survival in your case) * features: A multidimensional numpy array containing the features/predictors from the train data. (ex. 
Sex, Age) Take a look at the sample code below to see what this would look like: target = train["Survived"].values features = train[["Sex", "Age"]].values my_tree = tree.DecisionTreeClassifier() my_tree = my_tree.fit(features, target) One way to quickly see the result of your decision tree is to see the importance of the features that are included. This is done by requesting the .feature_importances_ attribute of your tree object. Another quick metric is the mean accuracy that you can compute using the .score() function with features_one and target as arguments. Ok, time for you to build your first decision tree in Python! End of explanation # Make sure solution directory exists solution_dir = 'solutions/kaggle/titanic' if not os.path.isdir(solution_dir): os.makedirs(solution_dir) # Impute the missing value with the median test.Fare.fillna(test.Fare.median()) # Extract the features from the test set: Pclass, Sex, Age, and Fare. test_features = test[columns_one].values # Make your prediction using the test set my_prediction = my_tree_one.predict(test_features) # Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions PassengerId = np.array(test["PassengerId"]).astype(int) my_solution = pd.DataFrame(my_prediction, PassengerId, columns = ["Survived"]) # Check that your data frame has 418 entries print(my_solution.shape) # Write your solution to a csv file my_solution.to_csv(os.path.join(solution_dir, "decision_tree_one.csv"), index_label = ["PassengerId"]) Explanation: Interpreting your decision tree The feature_importances_ attribute make it simple to interpret the significance of the predictors you include. Based on your decision tree, what variable plays the most important role in determining whether or not a passenger survived? Based on this decision tree, the Fare variable plays the most important role, but it is nearly tied with Age in importance. Based on the score, we can see that our decision tree fit based on the training set predicts approximately 98% of the values in the training set correctly. Predict and submit to Kaggle To send a submission to Kaggle you need to predict the survival rates for the observations in the test set. Luckily, with our decision tree, we can make use of some simple functions to "generate" our answer without having to manually perform subsetting. First, you make use of the .predict() method. You provide it the model (my_tree_one), the values of features from the dataset for which predictions need to be made (test). To extract the features we will need to create a numpy array in the same way as we did when training the model. However, we need to take care of a small but important problem first. There is a missing value in the Fare feature that needs to be imputed. Next, you need to make sure your output is in line with the submission requirements of Kaggle: a csv file with exactly 418 entries and two columns: PassengerId and Survived. Then use the code provided to make a new data frame using DataFrame(), and create a csv file using to_csv() method from Pandas. 
End of explanation # Create a new array with the added features: features_two columns_two = ["Pclass","Age","Sex","Fare", "SibSp", "Parch", "Embarked"] features_two = train[columns_two].values #Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two max_depth = 10 min_samples_split = 5 my_tree_two = tree.DecisionTreeClassifier(max_depth = max_depth, min_samples_split = min_samples_split, random_state = 1) my_tree_two.fit(features_two, target) #Print the score of the new decison tree print(my_tree_two.score(features_two, target)) # Make your prediction using the test set test_features_two = test[columns_two].values predition_two = my_tree_two.predict(test_features_two) predition_two.shape # Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions solution_two = pd.DataFrame(predition_two, PassengerId, columns = ["Survived"]) # Check that your data frame has 418 entries print(solution_two.shape) # Write your solution to a csv file with the name my_solution.csv solution_two.to_csv(os.path.join(solution_dir,"decision_tree_two.csv"), index_label = ["PassengerId"]) Explanation: How well does that first decision tree do on the test set? This basic solution achieves a score of 0.75120 on the test set. So it predicts approximately 75% of the values in the test set correctly. But from earlier we saw that this decision tree predicted approximately 98% of the values in the training set correctly? So why did we do so much better predicting values in the training set as compared to the test set? Overfitting Overfitting and how to control it When you created your first decision tree the default arguments for max_depth and min_samples_split were set to None. This means that no limit on the depth of your tree was set. That's a good thing right? Not so fast. We are likely overfitting. This means that while your model describes the training data extremely well, it doesn't generalize to new data, which is frankly the point of prediction. Just look at the Kaggle submission results for the simple model based on Gender and the complex decision tree. Which one does better? * A gender-only based model achieves a score of 0.765, which is better than our first decision tree, ouch! Maybe we can improve the overfit model by making a less complex model? In DecisionTreeRegressor, the depth of our model is defined by two parameters: - the max_depth parameter determines when the splitting up of the decision tree stops. - the min_samples_split parameter monitors the amount of observations in a bucket. If a certain threshold is not reached (e.g minimum 10 passengers) no further splitting can be done. By limiting the complexity of your decision tree you will increase its generality and thus its usefulness for prediction! It may also help to add additional features. 
End of explanation # Create train_two with the newly defined feature train_two = train.copy() train_two["family_size"] = train_two["SibSp"] + train_two["Parch"] + 1 cols_three = ["Pclass", "Sex", "Age", "Fare", "SibSp", "Parch", "Embarked", "family_size", "Child"] # Create a new feature set and add the new feature features_three = train_two[cols_three].values #Control overfitting by setting "max_depth" to 9 and "min_samples_split" to 6 max_depth = 9 min_samples_split = 6 # Define the tree classifier, then fit the model my_tree_three = tree.DecisionTreeClassifier(max_depth = max_depth, min_samples_split = min_samples_split, random_state = 1) my_tree_three.fit(features_three, target) # Print the score of this decision tree print(my_tree_three.score(features_three, target)) # Make your prediction using the test set test_two = test.copy() test_two["Child"] = float(0) test_two.loc[test_two["Age"] < 18, "Child"] = 1 test_two["family_size"] = test_two["SibSp"] + test_two["Parch"] + 1 test_features_three = test_two[cols_three].values predition_three = my_tree_three.predict(test_features_three) predition_three.shape # Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions solution_three = pd.DataFrame(predition_three, PassengerId, columns = ["Survived"]) # Check that your data frame has 418 entries print(solution_three.shape) # Write your solution to a csv file with the name my_solution.csv solution_three.to_csv(os.path.join(solution_dir, "decision_tree_three.csv"), index_label = ["PassengerId"]) Explanation: How well did our attempts at preventing overfitting help? This second solution achieves a higher score of 0.76555 on the test set, even though it had a lower score of 0.906 on the training set. So it predicts approximately 76.6% of the values in the test set correctly. This is a little bit better than before we attempted to prevent overfitting. But it is still only par with a pure gender-based model. So there is still a lot of room for improvement. How can we do better? One way is to spend a little bit of time on feature engineering ... Feature-engineering for our Titanic data set Data Science is an art that benefits from a human element. Enter feature engineering: creatively engineering your own features by combining the different existing variables. While feature engineering is a discipline in itself, too broad to be covered here in detail, you will have a look at a simple example by creating your own new predictive attribute: family_size. A valid assumption is that larger families need more time to get together on a sinking ship, and hence have lower probability of surviving. Family size is determined by the variables SibSp and Parch, which indicate the number of family members a certain passenger is traveling with. So when doing feature engineering, you add a new variable family_size, which is the sum of SibSp and Parch plus one (the observation itself), to the test and train set. End of explanation
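The overfitting discussion above compares training accuracy against the Kaggle leaderboard, but you can also estimate out-of-sample accuracy locally before submitting. The snippet below is a small illustrative addition (not part of the original tutorial) that applies scikit-learn's cross_val_score to the feature matrices and trees already built above; the choice of 5 folds is arbitrary.
# Cross-validate the fitted trees locally instead of relying on Kaggle submissions
# to detect overfitting; cross_val_score refits a clone of each model per fold.
from sklearn.model_selection import cross_val_score

for name, (model, feats) in {"tree_one": (my_tree_one, features_one),
                             "tree_two": (my_tree_two, features_two),
                             "tree_three": (my_tree_three, features_three)}.items():
    scores = cross_val_score(model, feats, target, cv=5, scoring="accuracy")
    print("{}: mean CV accuracy {:.3f} (std {:.3f})".format(name, scores.mean(), scores.std()))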
2,418
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Introduction to Document Similarity with Elasticsearch In a text analytics context, document similarity relies on reimagining texts as points in space that can be close (similar) or different (far apart). However, it's not always a straightforward process to determine which document features should be encoded into a similarity measure (words/phrases? document length/structure?). Moreover, in practice it can be challenging to find a quick, efficient way of finding similar documents given some input document. In this post I'll explore some of the similarity tools implemented in Elasticsearch, which can enable us to augment search speed without having to sacrifice too much in the way of nuance. Document Distance and Similarity In this post I'll be focusing mostly on getting started with Elasticsearch and comparing the built-in similarity measures currently implemented in ES. However, if you're new to the concept of document similarity, here's a quick overview. Essentially, to represent the distance between documents, we need two things Step2: The categories in the hobbies corpus include Step3: Most of the articles, like the one above, are straightforward and are clearly correctly labeled, though there are some exceptions Step5: Elasticsearch and Python We can use the elasticsearch library in Python to hop out of the command line and interact with our Elasticsearch instance a bit more systematically. Here we'll create a class that goes through each of the hobbies categories in the corpus and indexes each to a new index appropriately named after it's category Step6: Let's poke around a bit to see what's in our instance. Note Step8: More Like This Elasticsearch exposes a convenient way of doing more advanced querying based on document similarity, which is called "More Like This" (MLT). Given an input document or set of documents, MLT wraps all of the following behavior Step9: Unlike Note that we can also add the unlike parameter to limit our search. Here I've indicated some of the less food-related stories that we found while doing exploratory analysis Step10: We can also expand our search to other indices, to see if there are documents related to our red sauce renaissance article that may appear in other hobbies corpus categories Step11: Advanced Similarity So far we've explored how to get started with Elasticsearch and to perform basic search and fuzzy search. These search tools all use the practical scoring function to compute the relevance score for search results. This scoring function is a variation of TF-IDF that also takes into account a few other things, including the length of the query and the field that's being searched. Now we will look at some of the more advanced tools implemented in Elasticsearch. Similarity algorithms can be set on a per-index or per-field basis. The available similarity computations include Step12: LMDirichlet Similarity ...whereas when we change "type" Step13: LMJelinekMercer Similarity ...and higher still with "type"
Python Code: import os from sklearn.datasets.base import Bunch from yellowbrick.download import download_all ## The path to the test data sets FIXTURES = os.path.join(os.getcwd(), "data") ## Dataset loading mechanisms datasets = { "hobbies": os.path.join(FIXTURES, "hobbies") } def load_data(name, download=True): Loads and wrangles the passed in text corpus by name. If download is specified, this method will download any missing files. # Get the path from the datasets path = datasets[name] # Check if the data exists, otherwise download or raise if not os.path.exists(path): if download: download_all() else: raise ValueError(( "'{}' dataset has not been downloaded, " "use the download.py module to fetch datasets" ).format(name)) # Read the directories in the directory as the categories. categories = [ cat for cat in os.listdir(path) if os.path.isdir(os.path.join(path, cat)) ] files = [] # holds the file names relative to the root data = [] # holds the text read from the file target = [] # holds the string of the category # Load the data from the files in the corpus for cat in categories: for name in os.listdir(os.path.join(path, cat)): files.append(os.path.join(path, cat, name)) target.append(cat) with open(os.path.join(path, cat, name), 'r') as f: data.append(f.read()) # Return the data bunch for use similar to the newsgroups example return Bunch( categories=categories, files=files, data=data, target=target, ) corpus = load_data('hobbies') hobby_types = {} for category in corpus.categories: texts = [] for idx in range(len(corpus.data)): if corpus['target'][idx] == category: texts.append(' '.join(corpus.data[idx].split())) hobby_types[category] = texts Explanation: Introduction to Document Similarity with Elasticsearch In a text analytics context, document similarity relies on reimagining texts as points in space that can be close (similar) or different (far apart). However, it's not always a straightforward process to determine which document features should be encoded into a similarity measure (words/phrases? document length/structure?). Moreover, in practice it can be challenging to find a quick, efficient way of finding similar documents given some input document. In this post I'll explore some of the similarity tools implemented in Elasticsearch, which can enable us to augment search speed without having to sacrifice too much in the way of nuance. Document Distance and Similarity In this post I'll be focusing mostly on getting started with Elasticsearch and comparing the built-in similarity measures currently implemented in ES. However, if you're new to the concept of document similarity, here's a quick overview. Essentially, to represent the distance between documents, we need two things: first, a way of encoding text as vectors, and second, a way of measuring distance. The bag-of-words (BOW) model enables us to represent document similarity with respect to vocabulary and is easy to do. Some common options for BOW encoding include one-hot encoding, frequency encoding, TF-IDF, and distributed representations. How should we measure distance between documents in space? Euclidean distance is often where we start, but is not always the best choice for text. Documents encoded as vectors are sparse; each vector could be as long as the number of unique words across the full corpus. That means that two documents of very different lengths (e.g. 
a single recipe and a cookbook), could be encoded with the same length vector, which might overemphasize the magnitude of the book's document vector at the expense of the recipe's document vector. Cosine distance helps to correct for variations in vector magnitudes resulting from uneven length documents, and enables us to measure the distance between the book and recipe. For more about vector encoding, you can check out Chapter 4 of our book, and for more about different distance metrics check out Chapter 6. In Chapter 10, we prototype a kitchen chatbot that, among other things, uses a nearest neigbor search to recommend recipes that are similar to the ingredients listed by the user. You can also poke around in the code for the book here. One of my observations during the prototyping phase for that chapter is how slow vanilla nearest neighbor search is. This led me to think about different ways to optimize the search, from using variations like ball tree, to using other Python libraries like Spotify's Annoy, and also to other kind of tools altogether that attempt to deliver a similar results as quickly as possible. Enter Elasticsearch... What is Elasticsearch Elasticsearch is a open source text search engine that leverages the information retrieval library Lucene together with a key-value store to expose deep and rapid search functionalities. It combines the features of a NoSQL document store database, an analytics engine, and RESTful API, and is particularly useful for indexing and searching text documents. The Basics To run Elasticsearch, you need to have the Java JVM (>= 8) installed. For more on this, read the installation instructions. In this section, we'll go over the basics of starting up a local elasticsearch instance, creating a new index, querying for all the existing indices, and deleting a given index. If you know how to do this, feel free to skip to the next section! Start Elasticsearch In the command line, start running an instance by navigating to where ever you have elasticsearch installed and typing: bash $ cd elasticsearch-&lt;version&gt; $ ./bin/elasticsearch Create an Index Now we will create an index. Think of an index as a database in PostgreSQL or MongoDB. An Elasticsearch cluster can contain multiple indices (e.g. relational or noSql databases), which in turn contain multiple types (similar to MongoDB collections or PostgreSQL tables). These types hold multiple documents (similar to MongoDB documents or PostgreSQL rows), and each document has properties (like MongoDB document key-values or PostgreSQL columns). bash curl -X PUT "localhost:9200/cooking " -H 'Content-Type: application/json' -d' { "settings" : { "index" : { "number_of_shards" : 1, "number_of_replicas" : 1 } } } ' And the response: bash {"acknowledged":true,"shards_acknowledged":true,"index":"cooking"} Get All Indices bash $ curl -X GET "localhost:9200/_cat/indices?v" Delete a Specific Index bash $ curl -X DELETE "localhost:9200/cooking" Document Relevance To explore how Elasticsearch approaches document relevance, let's begin by manually adding some documents to the cooking index we created above: bash $ curl -X PUT "localhost:9200/cooking/_doc/1?pretty" -H 'Content-Type: application/json' -d' { "description": "Smoothies are one of our favorite breakfast options year-round." } ' bash $ curl -X PUT "localhost:9200/cooking/_doc/2?pretty" -H 'Content-Type: application/json' -d' { "description": "A smoothie is a thick, cold beverage made from pureed raw fruit." 
} ' bash $ curl -X PUT "localhost:9200/cooking/_doc/3?pretty" -H 'Content-Type: application/json' -d' { "description": "Eggs Benedict is a traditional American breakfast or brunch dish." } ' At a very basic level, we can think of Elasticsearch's basic search functionality as a kind of similarity search, where we are essentially comparing the bag-of-words formed by the search query with that of each of our documents. This allows Elasticsearch not only to return results that explicitly mention the desired search terms, but also to surface a score that conveys some measure of relevance. We now have three breakfast-related documents in our cooking index; let's use the basic search function to find documents that explicitly mention "breakfast": bash $ curl -XGET 'localhost:9200/cooking/_search?q=description:breakfast&amp;pretty' And the response: bash { "took" : 1, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 2, "max_score" : 0.48233607, "hits" : [ { "_index" : "cooking", "_type" : "_doc", "_id" : "1", "_score" : 0.48233607, "_source" : { "description" : "Smoothies are one of our favorite breakfast options year-round." } }, { "_index" : "cooking", "_type" : "_doc", "_id" : "3", "_score" : 0.48233607, "_source" : { "description" : "Eggs Benedict is a traditional American breakfast or brunch dish." } } ] } } We get two results back, the first and third documents, which each have the same relevance score, because both include the single search term exactly once. However if we look for documents that mention "smoothie"... bash $ curl -XGET 'localhost:9200/cooking/_search?q=description:smoothie&amp;pretty' ...we only get the second document back, since the word "smoothie" is pluralized in the first document. On the other hand, our relevance score has jumped up to nearly 1, since it is the only result in the index that contains the search term. bash { "took" : 1, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 1, "max_score" : 0.9331132, "hits" : [ { "_index" : "cooking", "_type" : "_doc", "_id" : "2", "_score" : 0.9331132, "_source" : { "description" : "A smoothie is a thick, cold beverage made from pureed raw fruit." } } ] } } We can work around this by using a fuzzy search, which will return both the first and second documents: bash curl -XGET "localhost:9200/cooking/_search?pretty=true" -H 'Content-Type: application/json' -d' { "query": { "fuzzy" : { "description" : "smoothie" } } } ' With the following results: bash { "took" : 2, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 2, "max_score" : 0.9331132, "hits" : [ { "_index" : "cooking", "_type" : "_doc", "_id" : "2", "_score" : 0.9331132, "_source" : { "description" : "A smoothie is a thick, cold beverage made from pureed raw fruit." } }, { "_index" : "cooking", "_type" : "_doc", "_id" : "1", "_score" : 0.8807446, "_source" : { "description" : "Smoothies are one of our favorite breakfast options year-round." 
} } ] } } Searching a Real Corpus In order to really appreciate the differences and nuances of different similarity measures, we need more than three documents! For convenience, we'll use the sample text corpus that comes with the machine learning visualization library Yellowbrick. Yellowbrick hosts several datasets wrangled from the UCI Machine Learning Repository or built by District Data Labs to present the examples used throughout this documentation, one of which is a text corpus of news documents collected from different domain area RSS feeds. If you haven't downloaded the data, you can do so by running: $ python -m yellowbrick.download This should create a folder named data in your current working directory that contains all of the datasets. You can load a specified dataset as follows: End of explanation food_stories = [text for text in hobby_types['cooking']] print(food_stories[5]) Explanation: The categories in the hobbies corpus include: "cinema", "books", "cooking", "sports", and "gaming". We can explore them like this: End of explanation print(food_stories[23]) Explanation: Most of the articles, like the one above, are straightforward and are clearly correctly labeled, though there are some exceptions: End of explanation from elasticsearch.helpers import bulk from elasticsearch import Elasticsearch class ElasticIndexer(object): Create an ElasticSearch instance, and given a list of documents, index the documents into ElasticSearch. def __init__(self): self.elastic_search = Elasticsearch() def make_documents(self, textdict): for category, docs in textdict: for document in docs: yield { "_index": category, "_type": "_doc", "description": document } def index(self, textdict): bulk(self.elastic_search, self.make_documents(textdict)) indexer = ElasticIndexer() indexer.index(hobby_types.items()) Explanation: Elasticsearch and Python We can use the elasticsearch library in Python to hop out of the command line and interact with our Elasticsearch instance a bit more systematically.
Here we'll create a class that goes through each of the hobbies categories in the corpus and indexes each to a new index appropriately named after it's category: End of explanation from pprint import pprint query = {"match_all": {}} result = indexer.elastic_search.search(index="cooking", body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) query = {"fuzzy":{"description":"breakfast"}} result = indexer.elastic_search.search(index="cooking", body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) Explanation: Let's poke around a bit to see what's in our instance. Note: after running the above, you should see the indices appear when you type curl -X GET "localhost:9200/_cat/indices?v" into the command line. End of explanation red_sauce_renaissance = Ever since Rich Torrisi and Mario Carbone began rehabilitating chicken Parm and Neapolitan cookies around 2010, I’ve been waiting for other restaurants to carry the torch of Italian-American food boldly into the future. This is a major branch of American cuisine, too important for its fate to be left to the Olive Garden. For the most part, though, the torch has gone uncarried. I have been told that Palizzi Social Club, in Philadelphia, may qualify, but because Palizzi is a veritable club — members and guests only, no new applications accepted — I don’t expect to eat there before the nation’s tricentennial. Then in October, a place opened in the West Village that seemed to hit all the right tropes. It’s called Don Angie. Two chefs share the kitchen — Angela Rito and her husband, Scott Tacinelli — and they make versions of chicken scarpariello, antipasto salad and braciole. The dining room brings back the high-glitz Italian restaurant décor of the 1970s and ’80s, the period when Formica and oil paintings of the Bay of Naples went out and mirrors with gold pinstripes came in. The floor is a black-and-white checkerboard. The bar is made of polished marble the color of beef carpaccio. There is a house Chianti, and it comes in a straw-covered bottle. There is hope for a red-sauce renaissance, after all. query = { "more_like_this" : { "fields" : ["description"], "like" : red_sauce_renaissance, "min_term_freq" : 3, "max_query_terms" : 50, "min_doc_freq" : 4 } } result = indexer.elastic_search.search(index="cooking", body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) Explanation: More Like This Elasticsearch exposes a convenient way of doing more advanced querying based on document similarity, which is called "More Like This" (MLT). Given an input document or set of documents, MLT wraps all of the following behavior: extraction of a set of representative terms from the input selection of terms with the highest scores* formation of a disjunctive query using these terms query execution results returned *Note: this is done using term frequency-inverse document frequency (TF-IDF). Term frequency-inverse document frequency is an encoding method that normalizes term frequency in a document with respect to the rest of the corpus. As such, TF-IDF measures the relevance of a term to a document by the scaled frequency of the appearance of the term in the document, normalized by the inverse of the scaled frequency of the term in the entire corpus. This has the effect of selecting terms that make the input document or documents the most unique. 
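To make the TF-IDF intuition above a bit more concrete, here is a small illustrative aside (my addition, not part of the original walkthrough): it scores the three toy breakfast sentences from earlier with scikit-learn's TfidfVectorizer, so you can see how terms shared across documents are down-weighted relative to terms unique to one document. Depending on your scikit-learn version, the vocabulary accessor may be get_feature_names_out() instead of get_feature_names().

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

# The three toy documents we indexed into "cooking" earlier.
docs = [
    "Smoothies are one of our favorite breakfast options year-round.",
    "A smoothie is a thick, cold beverage made from pureed raw fruit.",
    "Eggs Benedict is a traditional American breakfast or brunch dish.",
]

# Fit TF-IDF over this tiny corpus and put the weights into a readable frame.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)
weights = pd.DataFrame(
    tfidf.toarray(),
    columns=vectorizer.get_feature_names(),  # get_feature_names_out() on newer sklearn
    index=["doc1", "doc2", "doc3"],
)

# "breakfast" appears in two documents, so it scores lower for doc3 than
# document-specific terms like "benedict" or "brunch" -- the same property
# MLT exploits when it selects representative query terms.
print(weights.T.sort_values("doc3", ascending=False).round(2).head(8))
```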
We can now build an MLT query in much the same way as we did the "fuzzy" search above. The Elasticsearch MLT query exposes many search parameters, but the only required one is "like", to which we can specify a string, a document, or multiple documents. Let's see if we can find any documents from our corpus that are similar to a New York Times review for the Italian restaurant Don Angie. End of explanation query = { "more_like_this" : { "fields" : ["description"], "like" : red_sauce_renaissance, "unlike" : [food_stories[23], food_stories[28]], "min_term_freq" : 2, "max_query_terms" : 50, "min_doc_freq" : 4 } } result = indexer.elastic_search.search(index="cooking", body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) Explanation: Unlike Note that we can also add the unlike parameter to limit our search. Here I've indicated some of the less food-related stories that we found while doing exploratory analysis: End of explanation query = { "more_like_this" : { "fields" : ["description"], "like" : red_sauce_renaissance, "unlike" : [food_stories[23], food_stories[28]], "min_term_freq" : 2, "max_query_terms" : 50, "min_doc_freq" : 4 } } result = indexer.elastic_search.search(index=["cooking","books","sports"], body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) Explanation: We can also expand our search to other indices, to see if there are documents related to our red sauce renaissance article that may appear in other hobbies corpus categories: End of explanation query = { "more_like_this" : { "fields" : ["description"], "like" : red_sauce_renaissance, "unlike" : [food_stories[23], food_stories[28]], "min_term_freq" : 2, "max_query_terms" : 50, "min_doc_freq" : 4 } } result = indexer.elastic_search.search(index=["cooking"], body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) Explanation: Advanced Similarity So far we've explored how to get started with Elasticsearch and to perform basic search and fuzzy search. These search tools all use the practical scoring function to compute the relevance score for search results. This scoring function is a variation of TF-IDF that also takes into account a few other things, including the length of the query and the field that's being searched. Now we will look at some of the more advanced tools implemented in Elasticsearch. Similarity algorithms can be set on a per-index or per-field basis. The available similarity computations include: BM25 similarity (BM25): currently the default setting in Elasticsearch, BM25 is a TF-IDF based similarity that has built-in tf normalization and supposedly works better for short fields (like names). Classic similarity (classic): TF-IDF Divergence from Randomness (DFR): Similarity that implements the divergence from randomness framework. Divergence from Independence (DFI): Similarity that implements the divergence from independence model. Information Base Model (IB): Algorithm that presumes the content in any symbolic 'distribution' sequence is primarily determined by the repetitive usage of its basic elements. LMDirichlet Model (LMDirichlet): Bayesian smoothing using Dirichlet priors. LM Jelinek Mercer (LMJelinekMercer): Attempts to capture important patterns in the text but leave out the noise. 
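The settings above apply index-wide, but as noted, a similarity can also be attached to a single field through the index mapping. The sketch below shows one way to do that from Python; the index name cooking_custom, the BM25 parameter values, and the 6.x-style _doc mapping type are illustrative assumptions on my part rather than something from the original notebook.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Declare a named similarity in the index settings, then point the
# "description" field at it in the mapping (per-field similarity).
es.indices.create(
    index="cooking_custom",  # hypothetical index name
    body={
        "settings": {
            "index": {
                "similarity": {
                    "my_bm25": {"type": "BM25", "k1": 1.2, "b": 0.75}  # illustrative parameters
                }
            }
        },
        "mappings": {
            "_doc": {
                "properties": {
                    "description": {"type": "text", "similarity": "my_bm25"}
                }
            }
        },
    },
)
```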
Changing the Default Similarity If you want to change the default similarity after creating an index, you must close your index, send the following request and open it again afterwards: ```bash curl -X POST "localhost:9200/cooking/_close" curl -X PUT "localhost:9200/cooking/_settings" -H 'Content-Type: application/json' -d' { "index": { "similarity": { "default": { "type": "classic" } } } } ' curl -X POST "localhost:9200/cooking/_open" ``` Classic TF-IDF Now that we've manually changed the similarity scoring metric (in this case to classic TF-IDF), we can see how this affects the results of our previous queries, where we note right away that the first result is the same, but its relevance score is lower. End of explanation query = { "more_like_this" : { "fields" : ["description"], "like" : red_sauce_renaissance, "unlike" : [food_stories[23], food_stories[28]], "min_term_freq" : 2, "max_query_terms" : 50, "min_doc_freq" : 4 } } result = indexer.elastic_search.search(index=["cooking"], body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) Explanation: LMDirichlet Similarity ...whereas when we change "type": "LMDirichlet", our same document appears with a much higher score: End of explanation query = { "more_like_this" : { "fields" : ["description"], "like" : red_sauce_renaissance, "unlike" : [food_stories[23], food_stories[28]], "min_term_freq" : 2, "max_query_terms" : 50, "min_doc_freq" : 4 } } result = indexer.elastic_search.search(index=["cooking"], body={"query":query}) print("%d hits \n" % result['hits']['total']) print("First result:\n") pprint(result['hits']['hits'][0]) Explanation: LMJelinekMercer Similarity ...and higher still with "type": "LMJelinekMercer"! End of explanation
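If you would rather not repeat the close / update / reopen cycle by hand for every similarity you want to compare, a small loop over the Python client does the same thing. This is a convenience sketch added here, not code from the original walkthrough; it assumes the indexer object and the more_like_this query dict from the cells above are still in scope, and that querying immediately after reopening the index is good enough for a quick comparison.

```python
for similarity_type in ["classic", "BM25", "LMDirichlet", "LMJelinekMercer"]:
    # The index has to be closed before its default similarity can be changed.
    indexer.elastic_search.indices.close(index="cooking")
    indexer.elastic_search.indices.put_settings(
        index="cooking",
        body={"index": {"similarity": {"default": {"type": similarity_type}}}},
    )
    indexer.elastic_search.indices.open(index="cooking")

    # Re-run the same MLT query (assumes `query` from the cells above) and
    # report the top relevance score under each similarity setting.
    result = indexer.elastic_search.search(index="cooking", body={"query": query})
    print(similarity_type, result["hits"]["max_score"])
```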
2,419
Given the following text description, write Python code to implement the functionality described below step by step Description: Understanding masking & padding Authors Step1: Introduction Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data. Padding is a special form of masking where the masked steps are at the start or at the beginning of a sequence. Padding comes from the need to encode sequence data into contiguous batches Step2: Masking Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking. There are three ways to introduce input masks in Keras models Step3: As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing. Mask propagation in the Functional API and Sequential API When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using them (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it. For instance, in the following Sequential model, the LSTM layer will automatically receive a mask, which means it will ignore padded values Step4: This is also the case for the following Functional API model Step5: Passing mask tensors directly to layers Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method. Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input, previous_mask) method which you can call. Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer, like this Step7: Supporting masking in your custom layers Sometimes, you may need to write layers that generate a mask (like Embedding), or layers that need to modify the current mask. For instance, any layer that produces a tensor with a different time dimension than its input, such as a Concatenate layer that concatenates on the time dimension, will need to modify the current mask so that downstream layers will be able to properly take masked timesteps into account. To do this, your layer should implement the layer.compute_mask() method, which produces a new mask given the input and the current mask. Here is an example of a TemporalSplit layer that needs to modify the current mask. Step8: Here is another example of a CustomEmbedding layer that is capable of generating a mask from input values Step9: Opting-in to mask propagation on compatible layers Most layers don't modify the time dimension, so don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating the mask is safe to do). If you have a custom layer that does not modify the time dimension, and if you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through. 
Here's an example of a layer that is whitelisted for mask propagation Step10: You can now use this custom layer in-between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer. Step11: Writing layers that need mask information Some layers are mask consumers
Python Code: import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Explanation: Understanding masking & padding Authors: Scott Zhu, Francois Chollet<br> Date created: 2019/07/16<br> Last modified: 2020/04/14<br> Description: Complete guide to using mask-aware sequence layers in Keras. Setup End of explanation raw_inputs = [ [711, 632, 71], [73, 8, 3215, 55, 927], [83, 91, 1, 645, 1253, 927], ] # By default, this will pad using 0s; it is configurable via the # "value" parameter. # Note that you could "pre" padding (at the beginning) or # "post" padding (at the end). # We recommend using "post" padding when working with RNN layers # (in order to be able to use the # CuDNN implementation of the layers). padded_inputs = tf.keras.preprocessing.sequence.pad_sequences( raw_inputs, padding="post" ) print(padded_inputs) Explanation: Introduction Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data. Padding is a special form of masking where the masked steps are at the start or at the beginning of a sequence. Padding comes from the need to encode sequence data into contiguous batches: in order to make all sequences in a batch fit a given standard length, it is necessary to pad or truncate some sequences. Let's take a close look. Padding sequence data When processing sequence data, it is very common for individual samples to have different lengths. Consider the following example (text tokenized as words): [ ["Hello", "world", "!"], ["How", "are", "you", "doing", "today"], ["The", "weather", "will", "be", "nice", "tomorrow"], ] After vocabulary lookup, the data might be vectorized as integers, e.g.: [ [71, 1331, 4231] [73, 8, 3215, 55, 927], [83, 91, 1, 645, 1253, 927], ] The data is a nested list where individual samples have length 3, 5, and 6, respectively. Since the input data for a deep learning model must be a single tensor (of shape e.g. (batch_size, 6, vocab_size) in this case), samples that are shorter than the longest item need to be padded with some placeholder value (alternatively, one might also truncate long samples before padding short samples). Keras provides a utility function to truncate and pad Python lists to a common length: tf.keras.preprocessing.sequence.pad_sequences. End of explanation embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True) masked_output = embedding(padded_inputs) print(masked_output._keras_mask) masking_layer = layers.Masking() # Simulate the embedding lookup by expanding the 2D input to 3D, # with embedding dimension of 10. unmasked_embedding = tf.cast( tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32 ) masked_embedding = masking_layer(unmasked_embedding) print(masked_embedding._keras_mask) Explanation: Masking Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking. There are three ways to introduce input masks in Keras models: Add a keras.layers.Masking layer. Configure a keras.layers.Embedding layer with mask_zero=True. Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers). Mask-generating layers: Embedding and Masking Under the hood, these layers will create a mask tensor (2D tensor with shape (batch, sequence_length)), and attach it to the tensor output returned by the Masking or Embedding layer. 
End of explanation model = keras.Sequential( [layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),] ) Explanation: As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing. Mask propagation in the Functional API and Sequential API When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using them (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it. For instance, in the following Sequential model, the LSTM layer will automatically receive a mask, which means it will ignore padded values: End of explanation inputs = keras.Input(shape=(None,), dtype="int32") x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs) outputs = layers.LSTM(32)(x) model = keras.Model(inputs, outputs) Explanation: This is also the case for the following Functional API model: End of explanation class MyLayer(layers.Layer): def __init__(self, **kwargs): super(MyLayer, self).__init__(**kwargs) self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True) self.lstm = layers.LSTM(32) def call(self, inputs): x = self.embedding(inputs) # Note that you could also prepare a `mask` tensor manually. # It only needs to be a boolean tensor # with the right shape, i.e. (batch_size, timesteps). mask = self.embedding.compute_mask(inputs) output = self.lstm(x, mask=mask) # The layer will ignore the masked values return output layer = MyLayer() x = np.random.random((32, 10)) * 100 x = x.astype("int32") layer(x) Explanation: Passing mask tensors directly to layers Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method. Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input, previous_mask) method which you can call. Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer, like this: End of explanation class TemporalSplit(keras.layers.Layer): Split the input tensor into 2 tensors along the time dimension. def call(self, inputs): # Expect the input to be 3D and mask to be 2D, split the input tensor into 2 # subtensors along the time axis (axis 1). return tf.split(inputs, 2, axis=1) def compute_mask(self, inputs, mask=None): # Also split the mask into 2 if it presents. if mask is None: return None return tf.split(mask, 2, axis=1) first_half, second_half = TemporalSplit()(masked_embedding) print(first_half._keras_mask) print(second_half._keras_mask) Explanation: Supporting masking in your custom layers Sometimes, you may need to write layers that generate a mask (like Embedding), or layers that need to modify the current mask. For instance, any layer that produces a tensor with a different time dimension than its input, such as a Concatenate layer that concatenates on the time dimension, will need to modify the current mask so that downstream layers will be able to properly take masked timesteps into account. To do this, your layer should implement the layer.compute_mask() method, which produces a new mask given the input and the current mask. Here is an example of a TemporalSplit layer that needs to modify the current mask. 
End of explanation class CustomEmbedding(keras.layers.Layer): def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs): super(CustomEmbedding, self).__init__(**kwargs) self.input_dim = input_dim self.output_dim = output_dim self.mask_zero = mask_zero def build(self, input_shape): self.embeddings = self.add_weight( shape=(self.input_dim, self.output_dim), initializer="random_normal", dtype="float32", ) def call(self, inputs): return tf.nn.embedding_lookup(self.embeddings, inputs) def compute_mask(self, inputs, mask=None): if not self.mask_zero: return None return tf.not_equal(inputs, 0) layer = CustomEmbedding(10, 32, mask_zero=True) x = np.random.random((3, 10)) * 9 x = x.astype("int32") y = layer(x) mask = layer.compute_mask(x) print(mask) Explanation: Here is another example of a CustomEmbedding layer that is capable of generating a mask from input values: End of explanation class MyActivation(keras.layers.Layer): def __init__(self, **kwargs): super(MyActivation, self).__init__(**kwargs) # Signal that the layer is safe for mask propagation self.supports_masking = True def call(self, inputs): return tf.nn.relu(inputs) Explanation: Opting-in to mask propagation on compatible layers Most layers don't modify the time dimension, so don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating the mask is safe to do). If you have a custom layer that does not modify the time dimension, and if you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through. Here's an example of a layer that is whitelisted for mask propagation: End of explanation inputs = keras.Input(shape=(None,), dtype="int32") x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs) x = MyActivation()(x) # Will pass the mask along print("Mask found:", x._keras_mask) outputs = layers.LSTM(32)(x) # Will receive the mask model = keras.Model(inputs, outputs) Explanation: You can now use this custom layer in-between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer. End of explanation class TemporalSoftmax(keras.layers.Layer): def call(self, inputs, mask=None): broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1) inputs_exp = tf.exp(inputs) * broadcast_float_mask inputs_sum = tf.reduce_sum(inputs * broadcast_float_mask, axis=1, keepdims=True) return inputs_exp / inputs_sum inputs = keras.Input(shape=(None,), dtype="int32") x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs) x = layers.Dense(1)(x) outputs = TemporalSoftmax()(x) model = keras.Model(inputs, outputs) y = model(np.random.randint(0, 10, size=(32, 100)), np.random.random((32, 100, 1))) Explanation: Writing layers that need mask information Some layers are mask consumers: they accept a mask argument in call and use it to determine whether to skip certain time steps. To write such a layer, you can simply add a mask=None argument in your call signature. The mask associated with the inputs will be passed to your layer whenever it is available. 
Here's a simple example below: a layer that computes a softmax over the time dimension (axis 1) of an input sequence, while discarding masked timesteps. End of explanation
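As a final sanity check on the masking behaviour described in this guide — an extra illustration on my part, not a cell from the original — you can confirm that an Embedding(mask_zero=True) + LSTM stack really skips trailing zeros by comparing its output for a post-padded sequence with its output for the same sequence without padding; because the padded steps are masked, the two final LSTM states should match.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

check_model = keras.Sequential(
    [layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32)]
)

padded = tf.constant([[711, 632, 71, 0, 0, 0]])  # "post"-padded sequence
unpadded = tf.constant([[711, 632, 71]])         # the same sequence, no padding

# The masked timesteps are skipped by the LSTM, so both inputs should yield
# (numerically) the same final state.
print(np.allclose(check_model(padded).numpy(), check_model(unpadded).numpy()))
```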
2,420
Given the following text description, write Python code to implement the functionality described below step by step Description: Modifying the Variational Strategy/Variational Distribution The predictive distribution for approximate GPs is given by $$ p( \mathbf f(\mathbf x^) ) = \int_{\mathbf u} p( f(\mathbf x^) \mid \mathbf u) \ Step1: Some quick training/testing code This will allow us to train/test different model classes. Step2: The Standard Approach As a default, we'll use the default VariationalStrategy class with a CholeskyVariationalDistribution. The CholeskyVariationalDistribution class allows $\mathbf S$ to be on any positive semidefinite matrix. This is the most general/expressive option for approximate GPs. Step3: Reducing parameters MeanFieldVariationalDistribution Step4: DeltaVariationalDistribution Step5: Reducing computation (through decoupled inducing points) One way to reduce the computational complexity is to use separate inducing points for the mean and covariance computations. The Orthogonally Decoupled Variational Gaussian Processes method of Salimbeni et al. (2018) uses more inducing points for the (computationally easy) mean computations and fewer inducing points for the (computationally intensive) covariance computations. In GPyTorch we implement this method in a modular way. The OrthogonallyDecoupledVariationalStrategy defines the variational strategy for the mean inducing points. It wraps an existing variational strategy/distribution that defines the covariance inducing points Step6: Putting it all together we have
Python Code: import urllib.request import os from scipy.io import loadmat from math import floor # this is for running the notebook in our testing framework smoke_test = ('CI' in os.environ) if not smoke_test and not os.path.isfile('../elevators.mat'): print('Downloading \'elevators\' UCI dataset...') urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat') if smoke_test: # this is for running the notebook in our testing framework X, y = torch.randn(1000, 3), torch.randn(1000) else: data = torch.Tensor(loadmat('../elevators.mat')['data']) X = data[:, :-1] X = X - X.min(0)[0] X = 2 * (X / X.max(0)[0]) - 1 y = data[:, -1] train_n = int(floor(0.8 * len(X))) train_x = X[:train_n, :].contiguous() train_y = y[:train_n].contiguous() test_x = X[train_n:, :].contiguous() test_y = y[train_n:].contiguous() if torch.cuda.is_available(): train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda() from torch.utils.data import TensorDataset, DataLoader train_dataset = TensorDataset(train_x, train_y) train_loader = DataLoader(train_dataset, batch_size=500, shuffle=True) test_dataset = TensorDataset(test_x, test_y) test_loader = DataLoader(test_dataset, batch_size=500, shuffle=False) Explanation: Modifying the Variational Strategy/Variational Distribution The predictive distribution for approximate GPs is given by $$ p( \mathbf f(\mathbf x^) ) = \int_{\mathbf u} p( f(\mathbf x^) \mid \mathbf u) \: q(\mathbf u) \: d\mathbf u, \quad q(\mathbf u) = \mathcal N( \mathbf m, \mathbf S). $$ $\mathbf u$ represents the function values at the $m$ inducing points. Here, $\mathbf m \in \mathbb R^m$ and $\mathbf S \in \mathbb R^{m \times m}$ are learnable parameters. If $m$ (the number of inducing points) is quite large, the number of learnable parameters in $\mathbf S$ can be quite unwieldy. Furthermore, a large $m$ might make some of the computations rather slow. Here we show a few ways to use different variational distributions and variational strategies to accomplish this. Experimental setup We're going to train an approximate GP on a medium-sized regression dataset, taken from the UCI repository. 
End of explanation # this is for running the notebook in our testing framework num_epochs = 1 if smoke_test else 10 # Our testing script takes in a GPyTorch MLL (objective function) class # and then trains/tests an approximate GP with it on the supplied dataset def train_and_test_approximate_gp(model_cls): inducing_points = torch.randn(128, train_x.size(-1), dtype=train_x.dtype, device=train_x.device) model = model_cls(inducing_points) likelihood = gpytorch.likelihoods.GaussianLikelihood() mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.numel()) optimizer = torch.optim.Adam(list(model.parameters()) + list(likelihood.parameters()), lr=0.1) if torch.cuda.is_available(): model = model.cuda() likelihood = likelihood.cuda() # Training model.train() likelihood.train() epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc=f"Training {model_cls.__name__}") for i in epochs_iter: # Within each iteration, we will go over each minibatch of data for x_batch, y_batch in train_loader: optimizer.zero_grad() output = model(x_batch) loss = -mll(output, y_batch) epochs_iter.set_postfix(loss=loss.item()) loss.backward() optimizer.step() # Testing model.eval() likelihood.eval() means = torch.tensor([0.]) with torch.no_grad(): for x_batch, y_batch in test_loader: preds = model(x_batch) means = torch.cat([means, preds.mean.cpu()]) means = means[1:] error = torch.mean(torch.abs(means - test_y.cpu())) print(f"Test {model_cls.__name__} MAE: {error.item()}") Explanation: Some quick training/testing code This will allow us to train/test different model classes. End of explanation class StandardApproximateGP(gpytorch.models.ApproximateGP): def __init__(self, inducing_points): variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(-2)) variational_strategy = gpytorch.variational.VariationalStrategy( self, inducing_points, variational_distribution, learn_inducing_locations=True ) super().__init__(variational_strategy) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) train_and_test_approximate_gp(StandardApproximateGP) Explanation: The Standard Approach As a default, we'll use the default VariationalStrategy class with a CholeskyVariationalDistribution. The CholeskyVariationalDistribution class allows $\mathbf S$ to be on any positive semidefinite matrix. This is the most general/expressive option for approximate GPs. 
End of explanation class MeanFieldApproximateGP(gpytorch.models.ApproximateGP): def __init__(self, inducing_points): variational_distribution = gpytorch.variational.MeanFieldVariationalDistribution(inducing_points.size(-2)) variational_strategy = gpytorch.variational.VariationalStrategy( self, inducing_points, variational_distribution, learn_inducing_locations=True ) super().__init__(variational_strategy) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) train_and_test_approximate_gp(MeanFieldApproximateGP) Explanation: Reducing parameters MeanFieldVariationalDistribution: a diagonal $\mathbf S$ matrix One way to reduce the number of parameters is to restrict that $\mathbf S$ is only diagonal. This is less expressive, but the number of parameters is now linear in $m$ instead of quadratic. All we have to do is take the previous example, and change CholeskyVariationalDistribution (full $\mathbf S$ matrix) to MeanFieldVariationalDistribution (diagonal $\mathbf S$ matrix). End of explanation class MAPApproximateGP(gpytorch.models.ApproximateGP): def __init__(self, inducing_points): variational_distribution = gpytorch.variational.DeltaVariationalDistribution(inducing_points.size(-2)) variational_strategy = gpytorch.variational.VariationalStrategy( self, inducing_points, variational_distribution, learn_inducing_locations=True ) super().__init__(variational_strategy) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) train_and_test_approximate_gp(MAPApproximateGP) Explanation: DeltaVariationalDistribution: no $\mathbf S$ matrix A more extreme method of reducing parameters is to get rid of $\mathbf S$ entirely. This corresponds to learning a delta distribution ($\mathbf u = \mathbf m$) rather than a multivariate Normal distribution for $\mathbf u$. In other words, this corresponds to performing MAP estimation rather than variational inference. In GPyTorch, getting rid of $\mathbf S$ can be accomplished by using a DeltaVariationalDistribution. End of explanation def make_orthogonal_vs(model, train_x): mean_inducing_points = torch.randn(1000, train_x.size(-1), dtype=train_x.dtype, device=train_x.device) covar_inducing_points = torch.randn(100, train_x.size(-1), dtype=train_x.dtype, device=train_x.device) covar_variational_strategy = gpytorch.variational.VariationalStrategy( model, covar_inducing_points, gpytorch.variational.CholeskyVariationalDistribution(covar_inducing_points.size(-2)), learn_inducing_locations=True ) variational_strategy = gpytorch.variational.OrthogonallyDecoupledVariationalStrategy( covar_variational_strategy, mean_inducing_points, gpytorch.variational.DeltaVariationalDistribution(mean_inducing_points.size(-2)), ) return variational_strategy Explanation: Reducing computation (through decoupled inducing points) One way to reduce the computational complexity is to use separate inducing points for the mean and covariance computations. The Orthogonally Decoupled Variational Gaussian Processes method of Salimbeni et al. 
(2018) uses more inducing points for the (computationally easy) mean computations and fewer inducing points for the (computationally intensive) covariance computations. In GPyTorch we implement this method in a modular way. The OrthogonallyDecoupledVariationalStrategy defines the variational strategy for the mean inducing points. It wraps an existing variational strategy/distribution that defines the covariance inducing points: End of explanation class OrthDecoupledApproximateGP(gpytorch.models.ApproximateGP): def __init__(self, inducing_points): variational_distribution = gpytorch.variational.DeltaVariationalDistribution(inducing_points.size(-2)) variational_strategy = make_orthogonal_vs(self, train_x) super().__init__(variational_strategy) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) train_and_test_approximate_gp(OrthDecoupledApproximateGP) Explanation: Putting it all together we have: End of explanation
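To put rough numbers on the parameter trade-off discussed in this notebook, here is a small supplementary check (not part of the original tutorial): it instantiates each variational distribution at the same number of inducing points and counts its learnable parameters. The exact counts depend on how GPyTorch stores the Cholesky factor internally (a full m x m tensor in current releases, exposed through the module's .parameters()), but the scaling — quadratic in m for Cholesky, linear for mean-field, mean-only for delta — is the point.

```python
import gpytorch

m = 128  # number of inducing points, matching the experiments above

for dist_cls in [
    gpytorch.variational.CholeskyVariationalDistribution,   # mean + full Cholesky factor: O(m^2)
    gpytorch.variational.MeanFieldVariationalDistribution,  # mean + diagonal scale: O(m)
    gpytorch.variational.DeltaVariationalDistribution,      # mean only: O(m)
]:
    dist = dist_cls(m)
    n_params = sum(p.numel() for p in dist.parameters())
    print(f"{dist_cls.__name__}: {n_params} learnable variational parameters")
```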
2,421
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The AdaNet Authors. Step1: Customizing AdaNet With TensorFlow Hub Modules <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Getting started Data We will try to solve the Large Movie Review Dataset v1.0 task (Mass et al., 2011). The dataset consists of IMDB movie reviews labeled by positivity from 1 to 10. The task is to label the reviews as negative or positive. Step3: Supply the data in TensorFlow Our first task is to supply the data in TensorFlow. We define three kinds of input_fn that will be used in training later using pandas_input_fn. Step4: Launch TensorBoard Let's run TensorBoard to visualize model training over time. We'll use ngrok to tunnel traffic to localhost. The instructions for setting up Tensorboard were obtained from https Step5: Establish baselines The next task should be to get somes baselines to see how our model performs on this dataset. Let's define some information to share with all our tf.estimator.Estimators Step6: Let's start simple, and train a linear model Step7: The linear model with default parameters achieves about 78% accuracy. Let's see if we can do better with the simple_dnn AdaNet Step14: The simple_dnn AdaNet model with default parameters achieves about 80% accuracy. This improvement can be attributed to simple_dnn searching over fully-connected neural networks which have more expressive power than the linear model due to their non-linear activations. The above simple_dnn generator only generates subnetworks that take embedding results from one module. We can add diversity to the search space by building subnetworks that take different embeddings, hence might improve the performance. To do that, we need to define a custom adanet.subnetwork.Builder and adanet.subnetwork.Generator. Define a AdaNet model with TensorFlow Hub text embedding modules Creating a new search space for AdaNet to explore is straightforward. There are two abstract classes you need to extend Step18: Next, we extend a adanet.subnetwork.Generator, which defines the search space of candidate SimpleNetworkBuilder to consider including the final network. It can create one or more at each iteration with different parameters, and the AdaNet algorithm will select the candidate that best improves the overall neural network's adanet_loss on the training set. The one below loops through the text embedding modules listed in MODULES and gives it a different random seed at each iteration. These modules are selected from TensorFlow Hub text modules Step20: With these defined, we pass them into a new adanet.Estimator Step21: Our SimpleNetworkGenerator code achieves about <b>87% accuracy </b>, which is almost <b>7%</b> higher than with using just one network directly. You can see how the performance improves step by step
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The AdaNet Authors. End of explanation #@test {"skip": true} # If you are running this in Colab, first install the adanet package: !pip install adanet from __future__ import absolute_import from __future__ import division from __future__ import print_function import functools import os import re import shutil import numpy as np import pandas as pd import tensorflow.compat.v1 as tf import tensorflow_hub as hub import adanet from adanet.examples import simple_dnn # The random seed to use. RANDOM_SEED = 42 LOG_DIR = '/tmp/models' Explanation: Customizing AdaNet With TensorFlow Hub Modules <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/customizing_adanet_with_tfhub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/adanet/blob/master/adanet/examples/tutorials/customizing_adanet_with_tfhub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> From the customizing AdaNet tutorial, you know how to define your own neural architecture search space for AdaNet algorithm to explore. One can simplify this process further by using TensorFlow Hub modules as the basic building blocks for AdaNet. These modules have already been pre-trained on large corpuses of data which enables you to leverage the power of transfer learning. In this tutorial, we will create a custom search space for sentiment analysis dataset using TensorFlow Hub text embedding modules. End of explanation def load_directory_data(directory): data = {} data["sentence"] = [] data["sentiment"] = [] for file_path in os.listdir(directory): with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f: data["sentence"].append(f.read()) data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1)) return pd.DataFrame.from_dict(data) def load_dataset(directory): pos_df = load_directory_data(os.path.join(directory, "pos")) neg_df = load_directory_data(os.path.join(directory, "neg")) pos_df["polarity"] = 1 neg_df["polarity"] = 0 return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True) def download_and_load_datasets(force_download=False): dataset = tf.keras.utils.get_file( fname="aclImdb.tar.gz", origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz", extract=True ) train_df = load_dataset(os.path.join(os.path.dirname(dataset), "aclImdb", "train")) test_df = load_dataset(os.path.join(os.path.dirname(dataset), "aclImdb", "test")) return train_df, test_df tf.logging.set_verbosity(tf.logging.INFO) train_df, test_df = download_and_load_datasets() train_df.head() Explanation: Getting started Data We will try to solve the Large Movie Review Dataset v1.0 task (Mass et al., 2011). 
The dataset consists of IMDB movie reviews labeled by positivity from 1 to 10. The task is to label the reviews as negative or positive. End of explanation FEATURES_KEY = "sentence" train_input_fn = tf.estimator.inputs.pandas_input_fn( train_df, train_df["polarity"], num_epochs=None, shuffle=True) predict_train_input_fn = tf.estimator.inputs.pandas_input_fn( train_df, train_df["polarity"], shuffle=False) predict_test_input_fn = tf.estimator.inputs.pandas_input_fn( test_df, test_df["polarity"], shuffle=False) Explanation: Supply the data in TensorFlow Our first task is to supply the data in TensorFlow. We define three kinds of input_fn that will be used in training later using pandas_input_fn. End of explanation #@test {"skip": true} get_ipython().system_raw( 'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &' .format(LOG_DIR) ) # Install ngrok binary. ! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip ! unzip ngrok-stable-linux-amd64.zip # Delete old logs dir. shutil.rmtree(LOG_DIR, ignore_errors=True) print("Follow this link to open TensorBoard in a new tab.") get_ipython().system_raw('./ngrok http 6006 &') ! curl -s http://localhost:4040/api/tunnels | python3 -c \ "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])" Explanation: Launch TensorBoard Let's run TensorBoard to visualize model training over time. We'll use ngrok to tunnel traffic to localhost. The instructions for setting up Tensorboard were obtained from https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/ Run the next cells and follow the link to see the TensorBoard in a new tab. End of explanation NUM_CLASSES = 2 loss_reduction = tf.losses.Reduction.SUM_OVER_BATCH_SIZE head = tf.contrib.estimator.binary_classification_head( loss_reduction=loss_reduction) hub_columns=hub.text_embedding_column( key=FEATURES_KEY, module_spec="https://tfhub.dev/google/nnlm-en-dim128/1") def make_config(experiment_name): # Estimator configuration. return tf.estimator.RunConfig( save_checkpoints_steps=1000, save_summary_steps=1000, tf_random_seed=RANDOM_SEED, model_dir=os.path.join(LOG_DIR, experiment_name)) Explanation: Establish baselines The next task should be to get somes baselines to see how our model performs on this dataset. Let's define some information to share with all our tf.estimator.Estimators: End of explanation #@test {"skip": true} #@title Parameters LEARNING_RATE = 0.001 #@param {type:"number"} TRAIN_STEPS = 5000 #@param {type:"integer"} estimator = tf.estimator.LinearClassifier( feature_columns=[hub_columns], n_classes=NUM_CLASSES, optimizer=tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE), loss_reduction=loss_reduction, config=make_config("linear")) results, _ = tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec( input_fn=train_input_fn, max_steps=TRAIN_STEPS), eval_spec=tf.estimator.EvalSpec( input_fn=predict_test_input_fn, steps=None)) print("Accuracy: ", results["accuracy"]) print("Loss: ", results["average_loss"]) Explanation: Let's start simple, and train a linear model: End of explanation #@test {"skip": true} #@title Parameters LEARNING_RATE = 0.003 #@param {type:"number"} TRAIN_STEPS = 5000 #@param {type:"integer"} ADANET_ITERATIONS = 2 #@param {type:"integer"} estimator = adanet.Estimator( head=head, # Define the generator, which defines our search space of subnetworks # to train as candidates to add to the final AdaNet model. 
subnetwork_generator=simple_dnn.Generator( feature_columns=[hub_columns], optimizer=tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE), seed=RANDOM_SEED), # The number of train steps per iteration. max_iteration_steps=TRAIN_STEPS // ADANET_ITERATIONS, # The evaluator will evaluate the model on the full training set to # compute the overall AdaNet loss (train loss + complexity # regularization) to select the best candidate to include in the # final AdaNet model. evaluator=adanet.Evaluator( input_fn=predict_train_input_fn, steps=1000), # Configuration for Estimators. config=make_config("simple_dnn")) results, _ = tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec( input_fn=train_input_fn, max_steps=TRAIN_STEPS), eval_spec=tf.estimator.EvalSpec( input_fn=predict_test_input_fn, steps=None)) print("Accuracy:", results["accuracy"]) print("Loss:", results["average_loss"]) Explanation: The linear model with default parameters achieves about 78% accuracy. Let's see if we can do better with the simple_dnn AdaNet: End of explanation class SimpleNetworkBuilder(adanet.subnetwork.Builder): Builds a simple subnetwork with text embedding module. def __init__(self, learning_rate, max_iteration_steps, seed, module_name, module): Initializes a `SimpleNetworkBuilder`. Args: learning_rate: The float learning rate to use. max_iteration_steps: The number of steps per iteration. seed: The random seed. Returns: An instance of `SimpleNetworkBuilder`. self._learning_rate = learning_rate self._max_iteration_steps = max_iteration_steps self._seed = seed self._module_name = module_name self._module = module def build_subnetwork(self, features, logits_dimension, training, iteration_step, summary, previous_ensemble=None): See `adanet.subnetwork.Builder`. sentence = features["sentence"] # Load module and apply text embedding, setting trainable=True. m = hub.Module(self._module, trainable=True) x = m(sentence) kernel_initializer = tf.keras.initializers.he_normal(seed=self._seed) # The `Head` passed to adanet.Estimator will apply the softmax activation. logits = tf.layers.dense( x, units=1, activation=None, kernel_initializer=kernel_initializer) # Use a constant complexity measure, since all subnetworks have the same # architecture and hyperparameters. complexity = tf.constant(1) return adanet.Subnetwork( last_layer=x, logits=logits, complexity=complexity, persisted_tensors={}) def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels, iteration_step, summary, previous_ensemble=None): See `adanet.subnetwork.Builder`. learning_rate = tf.train.cosine_decay( learning_rate=self._learning_rate, global_step=iteration_step, decay_steps=self._max_iteration_steps) optimizer = tf.train.MomentumOptimizer(learning_rate, .9) # NOTE: The `adanet.Estimator` increments the global step. return optimizer.minimize(loss=loss, var_list=var_list) def build_mixture_weights_train_op(self, loss, var_list, logits, labels, iteration_step, summary): See `adanet.subnetwork.Builder`. return tf.no_op("mixture_weights_train_op") @property def name(self): See `adanet.subnetwork.Builder`. return self._module_name Explanation: The simple_dnn AdaNet model with default parameters achieves about 80% accuracy. This improvement can be attributed to simple_dnn searching over fully-connected neural networks which have more expressive power than the linear model due to their non-linear activations. The above simple_dnn generator only generates subnetworks that take embedding results from one module. 
We can add diversity to the search space by building subnetworks that take different embeddings, hence might improve the performance. To do that, we need to define a custom adanet.subnetwork.Builder and adanet.subnetwork.Generator. Define a AdaNet model with TensorFlow Hub text embedding modules Creating a new search space for AdaNet to explore is straightforward. There are two abstract classes you need to extend: adanet.subnetwork.Builder adanet.subnetwork.Generator Similar to the tf.estimator.Estimator model_fn, adanet.subnetwork.Builder allows you to define your own TensorFlow graph for creating a neural network, and specify the training operations. Below we define one that applies text embedding using TensorFlow Hub text modules first, and then a fully-connected layer to the sentiment polarity. End of explanation MODULES = [ "https://tfhub.dev/google/nnlm-en-dim50/1", "https://tfhub.dev/google/nnlm-en-dim128/1", "https://tfhub.dev/google/universal-sentence-encoder/1" ] class SimpleNetworkGenerator(adanet.subnetwork.Generator): Generates a `SimpleNetwork` at each iteration. def __init__(self, learning_rate, max_iteration_steps, seed=None): Initializes a `Generator` that builds `SimpleNetwork`. Args: learning_rate: The float learning rate to use. max_iteration_steps: The number of steps per iteration. seed: The random seed. Returns: An instance of `Generator`. self._seed = seed self._dnn_builder_fn = functools.partial( SimpleNetworkBuilder, learning_rate=learning_rate, max_iteration_steps=max_iteration_steps) def generate_candidates(self, previous_ensemble, iteration_number, previous_ensemble_reports, all_reports): See `adanet.subnetwork.Generator`. module_index = iteration_number % len(MODULES) module_name = MODULES[module_index].split("/")[-2] print("generating candidate: %s" % module_name) seed = self._seed # Change the seed according to the iteration so that each subnetwork # learns something different. if seed is not None: seed += iteration_number return [self._dnn_builder_fn(seed=seed, module_name=module_name, module=MODULES[module_index])] Explanation: Next, we extend a adanet.subnetwork.Generator, which defines the search space of candidate SimpleNetworkBuilder to consider including the final network. It can create one or more at each iteration with different parameters, and the AdaNet algorithm will select the candidate that best improves the overall neural network's adanet_loss on the training set. The one below loops through the text embedding modules listed in MODULES and gives it a different random seed at each iteration. 
These modules are selected from TensorFlow Hub text modules: End of explanation #@title Parameters LEARNING_RATE = 0.05 #@param {type:"number"} TRAIN_STEPS = 7500 #@param {type:"integer"} ADANET_ITERATIONS = 3 #@param {type:"integer"} max_iteration_steps = TRAIN_STEPS // ADANET_ITERATIONS estimator = adanet.Estimator( head=head, subnetwork_generator=SimpleNetworkGenerator( learning_rate=LEARNING_RATE, max_iteration_steps=max_iteration_steps, seed=RANDOM_SEED), max_iteration_steps=max_iteration_steps, evaluator=adanet.Evaluator(input_fn=train_input_fn, steps=10), report_materializer=None, adanet_loss_decay=.99, config=make_config("tfhub")) results, _ = tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=TRAIN_STEPS), eval_spec=tf.estimator.EvalSpec(input_fn=predict_test_input_fn, steps=None)) print("Accuracy:", results["accuracy"]) print("Loss:", results["average_loss"]) def ensemble_architecture(result): Extracts the ensemble architecture from evaluation results. architecture = result["architecture/adanet/ensembles"] # The architecture is a serialized Summary proto for TensorBoard. summary_proto = tf.summary.Summary.FromString(architecture) Explanation: With these defined, we pass them into a new adanet.Estimator: End of explanation predict_input_fn = tf.estimator.inputs.pandas_input_fn( test_df.iloc[:10], test_df["polarity"].iloc[:10], shuffle=False) predictions = estimator.predict(input_fn=predict_input_fn) for i, val in enumerate(predictions): predicted_class = val['class_ids'][0] prediction_confidence = val['probabilities'][predicted_class] * 100 print('Actual text: ' + test_df["sentence"][i]) print('Predicted class: %s, confidence: %s%%' % (predicted_class, round(prediction_confidence, 3))) Explanation: Our SimpleNetworkGenerator code achieves about <b>87% accuracy </b>, which is almost <b>7%</b> higher than with using just one network directly. You can see how the performance improves step by step: | Linear Baseline | Adanet + simple_dnn | Adanet + TensorFlow Hub | | --- |:---:| ---:| | 78% | 80%| 87% | Generating predictions on our trained model Now that we've got a trained model, we can use it to generate predictions on new input. To keep things simple, here we'll generate predictions on our estimator using the first 10 examples from the test set. End of explanation
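If you want a broader view than the ten spot checks above — an optional extra rather than part of the original tutorial — you can run the whole test set through the final estimator and tabulate the results by hand. This assumes the ordering of estimator.predict matches test_df (which it should, since predict_test_input_fn was built with shuffle=False), and it takes a little while because it scores all 25,000 test reviews.

```python
import numpy as np

# Score the full test set with the trained AdaNet ensemble.
test_predictions = estimator.predict(input_fn=predict_test_input_fn)
predicted_classes = np.array([p["class_ids"][0] for p in test_predictions])
actual = test_df["polarity"].values

print("Manual test accuracy: %.3f" % np.mean(predicted_classes == actual))

# A 2x2 confusion matrix: rows are actual polarity, columns are predicted.
confusion = np.zeros((2, 2), dtype=int)
for a, p in zip(actual, predicted_classes):
    confusion[a, p] += 1
print(confusion)
```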
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook shows how BigBang can help you explore a mailing list archive. First, use this IPython magic to tell the notebook to display matplotlib graphics inline. This is a nice way to display results. Step1: Import the BigBang modules as needed. These should be in your Python environment if you've installed BigBang correctly. Step2: Also, let's import a number of other dependencies we'll use later. Step3: Now let's load the data for analysis. Step4: This variable is for the range of days used in computing rolling averages. Step5: For each of the mailing lists we are looking at, plot the rolling average of number of emails sent per day. Step6: Now, let's see Step7: This might be useful for seeing the distribution (does the top message sender dominate?) or for identifying key participants to talk to. Many mailing lists will have some duplicate senders Step8: The dark blue diagonal is comparing an entry to itself (we know the distance is zero in that case), but a few other dark blue patches suggest there are duplicates even using this most naive measure. Below is a variant of the visualization for inspecting the particular apparent duplicates. Step9: For this still naive measure (edit distance on a normalized string), it appears that there are many duplicates in the &lt;10 range, but that above that the edit distance of short email addresses at common domain names can take over. Step10: We can create the same color plot with the consolidated dataframe to see how the distribution has changed. Step11: Of course, there are still some duplicates, mostly people who are using the same name, but with a different email address at an unrelated domain name. How does our consolidation affect the graph of distribution of senders?
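Because the notebook below relies on the long-deprecated pd.rolling_mean, a hedged sketch of the same per-day rolling average with the current pandas API may help; the toy Series here is only a stand-in for the activity totals produced by Archive.get_activity().

```python
import pandas as pd

# Toy daily message counts standing in for activity.sum(1) in the notebook.
daily_totals = pd.Series(
    [5, 3, 8, 2, 7, 4, 6],
    index=pd.date_range("2014-01-01", periods=7, freq="D"),
)

# Modern replacement for pd.rolling_mean(ta, window): Series.rolling(...).mean()
smoothed = daily_totals.rolling(window=3, min_periods=1).mean()
print(smoothed)
```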
Python Code: %matplotlib inline Explanation: This notebook shows how BigBang can help you explore a mailing list archive. First, use this IPython magic to tell the notebook to display matplotlib graphics inline. This is a nice way to display results. End of explanation import bigbang.mailman as mailman import bigbang.graph as graph import bigbang.process as process from bigbang.parse import get_date #from bigbang.functions import * from bigbang.archive import Archive Explanation: Import the BigBang modules as needed. These should be in your Python environment if you've installed BigBang correctly. End of explanation import pandas as pd import datetime import matplotlib.pyplot as plt import numpy as np import math import pytz import pickle import os pd.options.display.mpl_style = 'default' # pandas has a set of preferred graph formatting options Explanation: Also, let's import a number of other dependencies we'll use later. End of explanation urls = ["ipython-dev", "ipython-user"] archives = [Archive(url,archive_dir="../archives",mbox=True) for url in urls] activities = [arx.get_activity() for arx in archives] archives[0].data Explanation: Now let's load the data for analysis. End of explanation window = 100 Explanation: This variable is for the range of days used in computing rolling averages. End of explanation plt.figure(figsize=(12.5, 7.5)) for i, activity in enumerate(activities): colors = 'rgbkm' ta = activity.sum(1) rmta = pd.rolling_mean(ta,window) rmtadna = rmta.dropna() plt.plot_date(rmtadna.index, rmtadna.values, colors[i], label=mailman.get_list_name(urls[i]) + ' activity',xdate=True) plt.legend() plt.savefig("activites-marked.png") plt.show() arx.data Explanation: For each of the mailing lists we are looking at, plot the rolling average of number of emails sent per day. End of explanation a = activities[0] # for the first mailing list ta = a.sum(0) # sum along the first axis ta.sort() ta[-10:].plot(kind='barh') Explanation: Now, let's see: who are the authors of the most messages to one particular list? End of explanation import Levenshtein distancedf = process.matricize(a.columns[:100], lambda a,b: Levenshtein.distance(a,b)) # calculate the edit distance between the two From titles df = distancedf.astype(int) # specify that the values in the matrix are integers fig = plt.figure(figsize=(18, 18)) plt.pcolor(df) #plt.yticks(np.arange(0.5, len(df.index), 1), df.index) # these lines would show labels, but that gets messy #plt.xticks(np.arange(0.5, len(df.columns), 1), df.columns) plt.show() Explanation: This might be useful for seeing the distribution (does the top message sender dominate?) or for identifying key participants to talk to. Many mailing lists will have some duplicate senders: individuals who use multiple email addresses or are recorded as different senders when using the same email address. We want to identify those potential duplicates in order to get a more accurate representation of the distribution of senders. To begin with, let's do a naive calculation of the similarity of the From strings, based on the Levenshtein distance. This can take a long time for a large matrix, so we will truncate it for purposes of demonstration. 
End of explanation levdf = process.sorted_lev(a) # creates a slightly more nuanced edit distance matrix # and sorts by rows/columns that have the best candidates levdf_corner = levdf.iloc[:25,:25] # just take the top 25 fig = plt.figure(figsize=(15, 12)) plt.pcolor(levdf_corner) plt.yticks(np.arange(0.5, len(levdf_corner.index), 1), levdf_corner.index) plt.xticks(np.arange(0.5, len(levdf_corner.columns), 1), levdf_corner.columns, rotation='vertical') plt.colorbar() plt.show() Explanation: The dark blue diagonal is comparing an entry to itself (we know the distance is zero in that case), but a few other dark blue patches suggest there are duplicates even using this most naive measure. Below is a variant of the visualization for inspecting the particular apparent duplicates. End of explanation consolidates = [] # gather pairs of names which have a distance of less than 10 for col in levdf.columns: for index, value in levdf.loc[levdf[col] < 10, col].iteritems(): if index != col: # the name shouldn't be a pair for itself consolidates.append((col, index)) print str(len(consolidates)) + ' candidates for consolidation.' c = process.consolidate_senders_activity(a, consolidates) print 'We removed: ' + str(len(a.columns) - len(c.columns)) + ' columns.' Explanation: For this still naive measure (edit distance on a normalized string), it appears that there are many duplicates in the &lt;10 range, but that above that the edit distance of short email addresses at common domain names can take over. End of explanation lev_c = process.sorted_lev(c) levc_corner = lev_c.iloc[:25,:25] fig = plt.figure(figsize=(15, 12)) plt.pcolor(levc_corner) plt.yticks(np.arange(0.5, len(levc_corner.index), 1), levc_corner.index) plt.xticks(np.arange(0.5, len(levc_corner.columns), 1), levc_corner.columns, rotation='vertical') plt.colorbar() plt.show() Explanation: We can create the same color plot with the consolidated dataframe to see how the distribution has changed. End of explanation fig, axes = plt.subplots(nrows=2, figsize=(15, 12)) ta = a.sum(0) # sum along the first axis ta.sort() ta[-20:].plot(kind='barh',ax=axes[0], title='Before consolidation') tc = c.sum(0) tc.sort() tc[-20:].plot(kind='barh',ax=axes[1], title='After consolidation') plt.show() Explanation: Of course, there are still some duplicates, mostly people who are using the same name, but with a different email address at an unrelated domain name. How does our consolidation affect the graph of distribution of senders? End of explanation
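To make the consolidation step concrete independently of BigBang's own helpers, here is a toy sketch under stated assumptions: a hand-written mapping from duplicate "From" strings to one canonical identity, with the activity columns summed per identity. The names and numbers are invented purely for illustration.

```python
import pandas as pd

# Toy activity table: rows are days, columns are sender "From" strings.
activity = pd.DataFrame({
    'Jane <jane@a.org>':     [1, 0, 2],
    'Jane Doe <jane@b.org>': [0, 3, 1],
    'Bob <bob@c.org>':       [2, 2, 2],
})

# Hand-made mapping from duplicate senders to a canonical identity.
canonical = {
    'Jane <jane@a.org>': 'jane',
    'Jane Doe <jane@b.org>': 'jane',
    'Bob <bob@c.org>': 'bob',
}

# Group the sender axis through the mapping and sum each group's activity.
consolidated = activity.T.groupby(canonical).sum().T
print(consolidated)
```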
Given the following text description, write Python code to implement the functionality described below step by step Description: If we had wanted to manage the same result using the StatsModels.formula.api, we should have typed the following Step1: The first parameter is the intercept (y0), i.e., the value on the y-axis when x = 0. The second is the slope. These two are the most important parameters. y = 9.10*X - 34.67 is the formula of the fitted line, but note that it is only valid over a limited data range: the input X lies between 3.56 and 8.7799, so the formula applies only within that range. Step2: The resulting scatterplot indicates that the residuals show some of the problems we previously indicated as a warning that something is not going well with your regression analysis. First, there are a few points lying outside the band delimited by the two dotted lines at normalized residual values −3 and +3 (a range that should hypothetically cover 99.7% of values if the residuals have a normal distribution). These are surely influential points with large errors and they can actually make the entire linear regression under-perform.
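To illustrate the warning above about the limited validity range of the fitted line, here is a hedged sketch of a prediction helper; the intercept, slope and RM range are taken from the description and should be treated as illustrative numbers, not authoritative model output.

```python
import numpy as np

def predict_target(rm, intercept=-34.67, slope=9.10, valid_range=(3.56, 8.78)):
    # Evaluate target = intercept + slope * RM, warning when inputs fall
    # outside the RM range the model was fitted on.
    rm = np.asarray(rm, dtype=float)
    if np.any((rm < valid_range[0]) | (rm > valid_range[1])):
        print("warning: some RM values lie outside the fitted range", valid_range)
    return intercept + slope * rm

print(predict_target([4.0, 6.5, 9.2]))
```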
Python Code: fitted_model.summary() print (fitted_model.params) betas = np.array(fitted_model.params) fitted_values = fitted_model.predict(X) Explanation: If we had wanted to manage the same result using the StatsModels.formula.api, we should have typed the following: linear_regression = smf.ols(formula='target ~ RM', data=dataset) fitted_model = linear_regression.fit() The coefficient of determinationLet's start from the first table of results. The first table is divided into two columns. The first one contains a description of the fitted model:Dep. Variable: It just reminds you what the target variable wasModel: Another reminder of the model that you have fitted, the OLS is ordinary least squares, another way to refer to linear regressionMethod: The parameters fitting method (in this case least squares, the classical computation method)No. Observations: The number of observations that have been usedDF Residuals: The degrees of freedom of the residuals, which is the number of observations minus the number of parameters DF Model: The number of estimated parameters in the model (excluding the constant term from the count)The second table gives a more interesting picture, focusing how good the fit of the linear regression model is and pointing out any possible problems with the model:R-squared: This is the coefficient of determination, a measure of how well the regression does with respect to a simple mean.Adj. R-squared: This is the coefficient of determination adjusted based on the number of parameters in a model and the number of observations that helped build it.F-statistic: This is a measure telling you if, from a statistical point of view, all your coefficients, apart from the bias and taken together, are different from zero. In simple words, it tells you if your regression is really better than a simple average.Prob (F-statistic): This is the probability that you got that F-statistic just by lucky chance due to the observations that you have used (such a probability is actually called the p-value of F-statistic). If it is low enough you can be confident that your regression is really better than a simple mean. Usually in statistics and science a test probability has to be equal or lower than 0.05 (a conventional criterion of statistical significance) for having such a confidence.AIC: This is the Akaike Information Criterion. AIC is a score that evaluates the model based on the number of observations and the complexity of the model itself. The lesser the AIC score, the better. It is very useful for comparing different models and for statistical variable selection.BIC: This is the Bayesian Information Criterion. It works as AIC, but it presents a higher penalty for models with more parameters.Most of these statistics make sense when we are dealing with more than one predictor variable, so they will be discussed in the next chapter. Thus, for the moment, as we are working with a simple linear regression, the two measures that are worth examining closely are F-statistic and R-squared. F-statistic is actually a test that doesn't tell you too much if you have enough observations and you can count on a minimally correlated predictor variable. Usually it shouldn't be much of a concern in a data science project.R-squared is instead much more interesting because it tells you how much better your regression model is in comparison to a single mean. It does so by providing you with a percentage of the unexplained variance of a mean as a predictor that actually your model was able to explain. 
Meaning and significance of coefficientsThe second output table informs us about the coefficients and provides us with a series of tests. These tests can make us confident that we have not been fooled by a few extreme observations in the foundations of our analysis or by some other problem:coef: The estimated coefficientstd err: The standard error of the estimate of the coefficient; the larger it is, the more uncertain the estimation of the coefficientt: The t-statistic value, a measure indicating whether the coefficient true value is different from zeroP > |t|: The p-value indicating the probability that the coefficient is different from zero just by chance[95.0% Conf. Interval]: The lower and upper values of the coefficient, considering 95% of all the chances of having different observations and so different estimated coefficientsFrom a data science viewpoint, t-tests and confidence bounds are not very useful because we are mostly interested in verifying whether our regression is working while predicting answer variables. Consequently, we will focus just on the coef value (the estimated coefficients) and on their standard error.The coefficients are the most important output that we can obtain from our regression model because they allow us to re-create the weighted summation that can predict our outcomes. End of explanation betas dataset['RM'].mean() dataset['RM'].min() dataset['RM'].max() def standardize(x): return (x-np.mean(x))/np.std(x) Explanation: 第一个参数是y0值,即x=0时,y轴上的值。 第二个是斜率 这两个是最重要的参数 y=9.10*X-34.67è¿™æ˜¯æ‹Ÿåˆçš„æ›²çº¿å ¬å¼ã€‚ä½†è¦æ³¨æ„å®ƒåªå¯¹ä¸€å®šçš„æ•°æ®èŒƒå›´æœ‰æ•ˆï¼Œè¾“å ¥çš„X在3.56到8.7799èŒƒå›´å† ï¼Œæ‰€ä»¥å ¬å¼åªé€‚ç”¨äºŽè¿™ä¸ªèŒƒå›´ End of explanation x_range=[dataset['RM'].min(),dataset['RM'].max()] residuals = dataset['target']-fitted_values normalized_residuals = standardize(residuals) residual_scatter_plot = plt.plot(dataset['RM'], normalized_residuals,'bp') mean_residual = plt.plot([int(x_range[0]),round(x_range[1],0)], [0,0], '-', color='red', linewidth=2) upper_bound = plt.plot([int(x_range[0]),round(x_range[1],0)], [3,3], '--', color='red', linewidth=1) lower_bound = plt.plot([int(x_range[0]),round(x_range[1],0)], [-3,-3], '--', color='red', linewidth=1) plt.grid() plt.show() RM = 5 Xp = np.array([1,RM]) print ("Our model predicts if RM = %01.f the answer value is %0.1f" % (RM, fitted_model.predict(Xp))) x_range = [dataset['RM'].min(),dataset['RM'].max()] y_range = [dataset['target'].min(),dataset['target'].max()] scatter_plot = dataset.plot(kind='scatter', x='RM', y='target',xlim=x_range, ylim=y_range) regression_line = scatter_plot.plot(dataset['RM'], fitted_values,'-', color='orange', linewidth=1) meanY = scatter_plot.plot(x_range,[dataset['target'].mean(),dataset['target'].mean()], '--',color='red', linewidth=1) meanX =scatter_plot.plot([dataset['RM'].mean(),dataset['RM'].mean()], y_range, '--', color='red', linewidth=1) plt.show() Explanation: The resulting scatterplot indicates that the residuals show some of the problems we previously indicated as a warning that something is not going well with your regression analysis. First, there are a few points lying outside the band delimited by the two dotted lines at normalized residual values −3 and +3 (a range that should hypothetically cover 99.7% of values if the residuals have a normal distribution). These are surely influential points with large errors and they can actually make the entire linear regression under-perform. 
We will talk about possible solutions to this problem when we discuss outliers in the next chapter. End of explanation
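As a follow-up to the ±3 band discussed above, here is a small hedged sketch of flagging those influential observations; it assumes dataset, fitted_values and the standardize helper from the preceding cells are still in scope.

```python
import numpy as np

# Flag observations whose standardized residual falls outside the ±3 band.
residuals = dataset['target'] - fitted_values
normalized_residuals = standardize(residuals)
outliers = np.abs(normalized_residuals) > 3
print("flagged observations:", int(outliers.sum()))
print(dataset.loc[outliers, ['RM', 'target']].head())
```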
Given the following text description, write Python code to implement the functionality described below step by step Description: OpenPIV tutorial 1 In this tutorial we read a pair of images and perform the PIV using a standard algorithm. At the end, the velocity vector field is plotted. Step1: Reading images Step2: Processing In this tutorial, we are going to use the extended_search_area_piv function, which is a standard PIV cross-correlation algorithm. This function allows the search area (search_area_size) in the second frame to be larger than the interrogation window in the first frame (window_size). Also, the search areas can overlap (overlap). The extended_search_area_piv function will return three arrays. 1. The u component of the velocity vectors 2. The v component of the velocity vectors 3. The signal-to-noise ratio (S2N) of the cross-correlation map of each vector. The higher the S2N of a vector, the higher the probability that its magnitude and direction are correct. Step3: The function get_coordinates finds the center of each interrogation window. This will be useful later on when plotting the vector field. Step4: Post-processing Strictly speaking, we are ready to plot the vector field. But before we do that, we can perform some convenient post-processing. To start, let's use the function sig2noise_val to get a mask indicating which vectors have a minimum amount of S2N. Vectors below a certain threshold are substituted by NaN. If you are not sure about which threshold value to use, try taking a look at the S2N histogram with Step5: Another useful function is replace_outliers, which will find outlier vectors, and substitute them by an average of neighboring vectors. The larger the kernel_size, the larger the considered neighborhood. This function uses an iterative image inpainting algorithm. The amount of iterations can be chosen via max_iter. Step6: Next, we are going to convert pixels to millimeters, and flip the coordinate system such that the origin becomes the bottom left corner of the image. Step7: Results The function save is used to save the vector field to an ASCII tabular file. The coordinates and S2N mask are also saved. Step8: Finally, the vector field can be plotted with display_vector_field. Vectors with S2N below the threshold are displayed in red.
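One hedged way to pick the signal-to-noise threshold mentioned in the post-processing step is to inspect the distribution and cut off the weakest few percent of vectors; the sketch below assumes sig2noise is the third array returned by extended_search_area_piv, and the 5% cut is an arbitrary illustrative choice.

```python
import numpy as np
import matplotlib.pyplot as plt

# Inspect the S2N distribution and derive a tentative threshold from it.
plt.hist(sig2noise.flatten(), bins=50)
plt.xlabel('signal-to-noise ratio')
plt.show()

threshold = np.percentile(sig2noise.flatten(), 5)  # drop the weakest ~5%
print('tentative sig2noise threshold:', threshold)
```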
Python Code: from openpiv import tools, pyprocess, validation, filters, scaling import numpy as np import matplotlib.pyplot as plt %matplotlib inline import imageio Explanation: OpenPIV tutorial 1 In this tutorial we read a pair of images and perform the PIV using a standard algorithm. At the end, the velocity vector field is plotted. End of explanation frame_a = tools.imread( '../../examples/test1/exp1_001_a.bmp' ) frame_b = tools.imread( '../../examples/test1/exp1_001_b.bmp' ) fig,ax = plt.subplots(1,2,figsize=(12,10)) ax[0].imshow(frame_a,cmap=plt.cm.gray); ax[1].imshow(frame_b,cmap=plt.cm.gray); Explanation: Reading images: The images can be read using the imread function, and diplayed with matplotlib. End of explanation winsize = 32 # pixels, interrogation window size in frame A searchsize = 38 # pixels, search area size in frame B overlap = 17 # pixels, 50% overlap dt = 0.02 # sec, time interval between the two frames u0, v0, sig2noise = pyprocess.extended_search_area_piv( frame_a.astype(np.int32), frame_b.astype(np.int32), window_size=winsize, overlap=overlap, dt=dt, search_area_size=searchsize, sig2noise_method='peak2peak', ) Explanation: Processing In this tutorial, we are going to use the extended_search_area_piv function, wich is a standard PIV cross-correlation algorithm. This function allows the search area (search_area_size) in the second frame to be larger than the interrogation window in the first frame (window_size). Also, the search areas can overlap (overlap). The extended_search_area_piv function will return three arrays. 1. The u component of the velocity vectors 2. The v component of the velocity vectors 3. The signal-to-noise ratio (S2N) of the cross-correlation map of each vector. The higher the S2N of a vector, the higher the probability that its magnitude and direction are correct. End of explanation x, y = pyprocess.get_coordinates( image_size=frame_a.shape, search_area_size=searchsize, overlap=overlap, ) Explanation: The function get_coordinates finds the center of each interrogation window. This will be useful later on when plotting the vector field. End of explanation u1, v1, mask = validation.sig2noise_val( u0, v0, sig2noise, threshold = 1.05, ) Explanation: Post-processing Strictly speaking, we are ready to plot the vector field. But before we do that, we can perform some convenient pos-processing. To start, lets use the function sig2noise_val to get a mask indicating which vectors have a minimum amount of S2N. Vectors below a certain threshold are substituted by NaN. If you are not sure about which threshold value to use, try taking a look at the S2N histogram with: plt.hist(sig2noise.flatten()) End of explanation u2, v2 = filters.replace_outliers( u1, v1, method='localmean', max_iter=3, kernel_size=3, ) Explanation: Another useful function is replace_outliers, which will find outlier vectors, and substitute them by an average of neighboring vectors. The larger the kernel_size the larger is the considered neighborhood. This function uses an iterative image inpainting algorithm. The amount of iterations can be chosen via max_iter. 
End of explanation # convert x,y to mm # convert u,v to mm/sec x, y, u3, v3 = scaling.uniform( x, y, u2, v2, scaling_factor = 96.52, # 96.52 pixels/millimeter ) # 0,0 shall be bottom left, positive rotation rate is counterclockwise x, y, u3, v3 = tools.transform_coordinates(x, y, u3, v3) Explanation: Next, we are going to convert pixels to millimeters, and flip the coordinate system such that the origin becomes the bottom left corner of the image. End of explanation tools.save(x, y, u3, v3, mask, 'exp1_001.txt' ) Explanation: Results The function save is used to save the vector field to a ASCII tabular file. The coordinates and S2N mask are also saved. End of explanation fig, ax = plt.subplots(figsize=(8,8)) tools.display_vector_field( 'exp1_001.txt', ax=ax, scaling_factor=96.52, scale=50, # scale defines here the arrow length width=0.0035, # width is the thickness of the arrow on_img=True, # overlay on the image image_name='../../examples/test1/exp1_001_a.bmp', ); Explanation: Finally, the vector field can be plotted with display_vector_field. Vectors with S2N bellow the threshold are displayed in red. End of explanation
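As an optional follow-up sketch (not part of the original tutorial), a scalar speed map can complement the arrow plot; it assumes the scaled components u3 and v3 from the cells above are available.

```python
import numpy as np
import matplotlib.pyplot as plt

speed = np.hypot(u3, v3)  # velocity magnitude in mm/s after scaling

plt.figure(figsize=(6, 6))
plt.imshow(speed, origin='lower')
plt.colorbar(label='speed (mm/s)')
plt.show()
```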
Given the following text description, write Python code to implement the functionality described below step by step Description: In this post, we will build a couple different quiver plots using Python and matplotlib. A quiver plot is a type of 2D plot that shows vector lines as arrows. Quiver plots are useful in electrical engineering to visualize electrical potential and valuable in mechanical engineering to show stress gradients. Install matplotlib and numpy To create the quiver plots, we'll use Python, matplotlib, numpy and a Jupyter notebook. I recommend undergraduate engineers use the Anaconda distribution of Python, which comes with matplotlib, numpy and Jupyter notebooks pre-installed installed. If matplotlib, numpy and Jupyter are not available, these packages can be installed with conda or pip. To install, open the Anaconda Prompt or a terminal and type Step1: Quiver plot with one arrow Let's buid a simple quiver plot that contains one arrow to see how matplotlib's ax.quiver() method works. The ax.quiver() method takes four positional arguments Step2: We see a plot with one arrow pointing up and to the right. Quiver plot with two arrows Now let's add a second arrow to our quiver plot by passing in two starting points and two arrow directions. We'll keep our first arrow- starting position at the origin 0,0 and pointing up and to the right, direction 1,1. We'll define the second arrow with a starting position of -0.5,0.5 which points straight down (in the 0,-1 direction). An additional keyword argument to add the ax.quiver() method is scale=5. This keyword argument scales the arrow lengths, so the arrows are longer and easier to see on the quiver plot. To see the start and end of both arrows, we'll set the axis limits between -1.5 and 1.5 using the ax.axis() method. ax.axis() accepts a list of axis limits in the form [xmin, xmax, ymin, ymax]. Step3: We see two arrows on the quiver plot. One arrow points to the upper right and the other arrow points straight down. Quiver plot using np.meshgrid() Two arrows are great, but to create a whole 2D surface worth of arrows, we'll utilize numpy's meshgrid() function. We need to build a set of arrays that denote the x and y starting positions of each quiver arrow on the plot. We will call our quiver arrow starting position arrays X and Y. We can use the x,y arrow starting positions to define the x and y components of each quiver arrow direction. We' ll call the quiver arrow direction arrays u and v. On this quiver plot, we'll define the quiver arrow direction based on the quiver arrow starting point using Step4: Now we can build the quiver plot using matplotlib's ax.quiver() method. Remember, the ax.quiver() method call takes four positional arguments Step5: Now let's build another quiver plot with the $\hat{i}$ and $\hat{j}$ components (the horizontal and vertical components) of vector, $\vec{F}$ are dependant upon the arrow starting point $x,y$ according to the function Step6: Quiver plot containing a gradient Next, let's build a quiver plot using the gradient function. The gradient function has the form Step7: Quiver plot with four vortices Now let's build a quiver plot that contains four vortices. The function $\vec{F}$ which describes the 2D vortex field is Step8: Cool. Nice and swirly. Quiver plots with color Next, let's add some color to the quiver plots. The ax.quiver() method has an optional fifth positional argument that specifies the quiver arrow color. 
The quiver arrow color argument needs to have the same dimensions as the position and direction arrays. Step9: Let's add some color to the gradient quiver plot Step10: Finally, we'll add some color to the vortex plot
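Before the colored examples that follow, here is a compact hedged sketch of the idea: build a color array with the same shape as the direction arrays (here, the arrow magnitude) and pass it as the optional fifth positional argument of ax.quiver().

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 2, 11), np.linspace(0, 2, 11))
u = np.cos(x) * y
v = np.sin(y) * y
colors = np.hypot(u, v)   # same shape as u and v

fig, ax = plt.subplots()
ax.quiver(x, y, u, v, colors)
plt.show()
```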
Python Code: import matplotlib.pyplot as plt import numpy as np #add %matplotlib inline if using a Jupyter notebook, remove if using a .py script %matplotlib inline Explanation: In this post, we will build a couple different quiver plots using Python and matplotlib. A quiver plot is a type of 2D plot that shows vector lines as arrows. Quiver plots are useful in electrical engineering to visualize electrical potential and valuable in mechanical engineering to show stress gradients. Install matplotlib and numpy To create the quiver plots, we'll use Python, matplotlib, numpy and a Jupyter notebook. I recommend undergraduate engineers use the Anaconda distribution of Python, which comes with matplotlib, numpy and Jupyter notebooks pre-installed installed. If matplotlib, numpy and Jupyter are not available, these packages can be installed with conda or pip. To install, open the Anaconda Prompt or a terminal and type: ```text conda install matplotlib numpy jupyter ``` or text $ pip install matplotlib numpy jupyter To start a Jupyter notebook, open the Anaconda Prompt and type: ```text jupyter notebook ``` Jupyter notebooks can also be started on at a command prompt with: text $ jupyter notebook Import matplotlib and numpy At the top of the Jupyter notebook, we need to import the required packages to build our quiver plots: matplotlib numpy The %matplotlib inline magic command is added so that we can see our plots right in the Jupyter notebook. End of explanation fig, ax = plt.subplots() x_pos = 0 y_pos = 0 x_direct = 1 y_direct = 1 ax.quiver(x_pos,y_pos,x_direct,y_direct) plt.show() Explanation: Quiver plot with one arrow Let's buid a simple quiver plot that contains one arrow to see how matplotlib's ax.quiver() method works. The ax.quiver() method takes four positional arguments: ax.quiver(x_pos, y_pos, x_direct, y_direct) Where x_pos and y_pos are the arrow starting positions and x_direct, y_direct are the arrow directions. Let's build our first plot to contain one quiver arrow at the starting point x_pos = 0, y_pos = 0. We'll define this one quiver arrow's direction as pointing up and to the right x_direct = 1, y_direct = 1. End of explanation fig, ax = plt.subplots() x_pos = [0, -0.5] y_pos = [0, 0.5] x_direct = [1, 0] y_direct = [1, -1] ax.quiver(x_pos,y_pos,x_direct,y_direct, scale=5) ax.axis([-1.5,1.5,-1.5,1.5]) plt.show() Explanation: We see a plot with one arrow pointing up and to the right. Quiver plot with two arrows Now let's add a second arrow to our quiver plot by passing in two starting points and two arrow directions. We'll keep our first arrow- starting position at the origin 0,0 and pointing up and to the right, direction 1,1. We'll define the second arrow with a starting position of -0.5,0.5 which points straight down (in the 0,-1 direction). An additional keyword argument to add the ax.quiver() method is scale=5. This keyword argument scales the arrow lengths, so the arrows are longer and easier to see on the quiver plot. To see the start and end of both arrows, we'll set the axis limits between -1.5 and 1.5 using the ax.axis() method. ax.axis() accepts a list of axis limits in the form [xmin, xmax, ymin, ymax]. End of explanation x = np.arange(0,2.2,0.2) y = np.arange(0,2.2,0.2) X, Y = np.meshgrid(x, y) u = np.cos(X)*Y v = np.sin(y)*Y Explanation: We see two arrows on the quiver plot. One arrow points to the upper right and the other arrow points straight down. 
Quiver plot using np.meshgrid() Two arrows are great, but to create a whole 2D surface worth of arrows, we'll utilize numpy's meshgrid() function. We need to build a set of arrays that denote the x and y starting positions of each quiver arrow on the plot. We will call our quiver arrow starting position arrays X and Y. We can use the x,y arrow starting positions to define the x and y components of each quiver arrow direction. We' ll call the quiver arrow direction arrays u and v. On this quiver plot, we'll define the quiver arrow direction based on the quiver arrow starting point using: $$ x_{direction} = cos(x_{starting \ point}) $$ $$ y_{direction} = sin(y_{starting \ point}) $$ End of explanation fig, ax = plt.subplots(figsize=(7,7)) ax.quiver(X,Y,u,v) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.axis([-0.2, 2.3, -0.2, 2.3]) ax.set_aspect('equal') plt.show() Explanation: Now we can build the quiver plot using matplotlib's ax.quiver() method. Remember, the ax.quiver() method call takes four positional arguments: text ax.quiver(x_pos, y_pos, x_direct, y_direct) This time x_pos and y_pos are 2D arrays which contain the starting positions of the arrows and x_direct, y_direct are 2D arrays which contain the arrow directions. The commands ax.xaxis.set_ticks([]) and ax.yaxis.set_ticks([]) removes the tick marks from the axis and ax.set_aspect('equal') sets the aspect ratio of the plot to 1:1. End of explanation x = np.arange(-5,6,1) y = np.arange(-5,6,1) X, Y = np.meshgrid(x, y) u, v = X/5, -Y/5 fig, ax = plt.subplots(figsize=(7,7)) ax.quiver(X,Y,u,v) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.axis([-6, 6, -6, 6]) ax.set_aspect('equal') plt.show() Explanation: Now let's build another quiver plot with the $\hat{i}$ and $\hat{j}$ components (the horizontal and vertical components) of vector, $\vec{F}$ are dependant upon the arrow starting point $x,y$ according to the function: $$ \vec{F} = \frac{x}{5} \ \hat{i} - \frac{y}{5} \ \hat{j} $$ Again we'll use numpy's np.meshgrid() function to build the arrow starting position arrays, then apply our function $\vec{F}$ to the $x$ and $y$ arrow starting point arrays. End of explanation x = np.arange(-2,2.2,0.2) y = np.arange(-2,2.2,0.2) X, Y = np.meshgrid(x, y) z = X*np.exp(-X**2 -Y**2) dx, dy = np.gradient(z) fig, ax = plt.subplots(figsize=(7,7)) ax.quiver(X,Y,dx,dy) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.set_aspect('equal') plt.show() Explanation: Quiver plot containing a gradient Next, let's build a quiver plot using the gradient function. The gradient function has the form: $$ z = xe^{-x^2-y^2} $$ We'll use numpy's np.gradient() function to apply the gradient function based on every arrow's x,y starting position. End of explanation x = np.arange(0,2*np.pi+2*np.pi/20,2*np.pi/20) y = np.arange(0,2*np.pi+2*np.pi/20,2*np.pi/20) X,Y = np.meshgrid(x,y) u = np.sin(X)*np.cos(Y) v = -np.cos(X)*np.sin(Y) fig, ax = plt.subplots(figsize=(7,7)) ax.quiver(X,Y,u,v) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.axis([0,2*np.pi,0,2*np.pi]) ax.set_aspect('equal') plt.show() Explanation: Quiver plot with four vortices Now let's build a quiver plot that contains four vortices. The function $\vec{F}$ which describes the 2D vortex field is: $$ \vec{F} = sin(x)cos(y) \ \hat{i} -cos(x)sin(y) \ \hat{j} $$ We'll build these direction arrays using numpy and the np.meshgrid() function. 
End of explanation x = np.arange(0,2.2,0.2) y = np.arange(0,2.2,0.2) X, Y = np.meshgrid(x, y) u = np.cos(X)*Y v = np.sin(y)*Y n = -2 color_array = np.sqrt(((v-n)/2)**2 + ((u-n)/2)**2) fig, ax = plt.subplots(figsize=(7,7)) ax.quiver(X,Y,u,v, color_array, alpha=0.8) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.axis([-0.2, 2.3, -0.2, 2.3]) ax.set_aspect('equal') plt.show() Explanation: Cool. Nice and swirly. Quiver plots with color Next, let's add some color to the quiver plots. The ax.quiver() method has an optional fifth positional argument that specifies the quiver arrow color. The quiver arrow color argument needs to have the same dimensions as the position and direction arrays. End of explanation x = np.arange(-2,2.2,0.2) y = np.arange(-2,2.2,0.2) X, Y = np.meshgrid(x, y) z = X*np.exp(-X**2 -Y**2) dx, dy = np.gradient(z) n = -2 color_array = np.sqrt(((dx-n)/2)**2 + ((dy-n)/2)**2) fig, ax = plt.subplots(figsize=(7,7)) ax.quiver(X,Y,dx,dy,color_array) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.set_aspect('equal') plt.show() Explanation: Let's add some color to the gradient quiver plot: End of explanation x = np.arange(0,2*np.pi+2*np.pi/20,2*np.pi/20) y = np.arange(0,2*np.pi+2*np.pi/20,2*np.pi/20) X,Y = np.meshgrid(x,y) u = np.sin(X)*np.cos(Y) v = -np.cos(X)*np.sin(Y) n = -1 color_array = np.sqrt(((dx-n)/2)**2 + ((dy-n)/2)**2) fig, ax = plt.subplots(figsize=(7,7)) ax.quiver(X,Y,u,v,color_array, scale=17) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.axis([0,2*np.pi,0,2*np.pi]) ax.set_aspect('equal') plt.show() Explanation: Finally, we'll add some color to the vortex plot: End of explanation
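A small follow-up sketch, not part of the original post: when a color array is supplied, the Quiver object returned by ax.quiver() is a mappable, so a colorbar can be attached to document what the colors encode.

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 21), np.linspace(0, 2 * np.pi, 21))
u = np.sin(x) * np.cos(y)
v = -np.cos(x) * np.sin(y)
magnitude = np.hypot(u, v)

fig, ax = plt.subplots(figsize=(7, 7))
q = ax.quiver(x, y, u, v, magnitude)
fig.colorbar(q, ax=ax, label='arrow magnitude')
ax.set_aspect('equal')
plt.show()
```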
Given the following text description, write Python code to implement the functionality described below step by step Description: 1. Understand Data Step1: 1.1 Conclusion We have 10 886 observations and 12 features. We don't have missing value. Most value is integer, few of them float and object (should be a date). Let's check more detail each features. Step2: 1.2 Conlusion count is target value and it changes between from 1 to 977. holiday, workingday is a binary variables (0 or 1). season, weather - categorical variable (1,2,3 or 4). registered and casual there's only in training set. rest features is numerical variables. Let's try to visualise distribution each features. Step3: 1.3 conclusion atemp have some spikes on 10,20,30 (more rounded number). count the most value is in the first bucket and have big tail on the second right. We can try transform using log (it can be helpful). weather looks like corelated to count of rows. season all buckets look comparatively equal. windspeed there're some missing buckets between 0 and 10, and 19. Maybe it's some problem with data (human error). Let's try transform count using log. Step4: 2. Prepare Data Correlation matrix Importance features Feature engineering Feature selection 2.1 Correlation matrix Let's to see correlation matrix Step5: count vs registered, casual (0.97, 0.69) correlated (it's not big surprise). count vs temp, atemp also correlated, but less (0.39, 0.39). count vs humidity, weather have inverse correlation (-0.32, -0.13). 2.2 Importances features Step6: 2.3 Feature enginnering Let's try figure out new feature based on exists. Feature datetime can be split into few part Step7: Let's see how changed importances features after adding new one. Step8: hour - looks like very importance feature. new generated features Step9: On working day we have few spikes at 7,8,9 (going to work) and 17,18,19 (return to home). On non working day looks more smoothing (without any spikes). In the middle day (from 10 to 15) count of rent looks simmilar. On working day count of rent greater than non-working day. A little greater count of rent at the night in non-working day. Let's see how change range count of rent by hour. Step10: The biggest variance for working day is at 8,9 and 18,19,20. The biggest variance for non-working day is from 12 to 20. Why there's to big range at 9am on working day? Step11: There's some wave pattern (from Jan to Aug increased, then decreased). More bikes rent in 2012 (blue points) rather than 2011 (red points). Ther're some outliers (e.g. Mar, Apr, May 2011 - red points). Let's plot by years. Step12: In general count of rent in 2012 increased, but median looks simillar (111 and 199). At 9am median for 2011 is equal to 188, but for 2012 is 319. It looks like count of rent increased twice in 2012. 2.4 Feature selection Step13: 3. Modeling and Evaluation Preapre quality function Prepare training set and validate set Single variable model (base line) Using more advanced models Tuning hyperparameters Detect problem in actual models 3.1 Quality function $$ \sqrt{\frac{1}{n} \sum_{i=1}^n (\log(p_i + 1) - \log(a_i+1))^2 }$$ where n is the number of hours in the test set pi is your predicted count ai is the actual count log(x) is the natural logarithm Step14: 3.2 Train and test sets Let's check distribution by days in train set. Step15: So we have from 1 to 19. We can try split this from 1-14 (2 weeks) like train set and 15-19 like a test set. Note Step16: 3.3 Single variable model Step17: Median looks a little better. 
Let's try to build a more advanced model. 3.4 More advanced models Step18: The best results come from RandomForestRegressor, BaggingRegressor and ExtraTreesRegressor, roughly halving the error of the single-variable model (~0.37 compared to ~0.77). 3.5 Tuning model Let's try tuning a few models Step19: ExtraTreesRegressor with params {'min_samples_split' Step20: Predict value on test set
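As a hedged alternative to the hand-rolled ParameterGrid loop used later in the notebook, the same search can be expressed with scikit-learn's GridSearchCV and the RMSLE scorer defined in section 3.1. The sketch below assumes a reasonably recent scikit-learn, reuses the notebook's rmsle and get_X_y helpers, and ignores the day-based split for brevity.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

X, y = get_X_y(train)  # helpers and data frames as defined in this notebook
scorer = make_scorer(rmsle, greater_is_better=False)

grid = GridSearchCV(
    RandomForestRegressor(),
    param_grid={'n_estimators': [100, 200], 'min_samples_split': [5, 15]},
    scoring=scorer,
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)
```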
Python Code: train = pd.read_csv('train.csv') test = pd.read_csv('test.csv') train.info() Explanation: 1. Understand Data End of explanation train.describe() Explanation: 1.1 Conclusion We have 10 886 observations and 12 features. We don't have missing value. Most value is integer, few of them float and object (should be a date). Let's check more detail each features. End of explanation train.hist(figsize=(16,12),bins=30) plt.show() Explanation: 1.2 Conlusion count is target value and it changes between from 1 to 977. holiday, workingday is a binary variables (0 or 1). season, weather - categorical variable (1,2,3 or 4). registered and casual there's only in training set. rest features is numerical variables. Let's try to visualise distribution each features. End of explanation train['count_log'] = train['count'].map(lambda x: np.log2(x)) train.count_log.hist(bins=15) Explanation: 1.3 conclusion atemp have some spikes on 10,20,30 (more rounded number). count the most value is in the first bucket and have big tail on the second right. We can try transform using log (it can be helpful). weather looks like corelated to count of rows. season all buckets look comparatively equal. windspeed there're some missing buckets between 0 and 10, and 19. Maybe it's some problem with data (human error). Let's try transform count using log. End of explanation f, ax = plt.subplots(figsize=(20,20)) sns.corrplot(train, sig_stars=False, ax=ax) Explanation: 2. Prepare Data Correlation matrix Importance features Feature engineering Feature selection 2.1 Correlation matrix Let's to see correlation matrix End of explanation def select_features(data): return [feat for feat in data.columns if feat not in ['count_log', 'count', 'casual', 'registered', 'datetime']] def get_X_y(data, cols=None): if not cols: cols = select_features(data) X = data[cols].values y = data['count'].values return X,y def get_importance__features(data, model=RandomForestRegressor(), limit=20): X,y = get_X_y(data) cols = select_features(data) model.fit(X, y) feats = pd.DataFrame(model.feature_importances_, index=data[cols].columns) feats = feats.sort([0], ascending=False) [:limit] return feats.rename(columns={0:'name'}) def draw_importance_features(data, model=RandomForestRegressor(), limit=20): feats = get_importance__features(data, model, limit) feats.plot(kind='bar') draw_importance_features(train) Explanation: count vs registered, casual (0.97, 0.69) correlated (it's not big surprise). count vs temp, atemp also correlated, but less (0.39, 0.39). count vs humidity, weather have inverse correlation (-0.32, -0.13). 
2.2 Importances features End of explanation def temp_cat(temp): if temp < 15: return 1 if temp < 25: return 2 return 3 def cat_hour(hour): if 5 >= hour < 10: return 1#morning elif 10 >= hour < 17: return 2#day elif 17 >= hour < 23: return 3 #evening else: return 4 #night def etl(df): df['datetime'] = pd.to_datetime( df['datetime'] ) #time df['year'] = df['datetime'].map(lambda x: x.year) df['month'] = df['datetime'].map(lambda x: x.month) df['day'] = df['datetime'].map(lambda x: x.day) df['hour'] = df['datetime'].map(lambda x: x.hour) df['minute'] = df['datetime'].map(lambda x: x.minute) df['dayofweek'] = df['datetime'].map(lambda x: x.dayofweek) df['weekend'] = df['datetime'].map(lambda x: x.dayofweek in [5,6]) df['time_of_day'] = df['hour'].map(cat_hour) #temp df['temp_cat'] = df['temp'].map(temp_cat) df['temp_hour'] = df.apply(lambda x: x['temp'] * x['hour'], axis=1) #season df['season_weather'] = df.apply(lambda x: x['season'] * x['weather'], axis=1) df['season_temp'] = df.apply(lambda x: x['season'] * x['temp'], axis=1) df['season_atemp'] = df.apply(lambda x: x['season'] * x['atemp'], axis=1) df['season_hour'] = df.apply(lambda x: x['season'] * x['hour'], axis=1) #squared df['temp2'] = df['temp'].map(lambda x: x**2) df['humidity2'] = df['humidity'].map(lambda x: x**2) df['weather2'] = df['weather'].map(lambda x: x**2) etl(train) etl(test) Explanation: 2.3 Feature enginnering Let's try figure out new feature based on exists. Feature datetime can be split into few part: year, month, day, hour, minute, dayofweek, weekend. End of explanation print draw_importance_features(train) Explanation: Let's see how changed importances features after adding new one. End of explanation by_hour = train.groupby(['hour', 'workingday'])['count'].agg('sum').unstack() by_hour.plot(kind='bar', figsize=(15,5), width=0.9) Explanation: hour - looks like very importance feature. new generated features: seasson_attemp, temp2, dayofwork, humidity2 - there're on top 10. Let's check distrubution count of rent by hour in working day and non working day. End of explanation def plot_hours(data, message = ''): hours = {} for hour in range(24): hours[hour] = data[ data.hour == hour ]['count'].values plt.figure(figsize=(20,10)) plt.ylabel("Count rent") plt.xlabel("Hours") plt.title("count vs hours\n" + message) plt.boxplot( [hours[hour] for hour in range(24)] ) axis = plt.gca() axis.set_ylim([1, 1100]) plot_hours( train[train.workingday == 1], 'working day') plot_hours( train[train.workingday == 0], 'non working day') Explanation: On working day we have few spikes at 7,8,9 (going to work) and 17,18,19 (return to home). On non working day looks more smoothing (without any spikes). In the middle day (from 10 to 15) count of rent looks simmilar. On working day count of rent greater than non-working day. A little greater count of rent at the night in non-working day. Let's see how change range count of rent by hour. End of explanation t9 = train[ (train.hour == 9) & (train.workingday == 1) ].reset_index() ggplot(aes(x='count', y='month', color='year'), t9) + geom_point()\ + ggtitle('Count of rent by month (and years) at 9am') Explanation: The biggest variance for working day is at 8,9 and 18,19,20. The biggest variance for non-working day is from 12 to 20. Why there's to big range at 9am on working day? 
End of explanation def plot_by_years(data, title): values = [ data[data.year == 2011]['count'].values, data[data.year == 2012]['count'].values] print np.median(values[0]), np.median(values[1]) plt.figure(figsize=(15,5)) plt.boxplot(values) plt.xlabel("Years") plt.ylabel("Count of rent") plt.title(title) _ = plt.xticks([1, 2], [2011, 2012]) plot_by_years(train, 'Count of rent by year') plot_by_years(t9, 'Count of rent by year at 9am') Explanation: There's some wave pattern (from Jan to Aug increased, then decreased). More bikes rent in 2012 (blue points) rather than 2011 (red points). Ther're some outliers (e.g. Mar, Apr, May 2011 - red points). Let's plot by years. End of explanation def get_cols(): feats = get_importance__features(train, limit=50) return [feat for feat in feats[ feats.name > 0 ].index if feat not in ['is_test']] Explanation: In general count of rent in 2012 increased, but median looks simillar (111 and 199). At 9am median for 2011 is equal to 188, but for 2012 is 319. It looks like count of rent increased twice in 2012. 2.4 Feature selection End of explanation def rmsle(y_true, y_pred): diff = np.log(y_pred + 1) - np.log(y_true + 1) mean_error = np.square(diff).mean() return np.sqrt(mean_error) scorer = make_scorer(rmsle, greater_is_better=False) Explanation: 3. Modeling and Evaluation Preapre quality function Prepare training set and validate set Single variable model (base line) Using more advanced models Tuning hyperparameters Detect problem in actual models 3.1 Quality function $$ \sqrt{\frac{1}{n} \sum_{i=1}^n (\log(p_i + 1) - \log(a_i+1))^2 }$$ where n is the number of hours in the test set pi is your predicted count ai is the actual count log(x) is the natural logarithm End of explanation train.day.unique() Explanation: 3.2 Train and test sets Let's check distribution by days in train set. End of explanation def train_test_split(data, last_training_day=0.3): days = train.day.unique() shuffle(days) test_days = days[: len(days) * 0.3] data['is_test'] = data.day.isin(test_days) df_train = data[data.is_test == False] df_test = data[data.is_test == True] return df_train, df_test Explanation: So we have from 1 to 19. We can try split this from 1-14 (2 weeks) like train set and 15-19 like a test set. 
Note: this way have some problem, but looks good enough for starting End of explanation df_train, df_test = train_test_split(train) mean_model = df_train.groupby('hour').mean()['count'].reset_index().to_dict()['count'] median_model = df_train.groupby('hour').median()['count'].reset_index().to_dict()['count'] y_true = df_test['count'].values y_mean = df_test['hour'].map(lambda hour: int(mean_model[hour]) ).values y_median = df_test['hour'].map(lambda hour: int(median_model[hour]) ).values print 'mean', rmsle(y_true, y_mean) print 'median', rmsle(y_true, y_median) Explanation: 3.3 Single variable model End of explanation cols = get_cols() model = RandomForestRegressor() df_train, df_test = train_test_split(train) def get_y_pred(model, cols): X_train = df_train[ cols ].values y_train = df_train[ 'count' ].values X_test = df_test[ cols ].values model.fit(X_train, y_train) return model.predict(X_test) def predict(model, cols): y_true = df_test['count'].values y_pred = get_y_pred(model, cols) return rmsle(y_true, y_pred) def models(): yield ExtraTreesRegressor() yield RandomForestRegressor() yield AdaBoostRegressor() yield BaggingRegressor() yield DecisionTreeRegressor() results = {} for model in models(): results[ predict(model, cols) ] = model for key in sorted(results.keys()): print key, results[key] Explanation: Median looks a little better. Let's try to build more advanced model. 3.4 More advanced models End of explanation rf_params = { 'n_estimators': [100, 200], 'min_samples_split': [5, 15, 20], 'n_jobs': [-1], 'min_samples_leaf': [1, 2] } bagging_params = { 'n_estimators': [100, 150, 170], 'n_jobs': [-1], } extratree_params = { 'n_estimators': [50, 100, 150], 'min_samples_leaf': [1, 2], 'min_samples_split': [2, 3, 7], 'n_jobs': [-1], } models = [ (rf_params, RandomForestRegressor()), (bagging_params, BaggingRegressor()), (extratree_params, ExtraTreesRegressor()) ] results = {} for params_model, model in models: key_model = str(model) results[key_model] = [] for params in ParameterGrid(params_model): model.set_params(**params) results[key_model].append( (predict(model, cols), params) ) for key in results.keys(): print key, sorted(results[key], key=lambda x: x[0])[0] Explanation: The best results RandomForestRegressor BaggingRegressor ExtraTreesRegressor Improving twice result comparing to single model variable (~0.37 compare to ~0.77). 
3.5 Tuning model Let's try tuning two models: - RandomForestRegressor - BaggingRegressor - ExtraTreesRegressor End of explanation def log_predict(df_train, df_test, is_traininig=True): cols = get_cols() X_train = df_train[cols].values y_train = df_train['count_log'].values X_test = df_test[cols].values if is_traininig: y_true = df_test['count'].values model = ExtraTreesRegressor(**{'min_samples_split': 3, 'n_estimators': 150, 'min_weight_fraction_leaf': 0.1, 'min_samples_leaf': 2}) model.fit(X_train, y_train) y_log_pred = model.predict(X_test) y_pred = np.exp2( y_log_pred ).astype(int) if is_traininig: return rmsle(y_true, y_pred) else: return y_pred print log_predict(df_train, df_test) Explanation: ExtraTreesRegressor with params {'min_samples_split': 3, 'n_estimators': 150, 'min_samples_leaf': 1} RandomForestRegressor with params {'min_samples_split': 5, 'n_estimators': 200, 'min_samples_leaf': 2} BaggingRegressor with params {'n_estimators': 170} Use count_log for prediction End of explanation test['count'] = log_predict(train, test, is_traininig=False) test[ ['datetime', 'count'] ].to_csv('result1.csv', index=False) Explanation: Predict value on test set End of explanation
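One more hedged aside on the log transform used above: np.log1p and np.expm1 implement exactly the log(x+1) that appears inside RMSLE, so they are a common alternative to the log2/exp2 pair; the round trip is shown below on a few toy counts.

```python
import numpy as np

counts = np.array([1, 10, 100, 977])
log_counts = np.log1p(counts)      # transform applied before fitting
recovered = np.expm1(log_counts)   # inverse transform applied to predictions
print(recovered)
```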
Given the following text description, write Python code to implement the functionality described below step by step Description: EuroSciPy 2018 Step1: Use slicing to produce the following outputs Step2: Get the second row by slicing twice Try to get the second column by slicing. Do not use a list comprehension! Getting started Import the NumPy package Create an array Step3: The variable matrix contains a list of lists. Turn it into an ndarray and assign it to the variable myarray. Verify that its type is correct. For practicing purposes, arrays can conveniently be created with the arange method. Step4: Data types Use np.array() to create arrays containing * floats * complex numbers * booleans * strings and check the dtype attribute. Do you understand what is happening in the following statement? Step5: Strides Step6: Views Set the first entry of myarray1 to a new value, e.g. 42. What happened to myarray2? What happens when a matrix is transposed? Step7: Check the strides! Step8: View versus copy identical object Step9: view Step10: an independent copy Step11: Some array creation routines numerical ranges arange(start, stop, step), stop is not included in the array Step12: arange resembles range, but also works for floats Create the array [1, 1.1, 1.2, 1.3, 1.4, 1.5] linspace(start, stop, num) determines the step to produce num equally spaced values, stop is included by default Create the array [1., 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.] For equally spaced values on a logarithmic scale, use logspace. Step13: Application Step14: Homogeneous data Step15: Create a 4x4 array with integer zeros Step16: Create a 3x3 array filled with tens Diagonal elements Step17: diag has an optional argument k. Try to find out what its effect is. Replace the 1d array by a 2d array. What does diag do? Step18: Create the 3x3 array [[2, 1, 0], [1, 2, 1], [0, 1, 2]] Random numbers Step19: Indexing and slicing 1d arrays Step20: Create the array [7, 8, 9] Create the array [2, 4, 6, 8] Create the array [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] Higher dimensions Fancy indexing ‒ Boolean mask Step21: Application Step22: Axes Create an array and calculate the sum over all elements Now calculate the sum along axis 0 ... and now along axis 1 Identify the axis in the following array Step23: Axes in more than two dimensions Create a three-dimensional array Produce a two-dimensional array by cutting along axis 0 ... and axis 1 ... and axis 2 What do you get by simply using the index [0]? What do you get by using [..., 0]? Exploring numerical operations Step24: Operations are elementwise. Check this by multiplying two 2d array... ... and now do a real matrix multiplication Application Step25: Let's check the speed Step26: Broadcasting Step27: Create a multiplication table for the numbers from 1 to 10 starting from two appropriately chosen 1d arrays. As an alternative to reshape one can add additional axes with newaxes Step28: Check the shapes. Functions of two variables Step29: It is natural to use broadcasting. Check out what happens when you replace mgrid by ogrid. Application Step30: Application Step31: Explore whether the eigenvectors are the rows or the columns. Try out eigvals and other methods offered by linalg which your are interested in Determine the eigenvalue larger than one appearing in the Fibonacci problem. Verify the result by calculating the ratio of successive Fibonacci numbers. Do you recognize the result? Application Step32: Powers increase from left to right (index corresponds to power) Step33: Application
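Since the description above is a list of exercises without solutions, here is one possible (non-authoritative) solution sketch for the multiplication-table exercise, using broadcasting between a column vector and a row vector.

```python
import numpy as np

a = np.arange(1, 11)
table = a[:, np.newaxis] * a[np.newaxis, :]   # 10x10 multiplication table
print(table)
```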
Python Code: mylist = list(range(10)) print(mylist) Explanation: EuroSciPy 2018: NumPy tutorial Let's do some slicing End of explanation matrix = [[0, 1, 2], [3, 4, 5], [6, 7, 8]] Explanation: Use slicing to produce the following outputs: [2, 3, 4, 5] [0, 1, 2, 3, 4] [6, 7, 8, 9] [0, 2, 4, 6, 8] [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] [7, 5, 3] Matrices and lists of lists End of explanation np.lookfor('create array') help(np.array) Explanation: Get the second row by slicing twice Try to get the second column by slicing. Do not use a list comprehension! Getting started Import the NumPy package Create an array End of explanation myarray1 = np.arange(6) myarray1 def array_attributes(a): for attr in ('ndim', 'size', 'itemsize', 'dtype', 'shape', 'strides'): print('{:8s}: {}'.format(attr, getattr(a, attr))) array_attributes(myarray1) Explanation: The variable matrix contains a list of lists. Turn it into an ndarray and assign it to the variable myarray. Verify that its type is correct. For practicing purposes, arrays can conveniently be created with the arange method. End of explanation np.arange(1, 160, 10, dtype=np.int8) Explanation: Data types Use np.array() to create arrays containing * floats * complex numbers * booleans * strings and check the dtype attribute. Do you understand what is happening in the following statement? End of explanation myarray2 = myarray1.reshape(2, 3) myarray2 array_attributes(myarray2) myarray3 = myarray1.reshape(3, 2) array_attributes(myarray3) Explanation: Strides End of explanation a = np.arange(9).reshape(3, 3) a a.T Explanation: Views Set the first entry of myarray1 to a new value, e.g. 42. What happened to myarray2? What happens when a matrix is transposed? End of explanation a.strides a.T.strides Explanation: Check the strides! End of explanation a = np.arange(4) b = a id(a), id(b) Explanation: View versus copy identical object End of explanation b = a[:] id(a), id(b) a[0] = 42 a, b Explanation: view: a different object working on the same data End of explanation a = np.arange(4) b = np.copy(a) id(a), id(b) a[0] = 42 a, b Explanation: an independent copy End of explanation np.arange(5, 30, 5) Explanation: Some array creation routines numerical ranges arange(start, stop, step), stop is not included in the array End of explanation np.logspace(-2, 2, 5) np.logspace(0, 4, 9, base=2) Explanation: arange resembles range, but also works for floats Create the array [1, 1.1, 1.2, 1.3, 1.4, 1.5] linspace(start, stop, num) determines the step to produce num equally spaced values, stop is included by default Create the array [1., 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.] For equally spaced values on a logarithmic scale, use logspace. End of explanation import matplotlib.pyplot as plt %matplotlib inline x = np.linspace(0, 10, 100) y = np.cos(x) plt.plot(x, y) Explanation: Application End of explanation np.zeros((4, 4)) Explanation: Homogeneous data End of explanation np.ones((2, 3, 3)) Explanation: Create a 4x4 array with integer zeros End of explanation np.diag([1, 2, 3, 4]) Explanation: Create a 3x3 array filled with tens Diagonal elements End of explanation np.info(np.eye) Explanation: diag has an optional argument k. Try to find out what its effect is. Replace the 1d array by a 2d array. What does diag do? 
End of explanation np.random.rand(5, 2) np.random.seed(1234) np.random.rand(5, 2) data = np.random.rand(20, 20) plt.imshow(data, cmap=plt.cm.hot, interpolation='none') plt.colorbar() casts = np.random.randint(1, 7, (100, 3)) plt.hist(casts, np.linspace(0.5, 6.5, 7)) Explanation: Create the 3x3 array [[2, 1, 0], [1, 2, 1], [0, 1, 2]] Random numbers End of explanation a = np.arange(10) Explanation: Indexing and slicing 1d arrays End of explanation a = np.arange(40).reshape(5, 8) a %3 == 0 a[a %3 == 0] Explanation: Create the array [7, 8, 9] Create the array [2, 4, 6, 8] Create the array [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] Higher dimensions Fancy indexing ‒ Boolean mask End of explanation nmax = 50 integers = np.arange(nmax) is_prime = np.ones(nmax, dtype=bool) is_prime[:2] = False for j in range(2, int(np.sqrt(nmax))+1): if is_prime[j]: print(integers[is_prime]) is_prime[j*j::j] = False print(integers[is_prime]) Explanation: Application: sieve of Eratosthenes End of explanation a = np.arange(24).reshape(2, 3, 4) a Explanation: Axes Create an array and calculate the sum over all elements Now calculate the sum along axis 0 ... and now along axis 1 Identify the axis in the following array End of explanation a = np.arange(4) b = np.arange(4, 8) a, b a+b a*b Explanation: Axes in more than two dimensions Create a three-dimensional array Produce a two-dimensional array by cutting along axis 0 ... and axis 1 ... and axis 2 What do you get by simply using the index [0]? What do you get by using [..., 0]? Exploring numerical operations End of explanation length_of_walk = 10000 realizations = 5 angles = 2*np.pi*np.random.rand(length_of_walk, realizations) x = np.cumsum(np.cos(angles), axis=0) y = np.cumsum(np.sin(angles), axis=0) plt.plot(x, y) plt.axis('scaled') plt.plot(np.hypot(x, y)) plt.plot(np.mean(x**2+y**2, axis=1)) plt.axis('scaled') Explanation: Operations are elementwise. Check this by multiplying two 2d array... ... and now do a real matrix multiplication Application: Random walk End of explanation %%timeit a = np.arange(1000000) a**2 %%timeit xvals = range(1000000) [xval**2 for xval in xvals] %%timeit a = np.arange(100000) np.sin(a) import math %%timeit xvals = range(100000) [math.sin(xval) for xval in xvals] Explanation: Let's check the speed End of explanation a = np.arange(12).reshape(3, 4) a a+1 a+np.arange(4) a+np.arange(3) np.arange(3) np.arange(3).reshape(3, 1) a+np.arange(3).reshape(3, 1) %%timeit a = np.arange(10000).reshape(100, 100); b = np.ones((100, 100)) a+b %%timeit a = np.arange(10000).reshape(100, 100) a+1 Explanation: Broadcasting End of explanation a = np.arange(5) b = a[:, np.newaxis] Explanation: Create a multiplication table for the numbers from 1 to 10 starting from two appropriately chosen 1d arrays. As an alternative to reshape one can add additional axes with newaxes: End of explanation x = np.linspace(-40, 40, 200) y = x[:, np.newaxis] z = np.sin(np.hypot(x-10, y))+np.sin(np.hypot(x+10, y)) plt.imshow(z, cmap='viridis') x, y = np.mgrid[-10:10:0.1, -10:10:0.1] x y plt.imshow(np.sin(x*y)) x, y = np.mgrid[-10:10:50j, -10:10:50j] x y plt.imshow(np.arctan2(x, y)) Explanation: Check the shapes. Functions of two variables End of explanation # # put code here to create a Boolean array which contains True if a point # belongs to the Mandelbrot set # # reasonable values: 50 iterations, threshold at 100, and a 300x300 grid # but feel free to choose other values # plt.imshow(imdata, cmap='gray') Explanation: It is natural to use broadcasting. 
Check out what happens when you replace mgrid by ogrid. Application: Mandelbrot set End of explanation import numpy.linalg as LA a = np.arange(4).reshape(2, 2) eigenvalues, eigenvectors = LA.eig(a) eigenvalues eigenvectors Explanation: Application: π from random numbers Create an array of random numbers and determine the fraction of points with distance from the origin smaller than one. Determine an approximation for π. Linear Algebra in NumPy End of explanation from numpy.polynomial import polynomial as P Explanation: Explore whether the eigenvectors are the rows or the columns. Try out eigvals and other methods offered by linalg which your are interested in Determine the eigenvalue larger than one appearing in the Fibonacci problem. Verify the result by calculating the ratio of successive Fibonacci numbers. Do you recognize the result? Application: Brownian motion Simulate several trajectories for a one-dimensional Brownian motion Hint: np.random.choice Plot the mean distance from the origin as a function of time Plot the variance of the trajectories as a function of time Application: identify entry closest to ½ Create a 2d array containing random numbers and generate a vector containing for each row the entry closest to one-half. Polynomials End of explanation p1 = P.Polynomial([1, 2]) p1.degree() p1.roots() p4 = P.Polynomial([24, -50, 35, -10, 1]) p4.degree() p4.roots() p4.deriv() p4.integ() P.polydiv(p4.coef, p1.coef) Explanation: Powers increase from left to right (index corresponds to power) End of explanation from scipy import misc face = misc.face(gray=True) face plt.imshow(face, cmap=plt.cm.gray) Explanation: Application: polynomial fit Application: image manipulation End of explanation
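The Mandelbrot cell in this notebook is left as a placeholder ("put code here to create a Boolean array ..."). A minimal broadcast-based sketch using the suggested values (50 iterations, threshold 100, a 300x300 grid) and producing the imdata array that the plotting call expects could look like the following. The grid bounds of -2..1 and -1.5..1.5 are my choice and not part of the notebook.

```python
import numpy as np
import matplotlib.pyplot as plt

n_iter, threshold, n_grid = 50, 100, 300                 # values suggested in the notebook
x, y = np.ogrid[-2:1:n_grid * 1j, -1.5:1.5:n_grid * 1j]  # grid bounds are an assumption
c = x + 1j * y                                           # complex plane via broadcasting, shape (300, 300)
z = np.zeros_like(c)
diverged = np.zeros(c.shape, dtype=bool)

for _ in range(n_iter):
    z = np.where(diverged, z, z**2 + c)                  # freeze points that already escaped
    diverged |= np.abs(z) > threshold

imdata = ~diverged                                       # True = point stayed bounded, i.e. in the set
plt.imshow(imdata, cmap='gray')
```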
2,428
Given the following text description, write Python code to implement the functionality described below step by step Description: Project Step1: Step 1 Step2: Step 2 Step3: Step 3 Step4: Step 4 Step5: Repeat above process with new tweets from API in March 2017
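Before the notebook code, here is a compact sketch of the Step 2 to Step 4 pipeline (TF-IDF, KMeans clustering, TextBlob sentiment) on a toy list of tweets. The tweets and the choice of two clusters are placeholders; the notebook below runs the same libraries on the real #demonetization data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from textblob import TextBlob

# Toy stand-ins for the #demonetization tweets loaded in Step 1.
tweets = [
    "RT demonetization hurts small traders and farmers",
    "long queues at banks after demonetization",
    "demonetization is a bold move against black money",
    "supporting demonetization, short term pain long term gain",
]

tfidf = TfidfVectorizer(stop_words='english')
X = tfidf.fit_transform(tweets)                      # sparse (n_tweets, n_terms) matrix

km = KMeans(n_clusters=2, random_state=42).fit(X)    # cluster the TF-IDF vectors
polarities = [TextBlob(t).sentiment.polarity for t in tweets]

for text, label, pol in zip(tweets, km.labels_, polarities):
    print(f"cluster={label} polarity={pol:+.2f} {text[:40]}")
```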
Python Code: # imports import pandas as pd import nltk from sklearn.cluster import KMeans import re import requests from requests_oauthlib import OAuth1 from sklearn.feature_extraction.text import TfidfVectorizer from nltk.stem import WordNetLemmatizer from textblob import TextBlob from nltk.stem.porter import PorterStemmer import numpy as np from nltk.stem.snowball import SnowballStemmer from sklearn.metrics import silhouette_score import pickle import collections from sklearn.decomposition import PCA import unicodedata import matplotlib.pyplot as plt %matplotlib inline import cnfg config = cnfg.load(".twitter_config") oauth = OAuth1(config["consumer_key"], config["consumer_secret"], config["access_token"], config["access_token_secret"]) Explanation: Project: Fletcher Date: 03/10/2017 Name: Prashant Tatineni Project Overview In this project, I use Twitter data to highlight the key issues being discussed related to the Demonetization event that occurred in India on January 1, 2017. Specifically, I use an old dataset of "tweets" with the tag #demonetization, downloaded via Kaggle, from late November 2016, when the Demonetization was announced. For comparison I also use tweets with the same tag downloaded via the Twitter API in March 2017. Summary of Solution Steps Load the two sets of data. Apply TF/IDF Cluster using KMeans Sentiment analysis in data End of explanation df = pd.read_csv('data/raw/demonetization-tweets.csv') df.head() # RESTful search API all_tweets = [] search_url = "https://api.twitter.com/1.1/search/tweets.json" parameters = {"q": "#demonetisation", "count":100, "lang": "en"} response = requests.get(search_url, params = parameters, auth=oauth) for tweet in response.json()['statuses']: all_tweets.append(tweet['text']) for _ in range(99): if 'next_results' in response.json()['search_metadata'].keys(): next_page_url = search_url + response.json()['search_metadata']['next_results'] response = requests.get(next_page_url, auth=oauth) for tweet in response.json()['statuses']: all_tweets.append(tweet['text']) else: break X = pd.DataFrame(all_tweets) with open('data/new_tweets.pkl', 'wb') as picklefile: pickle.dump(X, picklefile) Explanation: Step 1: Load Data End of explanation def tokenize_func(text): if text[:2] == 'RT': text = text.partition(':')[2] tokens = nltk.word_tokenize(text) filtered_tokens = [] for token in tokens: token = re.sub('[^A-Za-z]', '', token).strip() if token not in ['demonetization','demonetisation','https','amp','rt','']: filtered_tokens.append(token) return filtered_tokens tfidf_vectorizer = TfidfVectorizer(max_df=0.97, min_df=0.03, max_features=200000, stop_words='english', decode_error='ignore', tokenizer=tokenize_func, ngram_range=(1,3)) %time tfidf_matrix = tfidf_vectorizer.fit_transform(df['text']) #fit the vectorizer to tweets print tfidf_matrix.shape from sklearn.metrics.pairwise import cosine_similarity dist = 1 - cosine_similarity(tfidf_matrix) Explanation: Step 2: Apply TF/IDF End of explanation Inertia = [] Sil_coefs = [] for k in range(2,10): km = KMeans(n_clusters=k, random_state=42) km.fit(tfidf_matrix) labels = km.labels_ Sil_coefs.append(silhouette_score(tfidf_matrix, labels, metric='euclidean')) Inertia.append(km.inertia_) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,5), sharex=True) k_clusters = range(2,10) ax1.plot(k_clusters, Sil_coefs) ax1.set_xlabel('number of clusters') ax1.set_ylabel('silhouette score') # plot here on ax2 ax2.plot(k_clusters, Inertia) ax2.set_xlabel('number of clusters') ax2.set_ylabel('Inertia'); km = 
KMeans(n_clusters=3) km.fit(tfidf_matrix) clusters = km.labels_.tolist() collections.Counter(km.labels_) pca = PCA(n_components=2) km_pca = pca.fit_transform(dist) xs, ys = km_pca[:,0], km_pca[:,1] fig, ax = plt.subplots() plt.scatter(xs,ys,c=clusters, cmap='Accent') plt.colorbar(ticks=[0,1,2]) ax.tick_params(axis='x',bottom='off',labelbottom='off') ax.tick_params(axis='y',left='off',labelleft='off') order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(3): print('Cluster----------',i) for x in order_centroids[i,:10]: print(terms[x]) df['clusters'] = clusters df[df.clusters == 1]['screenName'][4340] Explanation: Step 3: Kmeans Clustering End of explanation df['sentiment'] = df['text'].apply(lambda x: TextBlob(unicode(x, errors='ignore')).sentiment[0]) fig, ax = plt.subplots() plt.scatter(xs,ys, c=df['sentiment'], cmap='cool') plt.colorbar() ax.tick_params(axis='x',bottom='off',labelbottom='off') ax.tick_params(axis='y',left='off',labelleft='off') df.sort_values('sentiment')['screenName'][6402] Explanation: Step 4: Sentiment End of explanation with open("data/new_tweets.pkl", 'rb') as picklefile: new_tweets = pickle.load(picklefile) new_tweets new_tweets.columns = ['text'] %time tfidf2 = tfidf_vectorizer.fit_transform(new_tweets['text']) #fit the vectorizer to tweets print tfidf2.shape from sklearn.metrics.pairwise import cosine_similarity dist = 1 - cosine_similarity(tfidf2) km = KMeans(n_clusters=3) km.fit(tfidf2) clusters = km.labels_.tolist() pca = PCA(n_components=2) km_pca = pca.fit_transform(dist) xs, ys = km_pca[:,0], km_pca[:,1] fig, ax = plt.subplots() plt.scatter(xs,ys,c=clusters, cmap='Accent') plt.colorbar(ticks=[0,1,2]) ax.tick_params(axis='x',bottom='off',labelbottom='off') ax.tick_params(axis='y',left='off',labelleft='off') order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(3): print('Cluster----------',i) for x in order_centroids[i,:10]: print(terms[x]) collections.Counter(km.labels_) new_tweets['cluster'] = clusters new_tweets[new_tweets.cluster == 1] new_tweets['text'][1137] new_tweets['sentiment'] = new_tweets['text'].apply(lambda x: TextBlob(x).sentiment[0]) fig, ax = plt.subplots() plt.scatter(xs,ys, c=new_tweets['sentiment'], cmap='cool') plt.colorbar() ax.tick_params(axis='x',bottom='off',labelbottom='off') ax.tick_params(axis='y',left='off',labelleft='off') Explanation: Repeat above process with new tweets from API in March 2017 End of explanation
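A natural follow-up to the clustering and sentiment steps above is to summarize sentiment per cluster. This is a hedged sketch using a pandas groupby; the tiny DataFrame here is synthetic and only mirrors the cluster and sentiment columns created in the notebook.

```python
import pandas as pd

# Synthetic stand-in for the notebook's tweet DataFrame after clustering
# and TextBlob scoring (columns mirror 'clusters'/'cluster' and 'sentiment').
df = pd.DataFrame({
    'cluster':   [0, 0, 1, 1, 2, 2, 2],
    'sentiment': [0.10, -0.20, 0.45, 0.30, -0.05, 0.00, -0.40],
})

summary = (df.groupby('cluster')['sentiment']
             .agg(['count', 'mean', 'min', 'max'])
             .sort_values('mean'))
print(summary)
```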
2,429
Given the following text description, write Python code to implement the functionality described below step by step Description: Bolometric Corrections Details about the bolometric correction package can be found in the GitHub repository starspot. Step1: Before requesting bolometric corrections, we need to first initialize the package, which loads the appropriate bolometric corrections tables into memory to permit faster computation of corrections hereafter. The procedure for initializing the tables can be found in the README file in the starspot.color repository. First we need to initialize a log file, where various procedural steps in the code are tracked. Step2: Next, we need to load the appropriate tables. Step3: Now that we have established which set of bolometric correction tables we wish to use, it's possible to compute bolometric correction using either the Isochrone.colorize feature, or by submitting individual requests. First, let's take a look at a couple valid queries. Note that the query is submitted as bc_eval(Teff, logg, logL/Lsun, N_filers) Step4: The extremely large (or small) magnitudes for the 5300 K star is very strange. These issues do not occur for the same command execution in the terminal shell. Now, what happens when we happen to request a temperature for a value outside the grid? Step5: Curiously, it returns values that do not appear to be out of line. It's quite possible that the code is somehow attempting to extrapolate since we use a 4-point Lagrange inteprolation, which may unknowingly provide an extrapolation beyond the defined grid space. Comparing to Phoenix model atmospheres with the Caffau et al. (2011) solar abundances for the Johnson-Cousins and 2MASS systems, our optical $BV(RI)_C$ magnitudes are systematically 1.0 mag fainter than Phoenix at 120 Myr, and our $JHK$ magnitudes are fainter by approximately 0.04 mag at the same age. Finally, safely deallocate memory and close the log file.
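The steps above (initialize a log, load tables, submit queries, clean up) can be strung together into one small probe script. This is only a sketch: it assumes the starspot color.bolcor package is importable, the call signatures are the ones used in the notebook below, and the stellar parameters are illustrative.

```python
# Sketch: probe a few (Teff, logg, logL/Lsun) points in one session.
from color import bolcor as bc

phot_filters = ['B', 'V', 'J', 'K']          # subset of the filters used in the notebook

bc.utils.log_init('grid_probe.log')          # open a log file
bc.bolcorrection.bc_init(0.0, 0.0, 'marcs', phot_filters)

test_points = [
    (5300.0, 4.61, -0.353),   # ~0.9 Msun at 120 Myr (from the notebook)
    (3000.0, 4.94, -2.650),   # ~0.1 Msun at 120 Myr (from the notebook)
    (2204.0, 4.83, -3.470),   # expected to fall outside the grid
]

for teff, logg, logl in test_points:
    mags = bc.bolcorrection.bc_eval(teff, logg, logl, len(phot_filters))
    print(teff, logg, logl, mags)

bc.bolcorrection.bc_clean()                  # deallocate tables
bc.utils.log_close()                         # close the log file
```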
Python Code: # change directory %cd ../../../Projects/starspot/starspot/ from color import bolcor as bc Explanation: Bolometric Corrections Details about the bolometric correction package can be found in the GitHub repository starspot. End of explanation bc.utils.log_init('table_limits.log') # initialize bolometric correction log file Explanation: Before requesting bolometric corrections, we need to first initialize the package, which loads the appropriate bolometric corrections tables into memory to permit faster computation of corrections hereafter. The procedure for initializing the tables can be found in the README file in the starspot.color repository. First we need to initialize a log file, where various procedural steps in the code are tracked. End of explanation FeH = 0.0 # dex; atmospheric [Fe/H] aFe = 0.0 # dex; atmospheric [alpha/Fe] brand = 'marcs' # use theoretical corrections from MARCS atmospheres phot_filters = ['U', 'B', 'V', 'R', 'I', 'J', 'H', 'K'] # select a subset of filters bc.bolcorrection.bc_init(FeH, aFe, brand, phot_filters) # initialize tables Explanation: Next, we need to load the appropriate tables. End of explanation bc.bolcorrection.bc_eval(5300.0, 4.61, -0.353, len(phot_filters)) # approx. 0.9 Msun star at 120 Myr. bc.bolcorrection.bc_eval(3000.0, 4.94, -2.65, len(phot_filters)) # approx. 0.1 Msun star at 120 Myr. Explanation: Now that we have established which set of bolometric correction tables we wish to use, it's possible to compute bolometric correction using either the Isochrone.colorize feature, or by submitting individual requests. First, let's take a look at a couple valid queries. Note that the query is submitted as bc_eval(Teff, logg, logL/Lsun, N_filers) End of explanation bc.bolcorrection.bc_eval(2204.0, 4.83, -3.47, len(phot_filters)) # outside of grid -- should return garbage. Explanation: The extremely large (or small) magnitudes for the 5300 K star is very strange. These issues do not occur for the same command execution in the terminal shell. Now, what happens when we happen to request a temperature for a value outside the grid? End of explanation bc.bolcorrection.bc_clean() bc.utils.log_close() Explanation: Curiously, it returns values that do not appear to be out of line. It's quite possible that the code is somehow attempting to extrapolate since we use a 4-point Lagrange inteprolation, which may unknowingly provide an extrapolation beyond the defined grid space. Comparing to Phoenix model atmospheres with the Caffau et al. (2011) solar abundances for the Johnson-Cousins and 2MASS systems, our optical $BV(RI)_C$ magnitudes are systematically 1.0 mag fainter than Phoenix at 120 Myr, and our $JHK$ magnitudes are fainter by approximately 0.04 mag at the same age. Finally, safely deallocate memory and close the log file. End of explanation
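Since the notebook observes that requests outside the grid silently extrapolate, one simple safeguard is to flag parameters that fall outside an assumed validity box before trusting the interpolated corrections. The limits below are placeholders, not the actual MARCS table edges.

```python
# Hypothetical guard around bc_eval: the limits are placeholders only.
GRID_LIMITS = {
    'teff': (2500.0, 8000.0),
    'logg': (3.0, 5.5),
}

def in_grid(teff, logg, limits=GRID_LIMITS):
    lo_t, hi_t = limits['teff']
    lo_g, hi_g = limits['logg']
    return lo_t <= teff <= hi_t and lo_g <= logg <= hi_g

for teff, logg in [(5300.0, 4.61), (3000.0, 4.94), (2204.0, 4.83)]:
    tag = 'ok' if in_grid(teff, logg) else 'OUTSIDE grid -- result may be an extrapolation'
    print(f'Teff={teff:7.1f}  logg={logg:4.2f}  {tag}')
```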
2,430
Given the following text description, write Python code to implement the functionality described below step by step Description: Parameter exploration Purpose Step1: This tutorial will introduce you to the "psyrun" tool for parameter space exploration and serial farming step by step. It also integrates well with "ctn_benchmarks". We will use the CommunicationChannel as an example. Step2: Parameter space exploration Let us try to find out how the RMSE of the communication channel varies with the number of dimensions and number of neurons. Usually you would write a bunch of nested for loops like this Step3: However there are a few annoyances with this approach Step4: We use the Param class to define sets of parameters with keyword arguments to the constructor. As long as all sequences have the same length it is possible to give multiple arguments to the constructor Step5: If at least one argument is a sequence, non-sequence arguments will generate lists of the required length Step6: Such parameters definitions support basic operations like a cartesian product (as shown above), concatenation with +, and two subtraction like operations with - and the psyrun.pspace.missing function. Once we defined a parameter space we want to explore, it is easy to apply a function (i.e. run our model) to each set of parameters. Step7: The map_pspace function conveniently returns a dictionary with all the input parameters and output values. This makes it easy to convert it to other data structurs like a Pandas data frame for further analysis Step8: Parallelizaton It would be nice to parallelize all these simulations to make the best use of multicore processors (though, your BLAS library might be parallelized already). This is easy with psyrun as we just have to use a different map function. The only problem is that the function we want to apply needs to be pickleable which requires us to put it into an importable Python module. (It is also possible to switch the parallelization backend from multiprocessing to threading, but that can introduce other issues.) Step9: Serial farming After doing some initial experiments, you might want to run a larger number of simulations. Because all the simulations are independent from one another, they could run one after another on your computer, but that would take a very long time. It would be faster to distribute the individual simulations to a high performance cluster (like Sharcnet) with many CPU cores that can run the simulations in parallel. This is called serial farming. To do serial farming, first make sure that it is a Python module that can be installed, is installed, and can be imported. (It is possible to change sys.path if you don't want to install it as a Python module, but this approach is more complicated and error prone.) Then you need to define a "task" for what you intend to run. To define a task you have to create a file task_&lt;name&gt;.py in a directory called psy-tasks. Here is an example task for our communication channel Step10: There are a number of variables with a special meaning in these kind of task files. pspace defines the parameter space to explore which should be familar by now. execute is the function that gets called for each assignment of parameters and returns a dictionary with the results. max_jobs says that a maximum of 6 processing jobs should be created (a few other jobs not doing the core processing might be created in addition). 
When running this task (we will get to that in a moment), the parameter space will be split into parts of the same size and assigned to the processing jobs to be processed. At the end all the results from the processing jobs will be merged into a single file. min_items defines how many parameter assignments should at least be processd by each job. To run tasks defined in this way, we use the psy-doit command. As the name suggests, this tool is based on the excellent doit automation tool. Be sure that your working directory contains the psy-tasks directory when invoking this command. First we can list the available tasks Step11: Let us test our task first Step12: This immediatly runs the first (and only the first) parameter assignment as a way to check that nothing crashes. Now let us run the full task. Step13: This produces standard doit output with dots (.) indicating that a task was executed. As you can see the first task is splitting the parameter space and then several processing jobs are started and in the end the results are merged. All intermediary files for this will be written to the psy-work/&lt;task name&gt; directory by default. This is also where we find the result file. psyrun provides some helper functions to load this file Step14: Per default NumPy .npz files are used to store this data. However, this format has a few disadvantages. For example, to append to such a file (which happens in the merge step), the whole file has to be loaded into memory. Thus, it is possible to switch to HDF5 which is better in this regard. Just add the line store = psyrun.H5Store() to your task file. Note that this will require pytables to be installed and that the Sharcnet pytables seems to be broken at the moment. Speaking of Sharcnet
Python Code: from __future__ import print_function from pprint import pprint Explanation: Parameter exploration Purpose: Run the simulation with varying parameters and characterize the effects of those parameters Parameter exploration can either be done as grid search where a multidimensional "regular" grid of parameter values is explored or as random search where random parameters values are picked. psyrun tutorial End of explanation from ctn_benchmark.nengo.communication import CommunicationChannel Explanation: This tutorial will introduce you to the "psyrun" tool for parameter space exploration and serial farming step by step. It also integrates well with "ctn_benchmarks". We will use the CommunicationChannel as an example. End of explanation rmses = [] for D in np.arange(2, 5): for N in [10, 50, 100]: rmses.append(CommunicationChannel().run(D=D, N=N)['rmse']) pprint(rmses) Explanation: Parameter space exploration Let us try to find out how the RMSE of the communication channel varies with the number of dimensions and number of neurons. Usually you would write a bunch of nested for loops like this: End of explanation from psyrun import Param pspace = Param(D=np.arange(2, 5)) * Param(N=[10, 50, 100]) print(pspace) Explanation: However there are a few annoyances with this approach: We have to add another for loop for each dimensions in the parameter space; and we have to handle storing the results ourselves. Here, especially, the assignment to input parameters in only implicit. With psyrun, we can define a parameter space in a more natural way: End of explanation print(Param(a=[1, 2], b=[3, 4])) Explanation: We use the Param class to define sets of parameters with keyword arguments to the constructor. As long as all sequences have the same length it is possible to give multiple arguments to the constructor: End of explanation print(Param(a=[1, 2], b=3)) Explanation: If at least one argument is a sequence, non-sequence arguments will generate lists of the required length: End of explanation import psyrun result = psyrun.map_pspace(CommunicationChannel().run, pspace) pprint(result) Explanation: Such parameters definitions support basic operations like a cartesian product (as shown above), concatenation with +, and two subtraction like operations with - and the psyrun.pspace.missing function. Once we defined a parameter space we want to explore, it is easy to apply a function (i.e. run our model) to each set of parameters. End of explanation import pandas as pd pd.DataFrame(result) Explanation: The map_pspace function conveniently returns a dictionary with all the input parameters and output values. This makes it easy to convert it to other data structurs like a Pandas data frame for further analysis: End of explanation %%sh cat << EOF > model.py from ctn_benchmark.nengo.communication import CommunicationChannel def evaluate(**kwargs): return CommunicationChannel().run(**kwargs) EOF from model import evaluate result2 = psyrun.map_pspace_parallel(evaluate, pspace) pprint(result2) Explanation: Parallelizaton It would be nice to parallelize all these simulations to make the best use of multicore processors (though, your BLAS library might be parallelized already). This is easy with psyrun as we just have to use a different map function. The only problem is that the function we want to apply needs to be pickleable which requires us to put it into an importable Python module. 
(It is also possible to switch the parallelization backend from multiprocessing to threading, but that can introduce other issues.) End of explanation # task_cc1.py from ctn_benchmark.nengo.communication import CommunicationChannel import numpy as np import psyrun from psyrun import Param pspace = Param(D=np.arange(2, 5)) * Param(N=[10, 50, 100]) min_items = 1 max_jobs = 6 def execute(**kwargs): return CommunicationChannel().run(**kwargs) Explanation: Serial farming After doing some initial experiments, you might want to run a larger number of simulations. Because all the simulations are independent from one another, they could run one after another on your computer, but that would take a very long time. It would be faster to distribute the individual simulations to a high performance cluster (like Sharcnet) with many CPU cores that can run the simulations in parallel. This is called serial farming. To do serial farming, first make sure that it is a Python module that can be installed, is installed, and can be imported. (It is possible to change sys.path if you don't want to install it as a Python module, but this approach is more complicated and error prone.) Then you need to define a "task" for what you intend to run. To define a task you have to create a file task_&lt;name&gt;.py in a directory called psy-tasks. Here is an example task for our communication channel: End of explanation %%sh psy-doit list Explanation: There are a number of variables with a special meaning in these kind of task files. pspace defines the parameter space to explore which should be familar by now. execute is the function that gets called for each assignment of parameters and returns a dictionary with the results. max_jobs says that a maximum of 6 processing jobs should be created (a few other jobs not doing the core processing might be created in addition). When running this task (we will get to that in a moment), the parameter space will be split into parts of the same size and assigned to the processing jobs to be processed. At the end all the results from the processing jobs will be merged into a single file. min_items defines how many parameter assignments should at least be processd by each job. To run tasks defined in this way, we use the psy-doit command. As the name suggests, this tool is based on the excellent doit automation tool. Be sure that your working directory contains the psy-tasks directory when invoking this command. First we can list the available tasks: End of explanation %%sh psy-doit test cc1 Explanation: Let us test our task first: End of explanation %%sh psy-doit cc1 Explanation: This immediatly runs the first (and only the first) parameter assignment as a way to check that nothing crashes. Now let us run the full task. End of explanation data = psyrun.NpzStore().load('psy-work/cc1/result.npz') pprint(data) Explanation: This produces standard doit output with dots (.) indicating that a task was executed. As you can see the first task is splitting the parameter space and then several processing jobs are started and in the end the results are merged. All intermediary files for this will be written to the psy-work/&lt;task name&gt; directory by default. This is also where we find the result file. 
psyrun provides some helper functions to load this file: End of explanation # task_cc2.py import platform from ctn_benchmark.nengo.communication import CommunicationChannel import numpy as np from psyrun import Param, Sqsub pspace = Param(D=np.arange(2, 5)) * Param(N=[10, 50, 100]) min_items = 1 max_jobs = 6 sharcnet_nodes = ['narwhal', 'bul', 'kraken', 'saw'] if any(platform.node().startswith(x) for x in sharcnet_nodes): workdir = '/work/jgosmann/tcm' scheduler = Sqsub(workdir) scheduler_args = { 'timelimit': '10m', 'memory': '512M' } def execute(**kwargs): return CommunicationChannel().run(**kwargs) Explanation: Per default NumPy .npz files are used to store this data. However, this format has a few disadvantages. For example, to append to such a file (which happens in the merge step), the whole file has to be loaded into memory. Thus, it is possible to switch to HDF5 which is better in this regard. Just add the line store = psyrun.H5Store() to your task file. Note that this will require pytables to be installed and that the Sharcnet pytables seems to be broken at the moment. Speaking of Sharcnet: Currently psy-doit runs the jobs sequantially and immediatly. To actually get serial farming we have to tell psyrun to invoke the right job scheduler. For Sharcnet this would be sqsub. Here is the updated task file: End of explanation
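Once the merged result file exists, a convenient follow-up is to pull it into pandas and summarize the RMSE over the parameter grid. The NpzStore().load call and the psy-work/cc1/result.npz path come from the notebook; the assumption here is that the merged file contains the D, N and rmse columns shown earlier.

```python
import pandas as pd
import psyrun

# Load the merged results produced by `psy-doit cc1`
# (switch to psyrun.H5Store() if the task file sets `store = psyrun.H5Store()`).
data = psyrun.NpzStore().load('psy-work/cc1/result.npz')

df = pd.DataFrame(data)
summary = df.groupby(['D', 'N'])['rmse'].mean().unstack('N')
print(summary)
```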
2,431
Given the following text description, write Python code to implement the functionality described below step by step Description: VIZBI Tutorial Session Part 2 Step1: Don't forget to update this line! This should be your host machine's IP address. Step2: Step3: 2. Test Cytoscape REST API Check the status of server First, send a simple request and check the server status. Roundtrip between JSON and Python Object Object returned from the requests contains return value of API as JSON. Let's convert it into Python object. JSON library in Python converts JSON string into simple Python object. Step4: If you are comfortable with this data type conversion, you are ready to go! 3. Import Networks from various data sources There are many ways to load networks into Cytoscape from REST API Step5: What is SUID? SUID is a unique identifiers for all graph objects in Cytoscape. You can access any objects in current session as long as you have its SUID. Where is my local data file? This is a bit trickey part. When you specify local file, you need to absolute path On Docker container, your data file is mounted on Step6: And of course, you can mix local files, URLs, and list of web service queries in a same list Step7: Understand REST Principles We used modern best practices to design cyREST API. All HTTP verbs are mapped to Cytoscape resources Step8: Exercise 1 Step9: POST (Create a new resource) To create new resource (objects), you should use POST methods. URLs follow ROA standards, but you need to read API documents to understand data format for each object. Step10: DELETE (Delete a resource) Step11: PUT (Update a resource) PUT method is used to update information for existing resources. Just like POST methods, you need to know the data format to be posted. Step12: 3.3 Create networks from Python objects And this is the most powerful feature in Cytoscape REST API. You can easily convert Python objects into Cytoscape networks, tables, or Visual Styles How does this work? Cytoscape REST API sends and receives data as JSON. For networks, it uses Cytoscape.js style JSON (support for more file formats are comming!). You can programmatically generates networks by converting Python dictionary into JSON. 3.3.1 Prepare Network as Cytoscape.js JSON Let's start with the simplest network JSON Step13: Modify network dara programmatically Since it's a simple Python dictionary, it is easy to add data to the network. Let's add some nodes and edges Step14: Now, your Cytoscpae window should look like this Step15: Introduction to Cytoscape Data Model Essentially, writing your workflow as a notebook is a programming. To control Cytoscape efficiently from Notebooks, you need to understand basic data model of Cytoscape. Let me explain it as a notebook... First, let's create a data file to visualize Cytoscape data model Step16: Mode, View Model, and Presentation Model Essentially, Model in Cytoscape means networks and tables. Internally, Model can have multiple View Models. View Model State of the view. This is why you need to use views instead of view in the API Step17: Presentation Presentation is a stateless, actual graphics you see in the window. A View Model can have multiple Presentations. For now, you can assume there is always one presentation per View Model. What do you need to know as a cyREST user? CyREST API is fairly low level, and you can access all levels of Cytoscpae data structures. 
But if you want to use Cytoscape as a simple network visualization engine for IPython Notebook, here are some tips Step18: Exercise 2
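One of the exercises below (Exercise 1, "Guess URLs") can be sketched by simply assembling candidate URLs from the ROA pattern used throughout the notebook. These are educated guesses that follow the pattern, not verified endpoints, and the IP, port and SUID values are placeholders.

```python
# Candidate URLs for Exercise 1, assembled from the ROA pattern.
BASE = 'http://192.168.1.1:1234/v1/'   # placeholder host and port
network_suid = 52                      # placeholder network SUID

candidates = {
    'number of networks in the session': BASE + 'networks/count',
    'all edges in a network':            BASE + 'networks/{}/edges'.format(network_suid),
    'one node (full information)':       BASE + 'networks/{}/nodes/{}'.format(network_suid, 1),
    'all columns in default node table': BASE + 'networks/{}/tables/defaultnode/columns'.format(network_suid),
    'all values in the name column':     BASE + 'networks/{}/tables/defaultnode/columns/name'.format(network_suid),
}

for what, url in candidates.items():
    print('{:38s} {}'.format(what, url))
```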
Python Code: # HTTP Client for Python import requests # Standard JSON library import json # Basic Setup PORT_NUMBER = 1234 # This is the default port number of CyREST Explanation: VIZBI Tutorial Session Part 2: Cytoscape, IPython, Docker, and reproducible network data visualization workflows Tuesday, 3/24/2015 Lesson 1: Introduction to cyREST by Keiichiro Ono Welcome! This is an introduction to cyREST and its basic API. In this section, you will learn how to access Cytoscape through RESTful API. Prerequisites Basic knowledge of RESTful API This is a good introduction to REST Basic Python skill - only basics, such as conditional statements, loops, basic data types. Basic knowledge of Cytoscape Cytoscape data types - Networks, Tables, and Styles. System Requirments This tutorial is tested on the following platform: Client machine running Cytoscape Java SE 8 Cytoscape 3.2.1 Latest version of cyREST app Server Running IPython Notebook Docker running this image 1. Import Python Libraries and Basic Setup Libraries In this tutorial, we will use several popular Python libraries to make this workflow more realistic. Do I need to install all of them? NO. Because we are running this notebook server in Docker container. HTTP Client Since you need to access Cytoscape via RESTful API, HTTP client library is the most important tool you need to understand. In this example, we use Requests library to simplify API call code. JSON Encoding and Decoding Data will be exchanged as JSON between Cytoscape and Python code. Python has built-in support for JSON and we will use it in this workflow. Basic Setup for the API At this point, there is only one option for the cy-rest module: port number. Change Port Number By default, port number used by cy-rest module is 1234. To change this, you need set a global Cytoscape property from Edit &rarr; Preserences &rarr; Properties... and add a new property resr.port. What is happing in your machine? Mac / Windows Linux Actual Docker runtime is only available to Linux operating system and if you use Mac or Windows version of Docker, it is running on a Linux virtual machine (called boot2docker). URL to Access Cytoscape REST API We assume you are running Cytoscape desktop application and IPython Notebook server in a Docker container we provide. To access Cytoscape REST API, use the following URL: url http://IP_of_your_machine:PORT_NUMBER/v1/ where v1 is the current version number of API. Once the final release is ready, we guarantee compatibility of your scripts as long as major version number is the same. Check your machine's IP For Linux and Mac: bash ifconfig For Windows: ipconfig Viewing JSON All data exchanged between Cytoscape and other applications is in JSON. You can make the JSON data more humanreadable by using browser extensions: JSONView for Firefox JSONView for Chrome If you prefer command-line tools, jq is the best choice. End of explanation # IP address of your PHYSICAL MACHINE (NOT VM) IP = '137.110.137.158' Explanation: Don't forget to update this line! This should be your host machine's IP address. End of explanation BASE = 'http://' + IP + ':' + str(PORT_NUMBER) + '/v1/' # Header for posting data to the server as JSON HEADERS = {'Content-Type': 'application/json'} # Clean-up requests.delete(BASE + 'session') Explanation: End of explanation # Get server status res = requests.get(BASE) status_object = res.json() print(json.dumps(status_object, indent=4)) print(status_object['apiVersion']) print(status_object['memoryStatus']['usedMemory']) Explanation: 2. 
Test Cytoscape REST API Check the status of server First, send a simple request and check the server status. Roundtrip between JSON and Python Object Object returned from the requests contains return value of API as JSON. Let's convert it into Python object. JSON library in Python converts JSON string into simple Python object. End of explanation # Small utility function to create networks from list of URLs def create_from_list(network_list, collection_name='Yeast Collection'): payload = {'source': 'url', 'collection': collection_name} server_res = requests.post(BASE + 'networks', data=json.dumps(network_list), headers=HEADERS, params=payload) return server_res.json() # Array of data source. network_files = [ #This should be path in the LOCAL file system! 'file:////Users/kono/prog/git/vizbi-2015/tutorials/data/yeast.json', # SIF file on a web server 'http://chianti.ucsd.edu/cytoscape-data/galFiltered.sif' # And of course, you can add as many files as you need... ] # Create! print(json.dumps(create_from_list(network_files), indent=4)) Explanation: If you are comfortable with this data type conversion, you are ready to go! 3. Import Networks from various data sources There are many ways to load networks into Cytoscape from REST API: Load from files Load from web services Send Cytoscape.js style JSON directly to Cytoscape Send edgelist 3.1 Create networks from local files and URLs Let's start from a simple file loading examples. The POST method is used to create new Cytoscape objects. For example, bash POST http://localhost:1234/v1/networks means create new network(s) by specified method. If you want to create networks from files on your machine or remote servers, all you need to do is create a list of file locations and post it to Cytoscape. End of explanation # Utility function to display JSON (Pretty-printer) def pp(json_data): print(json.dumps(json_data, indent=4)) # You need KEGGScape App to load this file! queries = [ 'http://rest.kegg.jp/get/hsa00020/kgml' ] pp(create_from_list(queries, 'KEGG Metabolic Pathways')) Explanation: What is SUID? SUID is a unique identifiers for all graph objects in Cytoscape. You can access any objects in current session as long as you have its SUID. Where is my local data file? This is a bit trickey part. When you specify local file, you need to absolute path On Docker container, your data file is mounted on: /notebooks/data However, actual file is in: PATH_TO_YOUR_WORKSPACE/vizbi-2015-cytoscape-tutorial/notebooks/data Although you can see the data directory on /notebooks/data, you need to use absolute path to access actual data from Cytoscape. You may think this is a bit annoying, but actually, this is the power of container technology. You can use completely isolated environment to run your workflow. 3.2 Create networks from public RESTful web services There are many public network data services. If the service supports Cytoscape-readable file formats, you can specify the query URL as a network location. For example, the following URL calls KEGG REST API and load the TCA Cycle pathway diagram for human. You need to install KEGGScape App to Cytoscape before running the following code! 
End of explanation mixed = [ 'http://chianti.ucsd.edu/cytoscape-data/galFiltered.sif', 'http://www.ebi.ac.uk/Tools/webservices/psicquic/intact/webservices/current/search/query/brca1?format=xml25' ] result = create_from_list(mixed, 'Mixed Collection') pp(result) mixed1 = result[0]['networkSUID'][0] mixed1 Explanation: And of course, you can mix local files, URLs, and list of web service queries in a same list: End of explanation # Get a list of network IDs get_all_networks_url = BASE + 'networks' print(get_all_networks_url) res = requests.get(get_all_networks_url) pp(res.json()) # Pick the first network from the list above: network_suid = res.json()[0] get_network_url = BASE + 'networks/' + str(network_suid) print(get_network_url) # Get number of nodes in the network get_nodes_count_url = BASE + 'networks/' + str(network_suid) + '/nodes/count' print(get_nodes_count_url) # Get all nodes get_nodes_url = BASE + 'networks/' + str(network_suid) + '/nodes' print(get_nodes_url) # Get Node data table as CSV get_node_table_url = BASE + 'networks/' + str(network_suid) + '/tables/defaultnode.csv' print(get_node_table_url) Explanation: Understand REST Principles We used modern best practices to design cyREST API. All HTTP verbs are mapped to Cytoscape resources: | HTTP Verb | Description | |:----------:|:------------| | GET | Retrieving resources (in most cases, it is Cytoscape data objects, such as networks or tables) | | POST | Creating resources | | PUT | Changing/replacing resources or collections | | DELETE | Deleting resources | This design style is called Resource Oriented Architecture (ROA). Actually, basic idea is very simple: mapping all operations to existing HTTP verbs. It is easy to understand once you try actual examples. GET (Get a resource) End of explanation # Write your answers here... Explanation: Exercise 1: Guess URLs If a system's RESTful API is well-designed based on ROA best practices, it should be easy to guess similar functions as URLs. Display a clickable URLs for the following functions: Show number of networks in current session Show all edges in a network Show full information for a node (can be any node) Show information for all columns in the default node table Show all values in default node table under "name" column End of explanation # Add a new nodes to existing network (with time stamps) import datetime new_nodes =[ 'Node created at ' + str(datetime.datetime.now()), 'Node created at ' + str(datetime.datetime.now()) ] res = requests.post(get_nodes_url, data=json.dumps(new_nodes), headers=HEADERS) new_node_ids = res.json() pp(new_node_ids) Explanation: POST (Create a new resource) To create new resource (objects), you should use POST methods. URLs follow ROA standards, but you need to read API documents to understand data format for each object. End of explanation # Delete all nodes requests.delete(BASE + 'networks/' + str(mixed1) + '/nodes') # Delete a network requests.delete(BASE + 'networks/' + str(mixed1)) Explanation: DELETE (Delete a resource) End of explanation # Update a node name new_values = [ { 'SUID': new_node_ids[0]['SUID'], 'value' : 'updated 1' }, { 'SUID': new_node_ids[1]['SUID'], 'value' : 'updated 2' } ] requests.put(BASE + 'networks/' + str(network_suid) + '/tables/defaultnode/columns/name', data=json.dumps(new_values), headers=HEADERS) Explanation: PUT (Update a resource) PUT method is used to update information for existing resources. Just like POST methods, you need to know the data format to be posted. 
End of explanation # Start from a clean slate: remove all networks from current session # requests.delete(BASE + 'networks') # Manually generates JSON as a Python dictionary def create_network(): network = { 'data': { 'name': 'I\'m empty!' }, 'elements': { 'nodes':[], 'edges':[] } } return network # Difine a simple utility function def postNetwork(data): url_params = { 'collection': 'My Network Colleciton' } res = requests.post(BASE + 'networks', params=url_params, data=json.dumps(data), headers=HEADERS) return res.json()['networkSUID'] # POST data to Cytoscape empty_net_1 = create_network() empty_net_1_suid = postNetwork(empty_net_1) print('Empty network has SUID ' + str(empty_net_1_suid)) Explanation: 3.3 Create networks from Python objects And this is the most powerful feature in Cytoscape REST API. You can easily convert Python objects into Cytoscape networks, tables, or Visual Styles How does this work? Cytoscape REST API sends and receives data as JSON. For networks, it uses Cytoscape.js style JSON (support for more file formats are comming!). You can programmatically generates networks by converting Python dictionary into JSON. 3.3.1 Prepare Network as Cytoscape.js JSON Let's start with the simplest network JSON: End of explanation # Create sequence of letters (A-Z) seq_letters = list(map(chr, range(ord('A'), ord('Z')+1))) print(seq_letters) # Option 1: Add nods and edges with for loops def add_nodes_edges(network): nodes = [] edges = [] for lt in seq_letters: node = { 'data': { 'id': lt } } nodes.append(node) for lt in seq_letters: edge = { 'data': { 'source': lt, 'target': 'A' } } edges.append(edge) network['elements']['nodes'] = nodes network['elements']['edges'] = edges network['data']['name'] = 'A is the hub.' # Option 2: Add nodes and edges in functional way def add_nodes_edges_functional(network): network['elements']['nodes'] = list(map(lambda x: {'data': { 'id': x }}, seq_letters)) network['elements']['edges'] = list(map(lambda x: {'data': { 'source': x, 'target': 'A' }}, seq_letters)) network['data']['name'] = 'A is the hub (Functional Way)' # Uncomment this if you want to see the actual JSON object # print(json.dumps(empty_network, indent=4)) net1 = create_network() net2 = create_network() add_nodes_edges_functional(net1) add_nodes_edges(net2) networks = [net1, net2] def visualize(net): suid = postNetwork(net) net['data']['SUID'] = suid # Apply layout and Visual Style requests.get(BASE + 'apply/layouts/force-directed/' + str(suid)) requests.get(BASE + 'apply/styles/Directed/' + str(suid)) for net in networks: visualize(net) Explanation: Modify network dara programmatically Since it's a simple Python dictionary, it is easy to add data to the network. Let's add some nodes and edges: End of explanation from IPython.display import Image Image(url=BASE+'networks/' + str(net1['data']['SUID'])+ '/views/first.png', embed=True) Explanation: Now, your Cytoscpae window should look like this: Embed images in IPython Notebook cyRest has function to generate PNG image directly from current network view. Let's try to see the result in this notebook. End of explanation %%writefile data/model.sif Model parent_of ViewModel_1 Model parent_of ViewModel_2 Model parent_of ViewModel_3 ViewModel_1 parent_of Presentation_A ViewModel_1 parent_of Presentation_B ViewModel_2 parent_of Presentation_C ViewModel_3 parent_of Presentation_D ViewModel_3 parent_of Presentation_E ViewModel_3 parent_of Presentation_F model = [ 'file:////Users/kono/prog/git/vizbi-2015/tutorials/data/model.sif' ] # Create! 
res = create_from_list(model) model_suid = res[0]['networkSUID'][0] requests.get(BASE + 'apply/layouts/force-directed/' + str(model_suid)) Image(url=BASE+'networks/' + str(model_suid)+ '/views/first.png', embed=True) Explanation: Introduction to Cytoscape Data Model Essentially, writing your workflow as a notebook is a programming. To control Cytoscape efficiently from Notebooks, you need to understand basic data model of Cytoscape. Let me explain it as a notebook... First, let's create a data file to visualize Cytoscape data model End of explanation view_url = BASE + 'networks/' + str(model_suid) + '/views/first' print('You can access (default) network view from this URL: ' + view_url) Explanation: Mode, View Model, and Presentation Model Essentially, Model in Cytoscape means networks and tables. Internally, Model can have multiple View Models. View Model State of the view. This is why you need to use views instead of view in the API: /v1/networks/SUID/views However, Cytoscape 3.2.x has only one rendering engine for now, and end-users do not have access to this feature. Until Cytoscape Desktop supports multiple renderers, best practice is just use one view per model. To access the default view, there is a utility method first: End of explanation data_str = '' n = 0 while n <100: data_str = data_str + str(n) + '\t' + str(n+1) + '\n' n = n + 1 # Join the first and last nodes data_str = data_str + '100\t0\n' # print(data_str) # You can create multiple networks by running simple for loop: for i in range(5): res = requests.post(BASE + 'networks?format=edgelist&collection=Ring', data=data_str, headers=HEADERS) circle_suid = res.json()['networkSUID'] requests.get(BASE + 'apply/layouts/circular/' + str(circle_suid)) Image(url=BASE+'networks/' + str(circle_suid) + '/views/first.png', embed=True) Explanation: Presentation Presentation is a stateless, actual graphics you see in the window. A View Model can have multiple Presentations. For now, you can assume there is always one presentation per View Model. What do you need to know as a cyREST user? CyREST API is fairly low level, and you can access all levels of Cytoscpae data structures. But if you want to use Cytoscape as a simple network visualization engine for IPython Notebook, here are some tips: Tip 1: Always keep SUID when you create any new object ALL Cytoscape objects, networks, nodes, egdes, and tables have a session unique ID, called SUID. When you create any new data objects in Cytoscape, it returns SUIDs. You need to keep them as Python data objects (list, dict, amp, etc.) to access them later. Tip 2: Create one view per model Until Cytoscape Desktop fully support multiple view/presentation feature, keep it simple: one view per model. Tip 3: Minimize number of API calls Of course, there is a API to add / remove / update one data object per API call, but it is extremely inefficient! 3.3.2 Prepare Network as edgelist Edgelist is a minimalistic data format for networks and it is widely used in popular libraries including NetworkX and igraph. Preparing edgelist in Python is straightforward. You just need to prepare a list of edges as string like: a b b c a c c d d f b f f g f h In Python, there are many ways to generate string like this. Here is a naive approach: End of explanation # Write your code here... 
# import g = nx.Graph() g.add_edge(1, 2, interaction='itr1', score=0.1) cyjs = util.from_networkx(g) Explanation: Exercise 2: Create a network from a simple edge list file Edge list is a human-editable text file to represent a graph structure. Using the sample data abobe (edge list example in 3.3.2), create a new network in Cytoscape from the edge list and visualize it just like the ring network above. Hint: Use Magic! End of explanation
2,432
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step8: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step10: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step12: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step15: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step18: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. Step21: Encoding Implement encoding_layer() to create a Encoder RNN layer Step24: Decoding - Training Create a training decoding layer Step27: Decoding - Inference Create inference decoder Step30: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note Step33: Build the Neural Network Apply the functions you implemented above to Step34: Neural Network Training Hyperparameters Tune the following parameters Step36: Build the Graph Build the graph using the neural network you implemented. Step40: Batch and pad the source and target sequences Step43: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. Step45: Save Parameters Save the batch_size and save_path parameters for inference. Step47: Checkpoint Step50: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. Step52: Translate This will translate translate_sentence from English to French.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) # TODO: Implement Function source_ids = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')] target_ids = [[target_vocab_to_int[word] for word in (line + ' <EOS>').split()] for line in target_text.split('\n')] #print('source text: \n', source_words[:50], '\n\n') #print('target text: \n', target_words[:50], '\n\n') return source_ids, target_ids DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) # TODO: Implement Function inputs = tf.placeholder(tf.int32, shape = (None, None), name = 'input') targets = tf.placeholder(tf.int32, shape = (None, None), name = 'targets') learning_rate = tf.placeholder(tf.float32, name = 'learning_rate') keep_prob = tf.placeholder(tf.float32, name = 'keep_prob') tgt_seq_length = tf.placeholder(tf.int32, (None,), name = 'target_sequence_length') src_seq_length = tf.placeholder(tf.int32, (None,), name = 'source_sequence_length') max_tgt_seq_length = tf.reduce_max(tgt_seq_length, name = 'max_target_sequence_length') return inputs, targets, learning_rate, keep_prob, tgt_seq_length, max_tgt_seq_length, src_seq_length DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. 
Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) End of explanation def process_decoder_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data # TODO: Implement Function ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_encoding_input(process_decoder_input) Explanation: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. End of explanation from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) # TODO: Implement Function enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer = tf.random_uniform_initializer(-0.1, 0.1, seed = 2)) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob = keep_prob) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length = source_sequence_length, dtype = tf.float32) return enc_output, enc_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id # TODO: Implement Function training_helper = tf.contrib.seq2seq.TrainingHelper(inputs = dec_embed_input, sequence_length = target_sequence_length, time_major = False) training_decoder = tf.contrib.seq2seq.BasicDecoder(cell = dec_cell, helper = 
training_helper, initial_state = encoder_state, output_layer = output_layer) dec_outputs = tf.contrib.seq2seq.dynamic_decode(decoder = training_decoder, impute_finished = True, maximum_iterations = max_summary_length)[0] #train_logits = output_layer(dec_outputs) return dec_outputs DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id # TODO: Implement Function # tile the start tokens for inference helper start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype = tf.int32), [batch_size], name = 'start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding = dec_embeddings, start_tokens = start_tokens, end_token = end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(cell = dec_cell, helper = inference_helper, initial_state = encoder_state, output_layer = output_layer) decoder_outputs = tf.contrib.seq2seq.dynamic_decode(decoder = inference_decoder, impute_finished = True, maximum_iterations = max_target_sequence_length)[0] return decoder_outputs DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # TODO: Implement Function # Embed the target sequences dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # Construct decoder LSTM 
cell def make_cell(rnn_size): cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer = tf.random_uniform_initializer(-0.1, 0.1, seed = 2)) return cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # Create output layer output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev = 0.1)) # Use decoding_layer_train to get training logits with tf.variable_scope("decode"): train_logits = decoding_layer_train(encoder_state = encoder_state, dec_cell = dec_cell, dec_embed_input = dec_embed_input, target_sequence_length = target_sequence_length, max_summary_length = max_target_sequence_length, output_layer = output_layer, keep_prob = keep_prob) # end with # Use decoding_layer_infer to get logits at inference time with tf.variable_scope("decode", reuse = True): inference_logits = decoding_layer_infer(encoder_state = encoder_state, dec_cell = dec_cell, dec_embeddings = dec_embeddings, start_of_sequence_id = target_vocab_to_int['<GO>'], end_of_sequence_id = target_vocab_to_int['<EOS>'], max_target_sequence_length = max_target_sequence_length, vocab_size = target_vocab_size, output_layer = output_layer, batch_size = batch_size, keep_prob = keep_prob) # end with return train_logits, inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. 
End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # TODO: Implement Function # Encode input using encoding_layer _, enc_state = encoding_layer( rnn_inputs = input_data, rnn_size = rnn_size, num_layers = num_layers, keep_prob = keep_prob, source_sequence_length = source_sequence_length, source_vocab_size = source_vocab_size, encoding_embedding_size = enc_embedding_size) # Process target data using process_decoder_input dec_input = process_decoder_input(target_data = target_data, target_vocab_to_int = target_vocab_to_int, batch_size = batch_size) # decode the encoded input using decoding_layer dec_output_train, dec_output_infer = decoding_layer( dec_input = dec_input, encoder_state = enc_state, target_sequence_length = target_sequence_length, max_target_sequence_length = max_target_sentence_length, rnn_size = rnn_size, num_layers = num_layers, target_vocab_to_int = target_vocab_to_int, target_vocab_size = target_vocab_size, batch_size = batch_size, keep_prob = keep_prob, decoding_embedding_size = dec_embedding_size) return dec_output_train, dec_output_infer DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. End of explanation # Number of Epochs epochs = 20 # Batch Size batch_size = 128 # RNN Size rnn_size = 64 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.4 display_step = 64 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. 
Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL def pad_sentence_batch(sentence_batch, pad_int): Pad sentences with <PAD> so that each sentence of a batch has the same length max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): Batch targets, sources, and the lengths of their sentences together for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths Explanation: Batch and pad the source and target sequences End of explanation DON'T MODIFY ANYTHING IN THIS CELL def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed 
data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids # TODO: Implement Function # convert to lowercase sentence = sentence.lower() # convert words to ids, using vocab_to_int sequence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()] return sequence DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a old yellow truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
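As a small usage note on the translation output above: the predicted id sequence can include padding and end-of-sequence tokens, so a tiny post-processing helper makes the printed French easier to read. This is a sketch, assuming target_int_to_vocab maps ids back to words and that the special tokens are spelled '<PAD>' and '<EOS>' as in the preprocessing.

def ids_to_text(word_ids, int_to_vocab, stop_tokens=('<PAD>', '<EOS>')):
    # Convert predicted ids back to words, stopping at padding or EOS.
    words = []
    for i in word_ids:
        word = int_to_vocab[i]
        if word in stop_tokens:
            break
        words.append(word)
    return ' '.join(words)

# e.g. print(ids_to_text(translate_logits, target_int_to_vocab))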
2,433
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents <p><div class="lev1 toc-item"><a href="#Control-Flow" data-toc-modified-id="Control-Flow-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Control Flow</a></div><div class="lev2 toc-item"><a href="#Multiple-Conditions" data-toc-modified-id="Multiple-Conditions-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Multiple Conditions</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Exercise</a></div><div class="lev2 toc-item"><a href="#Adding-your-own-input" data-toc-modified-id="Adding-your-own-input-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Adding your own input</a></div><div class="lev1 toc-item"><a href="#Loops" data-toc-modified-id="Loops-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Loops</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-21"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev1 toc-item"><a href="#Range-of-Values" data-toc-modified-id="Range-of-Values-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Range of Values</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-31"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-32"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Exercise</a></div><div class="lev1 toc-item"><a href="#Become-a-Control-Freak" data-toc-modified-id="Become-a-Control-Freak-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Become a Control Freak</a></div><div class="lev2 toc-item"><a href="#Break" data-toc-modified-id="Break-41"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Break</a></div><div class="lev2 toc-item"><a href="#Continue" data-toc-modified-id="Continue-42"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Continue</a></div><div class="lev1 toc-item"><a href="#List-Comprehension" data-toc-modified-id="List-Comprehension-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>List Comprehension</a></div><div class="lev2 toc-item"><a href="#Dictionary-Comprehension" data-toc-modified-id="Dictionary-Comprehension-51"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>Dictionary Comprehension</a></div> # Control Flow Now that we have some basic skills, it's important for us to define the conditions in which they can be executed. This is where control comes into play. And as before, very easy topic! In plain English Step1: Multiple Conditions So now we can deal with a scenario where there are two possible decisions to be made. What about more than two decision? Say hello to "elif"! Step2: Exercise Write some code to check if you are old enough to buy a bottle of wine. You need to be 18 or over, but if your State is Texas, you need to be 25 or over. Step3: Adding your own input How about adding your own input and checking against that? This doesn't come in too handy in a data science environment since you typically have a well defined dataset already. Nevertheless, this is important to know. Step4: Loops Time to supercharge our Python usage. Loops are in some ways, the basis for automation. Check if a condition is true, then execute a step, and keep executing it till the condition is no longer true. Step5: When using dictionaries, you can iterate through keys, values or both. 
Step6: Exercise Print the names of the people in the dictionary 'data' Print the name of the people who have 'incubees' Print the name, and net worth of people with a net worth higher than 500,000 Print the names of people without a board seat Enter your responses in the fields below. This is solved for you if you scroll down, but you can't cheat yourself! Step7: Range of Values We often need to define a range of values for our program to iterate over. Step8: In a defined range, the lower number is inclusive, and upper number is exclusive. So 0 to 10 would include 0 but exclude 10. So if we need a specific range, we can use this knowledge to our advantage. Step9: We can also specify a range without explicitly defining an upper or lower range, in which case, Python does it's magic Step10: We can also use the range function to perform mathematical tricks. Step11: Or to check for certain other conditions or properties, or to define how many times an activity will be performed. Step12: Exercise Print all numbers from 1 to 20 Step13: Exercise Print the square of the first 10 natural numbers. Step14: Become a Control Freak And now, it's time to become a master of control! A data scientist needs absolute control over loops, stopping when defined conditions are met, or carrying on till a solution if found. <img src="images/break.jpg"> Break Step15: Continue Break's cousin is called Continue. If a certain condition is met, carry on. Step16: List Comprehension Remember lists? Now here's a way to power through a large list in one line! As a Data Scientist, you will need to write a lot of code very efficiently, especially in the data exploration stage. The more experiments you can run to understand your data, the better it is. This is also a very useful tool in transforming one list (or dictionary) into another list. Let's begin by some simple examples First, we will write a program to generate the squares of the first 10 natural numbers, using a standard for loop. Next, we will contrast that with the List Comprehension approach. Step17: So far, so good! Step18: How's that for speed?! Here's the format for List Comprehensions, in English. ListName = [Expected_Result_or_Operation for Item in a given range]<br> print the ListName Step19: List comprehensions are very useful when dealing with an existing list. Let's see some examples. Step20: How about calculating the areas of circles, given a list of radii? That too in just one line. Step21: Dictionary Comprehension Let's get back to our dictionary named Data. Dictionary Comprehension can be a very efficient way to extract information out of them. Especially when you have thousands or millions of records. Step22: We can also use dictionary comprehension to create new dictionaries
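As a quick illustration of the comprehension format described above, the same squaring transformation can be written as a plain loop, a list comprehension, or a dictionary comprehension (a sketch with made-up values):

nums = [1, 2, 3, 4, 5]

# Loop version
squares_loop = []
for n in nums:
    squares_loop.append(n ** 2)

# List comprehension: [operation for item in iterable]
squares = [n ** 2 for n in nums]

# Dictionary comprehension: {key: value for item in iterable}
square_map = {n: n ** 2 for n in nums}

print(squares_loop)  # [1, 4, 9, 16, 25]
print(squares)       # [1, 4, 9, 16, 25]
print(square_map)    # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}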
Python Code: collection = [1,2,3,4,5] len(collection) if len(collection) == 5: print("Woohoo!") collection[1] if collection[0] % 2 == 0: print("Divisible") else: print("Not Divisible") Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Control-Flow" data-toc-modified-id="Control-Flow-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Control Flow</a></div><div class="lev2 toc-item"><a href="#Multiple-Conditions" data-toc-modified-id="Multiple-Conditions-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Multiple Conditions</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Exercise</a></div><div class="lev2 toc-item"><a href="#Adding-your-own-input" data-toc-modified-id="Adding-your-own-input-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Adding your own input</a></div><div class="lev1 toc-item"><a href="#Loops" data-toc-modified-id="Loops-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Loops</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-21"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev1 toc-item"><a href="#Range-of-Values" data-toc-modified-id="Range-of-Values-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Range of Values</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-31"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-32"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Exercise</a></div><div class="lev1 toc-item"><a href="#Become-a-Control-Freak" data-toc-modified-id="Become-a-Control-Freak-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Become a Control Freak</a></div><div class="lev2 toc-item"><a href="#Break" data-toc-modified-id="Break-41"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Break</a></div><div class="lev2 toc-item"><a href="#Continue" data-toc-modified-id="Continue-42"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Continue</a></div><div class="lev1 toc-item"><a href="#List-Comprehension" data-toc-modified-id="List-Comprehension-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>List Comprehension</a></div><div class="lev2 toc-item"><a href="#Dictionary-Comprehension" data-toc-modified-id="Dictionary-Comprehension-51"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>Dictionary Comprehension</a></div> # Control Flow Now that we have some basic skills, it's important for us to define the conditions in which they can be executed. This is where control comes into play. And as before, very easy topic! In plain English: * If condition x is True: * Execute statement A * Else: * Execute statement B And that's all there is to it. Now we just need to learn the Python-way of expressing the above phrases. End of explanation collection = [1,2,3,4,5] if collection[0] == 0: print ("Zero!") elif collection[0] == 100: print ("Hundred!") else: print("Not Zero or Hundred") x = ["George", "Barack", "Donald"] test = "Richard" if test in x: print(test, "has been found.") else: print(test, "was not found. Let me add him to the list." ) x.append(test) print(x) Explanation: Multiple Conditions So now we can deal with a scenario where there are two possible decisions to be made. What about more than two decision? Say hello to "elif"! End of explanation # Your code here Explanation: Exercise Write some code to check if you are old enough to buy a bottle of wine. 
You need to be 18 or over, but if your State is Texas, you need to be 25 or over. End of explanation age = int(input("Please enter your age:")) if age < 18: print("You cannot vote or buy alcohol.") elif age < 21: print("You can vote, but can't buy alcohol.") else: print("You can vote to buy alcohol. ;) ") mr_prez = ["Bill", "George", "Barack", "Donald"] name = input("Enter your name:") # Don't need to specify str type(name) if name in mr_prez: print("You share your name with a President.") else: print("You too can be president some day.") Explanation: Adding your own input How about adding your own input and checking against that? This doesn't come in too handy in a data science environment since you typically have a well defined dataset already. Nevertheless, this is important to know. End of explanation numbers = [1,2,3,4,5,6,7,8,9,10] for number in numbers: if number % 2 == 0: print("Divisible by 2.") else: print("Not divisible by 2.") numbers = {1,2,3,4,5,6,7,8,9,10} for num in numbers: if num%3 == 0: print("Divisible by 3.") else: print("Not divisible by 3.") Explanation: Loops Time to supercharge our Python usage. Loops are in some ways, the basis for automation. Check if a condition is true, then execute a step, and keep executing it till the condition is no longer true. End of explanation groceries = {"Milk":2.5, "Tea": 4, "Biscuits": 3.5, "Sugar":1} print(groceries.keys()) print(groceries.values()) # item here refers to the the key in set name groceries for a in groceries.keys(): print(a) for price in groceries.values(): print(price) for (key, val) in groceries.items(): print(key,val) groceries.items() groceries.keys() groceries.values() Explanation: When using dictionaries, you can iterate through keys, values or both. End of explanation data = { "Richard": { "Title": "CEO", "Employees": ["Dinesh", "Gilfoyle", "Jared"], "Awards": ["Techcrunch Disrupt"], "Previous Firm": "Hooli", "Board Seat":1, "Net Worth": 100000 }, "Jared": { "Real_Name": "Donald", "Title": "CFO", "Previous Firm": "Hooli", "Board Seat":1, "Net Worth": 500 }, "Erlich": { "Title": "Visionary", "Previous Firm": "Aviato", "Current Firm": "Bachmannity", "Incubees": ["Richard", "Dinesh", "Gilfoyle", "Nelson", "Jian Yang"], "Board Seat": 1, "Net Worth": 5000000 }, "Nelson": { "Title": "Co-Founder", "Current Firm": "Bachmannity", "Previous Firm": "Hooli", "Board Seat": 0, "Net Worth": 10000000 }, } # Name of people in the dictionary data.keys() # Alternate way to get the name of the people in the dictionary for name in data.keys(): print(name) # Name of people who have incubees for name in data.items(): if "Incubees" in name[1]: print (name[0]) # Name and networth of people with a networth greater 500000 for name in data.items(): if "Net Worth" in name[1] and name[1]["Net Worth"]>500000: print (name[0], name[1]["Net Worth"]) # Name of people who don't have a board seat for name in data.items(): if "Board Seat" in name[1] and name[1]["Board Seat"] == 0: print (name[0]) Explanation: Exercise Print the names of the people in the dictionary 'data' Print the name of the people who have 'incubees' Print the name, and net worth of people with a net worth higher than 500,000 Print the names of people without a board seat Enter your responses in the fields below. This is solved for you if you scroll down, but you can't cheat yourself! End of explanation # Generate a list on the fly nums = list(range(10)) print(nums) Explanation: Range of Values We often need to define a range of values for our program to iterate over. 
End of explanation nums = list(range(1,11)) print(nums) Explanation: In a defined range, the lower number is inclusive, and upper number is exclusive. So 0 to 10 would include 0 but exclude 10. So if we need a specific range, we can use this knowledge to our advantage. End of explanation nums = list(range(10)) print(nums) Explanation: We can also specify a range without explicitly defining an upper or lower range, in which case, Python does it's magic: range will be 0 to one less than the number specified. End of explanation for i in range(1,6): print("The square of",i,"is:",i**2) Explanation: We can also use the range function to perform mathematical tricks. End of explanation for i in range(1,10): print("*"*i) Explanation: Or to check for certain other conditions or properties, or to define how many times an activity will be performed. End of explanation # Your Code Here Explanation: Exercise Print all numbers from 1 to 20 End of explanation # Your Code Here Explanation: Exercise Print the square of the first 10 natural numbers. End of explanation for i in range(1,100): print("The square of",i,"is:",i**2) if i >= 5: break print("Broken") Explanation: Become a Control Freak And now, it's time to become a master of control! A data scientist needs absolute control over loops, stopping when defined conditions are met, or carrying on till a solution if found. <img src="images/break.jpg"> Break End of explanation letters = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"] for letter in letters: print("Currently testing letter", letter) if letter == "e": print("I plead the 5th!") continue print( letter) Explanation: Continue Break's cousin is called Continue. If a certain condition is met, carry on. End of explanation # Here is a standard for loop numList = [] for num in range(1,11): numList.append(num**2) print (numList) Explanation: List Comprehension Remember lists? Now here's a way to power through a large list in one line! As a Data Scientist, you will need to write a lot of code very efficiently, especially in the data exploration stage. The more experiments you can run to understand your data, the better it is. This is also a very useful tool in transforming one list (or dictionary) into another list. Let's begin by some simple examples First, we will write a program to generate the squares of the first 10 natural numbers, using a standard for loop. Next, we will contrast that with the List Comprehension approach. End of explanation # Now for List Comprehension sqList = [num**2 for num in range(1,11)] print(sqList) [num**2 for num in range(1,11)] Explanation: So far, so good! End of explanation cubeList = [num**3 for num in range(6)] print(cubeList) Explanation: How's that for speed?! Here's the format for List Comprehensions, in English. ListName = [Expected_Result_or_Operation for Item in a given range]<br> print the ListName End of explanation nums = [1,2,3,4,5,6,7,8,9,10] # For every n in the list named nums, I want an n my_list1 = [n for n in nums] print(my_list1) # For every n in the list named nums, I want n to be squared my_list2 = [n**2 for n in nums] print(my_list2) # For every n in the list named nums, I want n, only if it is even my_list3 = [n for n in nums if n%2 == 0] print(my_list3) Explanation: List comprehensions are very useful when dealing with an existing list. Let's see some examples. 
End of explanation radius = [1.0, 2.0, 3.0, 4.0, 5.0] import math # Area of Circle = Pi * (radius**2) area = [round((r**2)*math.pi,2) for r in radius] print(area) Explanation: How about calculating the areas of circles, given a list of radii? That too in just one line. End of explanation data = { "Richard": { "Title": "CEO", "Employees": ["Dinesh", "Gilfoyle", "Jared"], "Awards": ["Techcrunch Disrupt"], "Previous Firm": "Hooli", "Board Seat":1, "Net Worth": 100000 }, "Jared": { "Real_Name": "Donald", "Title": "CFO", "Previous Firm": "Hooli", "Board Seat":1, "Net Worth": 500 }, "Erlich": { "Title": "Visionary", "Previous Firm": "Aviato", "Current Firm": "Bachmannity", "Incubees": ["Richard", "Dinesh", "Gilfoyle", "Nelson", "Jian Yang"], "Board Seat": 1, "Net Worth": 5000000 }, "Nelson": { "Title": "Co-Founder", "Current Firm": "Bachmannity", "Previous Firm": "Hooli", "Board Seat": 0, "Net Worth": 10000000 }, } # Print all details for people who have incubees [(k,v) for k, v in data.items() if "Incubees" in v ] for name in data.items(): if "Net Worth" in name[1] and name[1]["Net Worth"]>500000: print (name[0], name[1]["Net Worth"]) high_nw = [(name[0], name[1]["Net Worth"]) for name in data.items() if "Net Worth" in name[1] and name[1]["Net Worth"]>500000] print(high_nw) type(high_nw) type(high_nw[0]) Explanation: Dictionary Comprehension Let's get back to our dictionary named Data. Dictionary Comprehension can be a very efficient way to extract information out of them. Especially when you have thousands or millions of records. End of explanation name = ['George HW', 'Bill', 'George', 'Barack', 'Donald', 'Bugs'] surname = ['Bush', 'Clinton', 'Bush Jr', 'Obama', 'Trump', 'Bunny'] full_names = {n:s for n,s in zip(name,surname)} full_names # What if we want to exclude certain values? full_names = {n:s for n,s in zip(name, surname) if n!='Bugs'} print(full_names) Explanation: We can also use dictionary comprehension to create new dictionaries End of explanation
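To tie the last two cells together: the net-worth filter written earlier as a list comprehension over data.items() can also be expressed as a dictionary comprehension. This sketch assumes the data dictionary is still defined as above.

high_nw_map = {name: info["Net Worth"]
               for name, info in data.items()
               if info.get("Net Worth", 0) > 500000}
print(high_nw_map)  # e.g. {'Erlich': 5000000, 'Nelson': 10000000}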
2,434
Given the following text description, write Python code to implement the functionality described below step by step Description: Embeddings for Weather Data An embedding is a low-dimensional, vector representation of a (typically) high-dimensional feature which maintains the semantic meaning of the feature in a such a way that similar features are close in the embedding space. In this notebook, we use autoencoders to create embeddings for HRRR images. We can then use the embeddings to search for "similar" weather patterns. Step1: Reading HRRR data and converting to TensorFlow Records HRRR data comes in a Grib2 files on Cloud Storage. Step4: We have to choose one of the following Step5: Write a Beam pipeline Step6: Read the written TF Records Step7: Create autoencoder in Keras Step8: Train the autoencoder Step9: Run at scale Step10: <img src="dataflow_2019.png" /> Step11: Try out the autoencoder Load the Keras model, and try out the autoencoder functionality. Step12: Try decoding Step13: Let's decode the first row. First, we create a decoder Step14: Then, we invoke decoder.predict() to reconstruct the image from the 50 numbers in the embedding. Step15: What does the original look like? Let's pull the original HRRR Grib file from this time stamp Step16: Searching for similar images Suppose we want to find an image similar to the above image. Step17: This makes a lot of sense. The image from the previous/next hour is the most similar. Then, images from +/- 2 hours ... What if we want to find the most similar image that is not within +/- 1 day? Since we have only 1 year of data, we are not going to great analogs, possibly, but let's see what we get. Step18: Really, Jan. 2 in 2019 had similar weather to Sep 20? Let's see ... Step19: What about July 1, 2019, which is next on the list? Step20: Both these have weather in approximately the same places. On Jan 1, things are more widespread than in July. The date we searched for (May) is somewhere in between the Jan and July weather scenarios ... To make the search work better, we should use smaller tiles (not the whole country, but perhaps 500kmx500km tiles). Then, we'll get mesoscale phenomena. Embeddings for interpolation ... Can we use the embeddings for interpolating between images? Recall that the sqdist between subsequent hours is about 0.5 What happens if we use the image at t-1 and t+1 to get t=0? What's the error? Step22: Clustering the embeddings If the differences between images are meaningful, then it makes sense that we could cluster the images using just the embeddings. Let's do K-Means clustering into 5 categories and visualize the five centroids.
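Before the notebook code, a minimal sketch of the similarity-search idea described above: once each image is reduced to a fixed-length embedding vector, "similar" weather simply means a small squared Euclidean distance between vectors. The toy 4-dimensional vectors below are made up for illustration; the notebook itself uses 50-dimensional embeddings and does the same distance computation in BigQuery.

import numpy as np

embeddings = {
    '2019-09-20 05:00': np.array([0.1, 0.8, 0.3, 0.0]),
    '2019-09-20 06:00': np.array([0.2, 0.7, 0.3, 0.1]),
    '2019-01-02 12:00': np.array([0.9, 0.1, 0.6, 0.4]),
}

query = embeddings['2019-09-20 05:00']
sqdist = {ts: float(np.sum((vec - query) ** 2)) for ts, vec in embeddings.items()}
for ts, d in sorted(sqdist.items(), key=lambda kv: kv[1]):
    print(ts, round(d, 3))  # nearest timestamps first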
Python Code: !sudo apt-get -y --quiet install libeccodes0 %pip install -q cfgrib xarray pydot import apache_beam as beam print(beam.__version__) PROJECT='ai-analytics-solutions' BUCKET='{}-kfpdemo'.format(PROJECT) Explanation: Embeddings for Weather Data An embedding is a low-dimensional, vector representation of a (typically) high-dimensional feature which maintains the semantic meaning of the feature in a such a way that similar features are close in the embedding space. In this notebook, we use autoencoders to create embeddings for HRRR images. We can then use the embeddings to search for "similar" weather patterns. End of explanation !gsutil ls -l gs://high-resolution-rapid-refresh/hrrr.20200811/conus/hrrr.*.wrfsfcf00* FILENAME="gs://high-resolution-rapid-refresh/hrrr.20200811/conus/hrrr.t18z.wrfsfcf06.grib2" # derecho in the Midwest !gsutil ls -l {FILENAME} import xarray as xr import tensorflow as tf import tempfile import cfgrib with tempfile.TemporaryDirectory() as tmpdirname: TMPFILE="{}/read_grib".format(tmpdirname) tf.io.gfile.copy(FILENAME, TMPFILE, overwrite=True) ds = cfgrib.open_datasets(TMPFILE) print(ds) Explanation: Reading HRRR data and converting to TensorFlow Records HRRR data comes in a Grib2 files on Cloud Storage. End of explanation import xarray as xr import tensorflow as tf import tempfile import cfgrib import numpy as np refc = 0 with tempfile.TemporaryDirectory() as tmpdirname: TMPFILE="{}/read_grib".format(tmpdirname) tf.io.gfile.copy(FILENAME, TMPFILE, overwrite=True) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'surface', 'stepType': 'instant'}}) #ds.data_vars['prate'].plot() # crain, prate ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'unknown', 'stepType': 'instant'}}) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'atmosphere', 'stepType': 'instant'}}) refc = ds.data_vars['refc'] refc.plot() print(np.array([refc.sizes['y'], refc.sizes['x']])) print(refc.time.data) print(refc.valid_time.data) print(str(refc.time.data)[:19]) import numpy as np def _array_feature(value, min_value, max_value): Wrapper for inserting ndarray float features into Example proto. 
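A small aside on the cfgrib call above: cfgrib.open_datasets() returns a list of xarray Datasets, one per group of compatible GRIB keys, so looping over their data_vars is a quick way to see which variables (such as refc) live in which group. A sketch, assuming ds is the list produced in the cell above:

# Inspect which variables live in each dataset returned by cfgrib.
for i, sub_ds in enumerate(ds):
    print(i, list(sub_ds.data_vars))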
value = np.nan_to_num(value.flatten()) # nan, -inf, +inf to numbers value = np.clip(value, min_value, max_value) # clip to valid return tf.train.Feature(float_list=tf.train.FloatList(value=value)) def create_tfrecord(filename): with tempfile.TemporaryDirectory() as tmpdirname: TMPFILE="{}/read_grib".format(tmpdirname) tf.io.gfile.copy(filename, TMPFILE, overwrite=True) ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'unknown', 'stepType': 'instant'}}) # create a TF Record with the raw data tfexample = tf.train.Example( features=tf.train.Features( feature={ 'ref': _array_feature(ds.data_vars['refc'].data, min_value=0, max_value=60), })) return tfexample.SerializeToString() s = create_tfrecord(FILENAME) print(len(s), s[:16]) from datetime import datetime, timedelta def generate_filenames(startdate: str, enddate: str): start_dt = datetime.strptime(startdate, '%Y%m%d') end_dt = datetime.strptime(enddate, '%Y%m%d') dt = start_dt while dt <= end_dt: # gs://high-resolution-rapid-refresh/hrrr.20200811/conus/hrrr.t04z.wrfsfcf00.grib2 f = '{}/hrrr.{:4}{:02}{:02}/conus/hrrr.t{:02}z.wrfsfcf00.grib2'.format( 'gs://high-resolution-rapid-refresh', dt.year, dt.month, dt.day, dt.hour) dt = dt + timedelta(hours=1) yield f def generate_shuffled_filenames(startdate: str, enddate: str): shuffle the files so that a batch of records doesn't contain highly correlated entries filenames = [f for f in generate_filenames(startdate, enddate)] np.random.shuffle(filenames) return filenames print(generate_shuffled_filenames('20190915', '20190917')) Explanation: We have to choose one of the following: filter_by_keys={'typeOfLevel': 'unknown'} filter_by_keys={'typeOfLevel': 'cloudTop'} filter_by_keys={'typeOfLevel': 'surface'} filter_by_keys={'typeOfLevel': 'heightAboveGround'} filter_by_keys={'typeOfLevel': 'isothermal'} filter_by_keys={'typeOfLevel': 'isobaricInhPa'} filter_by_keys={'typeOfLevel': 'pressureFromGroundLayer'} filter_by_keys={'typeOfLevel': 'sigmaLayer'} filter_by_keys={'typeOfLevel': 'meanSea'} filter_by_keys={'typeOfLevel': 'heightAboveGroundLayer'} filter_by_keys={'typeOfLevel': 'sigma'} filter_by_keys={'typeOfLevel': 'depthBelowLand'} filter_by_keys={'typeOfLevel': 'isobaricLayer'} filter_by_keys={'typeOfLevel': 'cloudBase'} filter_by_keys={'typeOfLevel': 'nominalTop'} filter_by_keys={'typeOfLevel': 'isothermZero'} filter_by_keys={'typeOfLevel': 'adiabaticCondensation'} End of explanation %run -m wxsearch.hrrr_to_tfrecord -- --startdate 20190915 --enddate 20190916 --outdir gs://{BUCKET}/wxsearch/data/2019 --project {PROJECT} # --outdir tmp Explanation: Write a Beam pipeline End of explanation # try reading what was written out import tensorflow as tf def parse_tfrecord(example_data): parsed = tf.io.parse_single_example(example_data, { 'size': tf.io.VarLenFeature(tf.int64), 'ref': tf.io.VarLenFeature(tf.float32), 'time': tf.io.FixedLenFeature([], tf.string), 'valid_time': tf.io.FixedLenFeature([], tf.string) }) parsed['size'] = tf.sparse.to_dense(parsed['size']) parsed['ref'] = tf.reshape(tf.sparse.to_dense(parsed['ref']), (1059, 1799))/60. 
# 0 to 1 return parsed def read_dataset(pattern): filenames = tf.io.gfile.glob(pattern) ds = tf.data.TFRecordDataset(filenames, compression_type=None, buffer_size=None, num_parallel_reads=None) return ds.prefetch(tf.data.experimental.AUTOTUNE).map(parse_tfrecord) ds = read_dataset('gs://{}/wxsearch/data/2019/tfrecord-00000-*'.format(BUCKET)) for refc in ds.take(1): print(repr(refc)) Explanation: Read the written TF Records End of explanation ## A model without the intermediate Dense layer, so that end result is effectively tiled ## We use more filters to represent the tiles import tensorflow as tf def create_model(nlayers=4, poolsize=4, numfilters=5, num_dense=0): input_img = tf.keras.Input(shape=(1059, 1799, 1), name='refc_input') x = tf.keras.layers.Cropping2D(cropping=((17, 18),(4, 3)), name='cropped')(input_img) last_pool_layer = None for layerno in range(nlayers): x = tf.keras.layers.Conv2D(2**(layerno + numfilters), poolsize, activation='relu', padding='same', name='encoder_conv_{}'.format(layerno))(x) last_pool_layer = tf.keras.layers.MaxPooling2D(poolsize, padding='same', name='encoder_pool_{}'.format(layerno)) x = last_pool_layer(x) output_shape = last_pool_layer.output_shape[1:] if num_dense == 0: # flatten to create the embedding x = tf.keras.layers.Flatten(name='refc_embedding')(x) embed_size = output_shape[0] * output_shape[1] * output_shape[2] if embed_size > 1024: print("Embedding size={} is too large".format(embed_size)) return None, embed_size else: # flatten, send through dense layer to create the embedding x = tf.keras.layers.Flatten(name='encoder_flatten')(x) x = tf.keras.layers.Dense(num_dense, name='refc_embedding')(x) x = tf.keras.layers.Dense(output_shape[0] * output_shape[1] * output_shape[2], name='decoder_dense')(x) embed_size = num_dense x = tf.keras.layers.Reshape(output_shape, name='decoder_reshape')(x) for layerno in range(nlayers): x = tf.keras.layers.Conv2D(2**(nlayers-layerno-1 + numfilters), poolsize, activation='relu', padding='same', name='decoder_conv_{}'.format(layerno))(x) x = tf.keras.layers.UpSampling2D(poolsize, name='decoder_upsamp_{}'.format(layerno))(x) before_padding_layer = tf.keras.layers.Conv2D(1, 3, activation='relu', padding='same', name='before_padding') x = before_padding_layer(x) htdiff = 1059 - before_padding_layer.output_shape[1] wddiff = 1799 - before_padding_layer.output_shape[2] if htdiff < 0 or wddiff < 0: print("Invalid architecture: htdiff={} wddiff={}".format(htdiff, wddiff)) return None, 9999 decoded = tf.keras.layers.ZeroPadding2D(padding=((htdiff//2,htdiff - htdiff//2), (wddiff//2,wddiff - wddiff//2)), name='refc_reconstructed')(x) autoencoder = tf.keras.Model(input_img, decoded, name='autoencoder') autoencoder.compile(optimizer='adam', loss=tf.keras.losses.LogCosh()) #loss='mse') if autoencoder.count_params() > 1000*1000: # 1 million print("Autoencoder too large: {} params".format(autoencoder.count_params())) return None, autoencoder.count_params() return autoencoder, embed_size autoencoder, sz = create_model(4, 4, 4, 50) if autoencoder: print(sz, autoencoder.count_params()) autoencoder.summary() Explanation: Create autoencoder in Keras End of explanation def input_and_label(rec): return rec['ref'], rec['ref'] ds = read_dataset('gs://{}/wxsearch/data/2019/tfrecord-00000-*'.format(BUCKET)).map(input_and_label).batch(2).repeat() checkpoint = tf.keras.callbacks.ModelCheckpoint('tmp/checkpoints') history = autoencoder.fit(ds, steps_per_epoch=1, epochs=3, shuffle=True, callbacks=[checkpoint]) print(history) 
autoencoder.save('tmp/savedmodel') from matplotlib import pyplot as plt plt.plot(history.history['loss']); %run -m wxsearch.train_autoencoder -- --input gs://{BUCKET}/wxsearch/data/2019/tfrecord-00000-* --outdir gs://{BUCKET}/wxsearch/trained --project {PROJECT} Explanation: Train the autoencoder End of explanation %run -m wxsearch.hrrr_to_tfrecord -- --startdate 20190101 --enddate 20200101 --outdir gs://{BUCKET}/wxsearch/data/2019 --project {PROJECT} Explanation: Run at scale End of explanation %%writefile train.yaml trainingInput: scaleTier: CUSTOM masterType: n1-highmem-2 masterConfig: acceleratorConfig: count: 2 type: NVIDIA_TESLA_K80 runtimeVersion: '2.2' pythonVersion: '3.7' scheduling: maxWaitTime: 3600s %%bash PROJECT=$(gcloud config get-value project) echo ${PROJECT} BUCKET="ai-analytics-solutions-kfpdemo" PACKAGE_PATH="${PWD}/wxsearch" now=$(date +"%Y%m%d_%H%M%S") JOB_NAME="wxsearch_$now" MODULE_NAME="wxsearch.train_autoencoder" JOB_DIR="gs://${BUCKET}/wxsearch/train/jobdir" REGION="us-central1" # 9000 images in dataset gcloud ai-platform jobs submit training $JOB_NAME \ --package-path $PACKAGE_PATH \ --module-name $MODULE_NAME \ --job-dir $JOB_DIR \ --region $REGION \ --config train.yaml \ -- \ --input gs://${BUCKET}/wxsearch/data/2019/tfrecord-* \ --outdir gs://${BUCKET}/wxsearch/trained \ --project ${PROJECT} \ --batch_size 4 --num_steps 10000 --num_checkpoints 10 Explanation: <img src="dataflow_2019.png" /> End of explanation import tensorflow as tf model = tf.keras.models.load_model('gs://ai-analytics-solutions-kfpdemo/wxsearch/trained/savedmodel') embed_output = model.get_layer('refc_embedding').output embedder = tf.keras.Model(model.input, embed_output, name='embedder') print(embedder.summary()) import tensorflow as tf PROJECT='ai-analytics-solutions' BUCKET='{}-kfpdemo'.format(PROJECT) print(BUCKET) def parse_tfrecord(example_data): parsed = tf.io.parse_single_example(example_data, { 'size': tf.io.VarLenFeature(tf.int64), 'ref': tf.io.VarLenFeature(tf.float32), 'time': tf.io.FixedLenFeature([], tf.string), 'valid_time': tf.io.FixedLenFeature([], tf.string) }) parsed['size'] = tf.sparse.to_dense(parsed['size']) parsed['ref'] = tf.reshape(tf.sparse.to_dense(parsed['ref']), (1059, 1799))/60. # 0 to 1 return parsed def read_dataset(pattern): filenames = tf.io.gfile.glob(pattern) ds = tf.data.TFRecordDataset(filenames, compression_type=None, buffer_size=None, num_parallel_reads=None) return ds.prefetch(tf.data.experimental.AUTOTUNE).map(parse_tfrecord) ds = read_dataset('gs://{}/wxsearch/data/2019/tfrecord-00000-*'.format(BUCKET)) for rec in ds.take(1): print(rec['ref']) refc = tf.expand_dims(tf.expand_dims(rec['ref'], 0), -1) x = embedder.predict(refc) print(tf.squeeze(x, axis=0)) print(tf.__version__) !gsutil ls gs://ai-analytics-solutions-kfpdemo/wxsearch/data/2019/ %run -m wxsearch.compute_embedding -- --output_table {PROJECT}:advdata.wxembed --savedmodel gs://{BUCKET}/wxsearch/trained/savedmodel --input gs://{BUCKET}/wxsearch/data/2019/tfrecord-* --outdir gs://{BUCKET}/wxsearch/tmp --project {PROJECT} Explanation: Try out the autoencoder Load the Keras model, and try out the autoencoder functionality. 
End of explanation %%bigquery df SELECT * FROM advdata.wxembed df.head(n=5) Explanation: Try decoding End of explanation import tensorflow as tf def create_decoder(model_dir): model = tf.keras.models.load_model(model_dir) decoder_input = tf.keras.Input([50], name='embed_input') embed_seen = False x = decoder_input for layer in model.layers: if embed_seen: x = layer(x) elif layer.name == 'refc_embedding': embed_seen = True decoder = tf.keras.Model(decoder_input, x, name='decoder') print(decoder.summary()) return decoder decoder = create_decoder('gs://ai-analytics-solutions-kfpdemo/wxsearch/trained/savedmodel') Explanation: Let's decode the first row. First, we create a decoder End of explanation import tensorflow as tf import numpy as np embed = tf.reshape( tf.convert_to_tensor(df['ref'].values[0], dtype=tf.float32), [-1, 50]) outimg = decoder.predict(embed).squeeze() * 60 print(len(df['ref'].values[0])) print(np.max(outimg)) import matplotlib.pyplot as plt plt.imshow(outimg, origin='lower'); Explanation: Then, we invoke decoder.predict() to reconstruct the image from the 50 numbers in the embedding. End of explanation import pandas as pd dt = pd.Timestamp(df['time'].values[0]) # gs://high-resolution-rapid-refresh/hrrr.20200811/conus/hrrr.t04z.wrfsfcf00.grib2 FILENAME = '{}/hrrr.{:4}{:02}{:02}/conus/hrrr.t{:02}z.wrfsfcf00.grib2'.format( 'gs://high-resolution-rapid-refresh', dt.year, dt.month, dt.day, dt.hour) print(FILENAME) import xarray as xr import tensorflow as tf import tempfile import cfgrib import numpy as np refc = 0 with tempfile.TemporaryDirectory() as tmpdirname: TMPFILE="{}/read_grib".format(tmpdirname) tf.io.gfile.copy(FILENAME, TMPFILE, overwrite=True) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'surface', 'stepType': 'instant'}}) #ds.data_vars['prate'].plot() # crain, prate ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'unknown', 'stepType': 'instant'}}) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'atmosphere', 'stepType': 'instant'}}) refc = ds.data_vars['refc'] refc.plot() Explanation: What does the original look like? Let's pull the original HRRR Grib file from this time stamp End of explanation %%bigquery WITH ref1 AS ( SELECT time AS ref1_time, ref1_value, ref1_offset FROM `ai-analytics-solutions.advdata.wxembed`, UNNEST(ref) AS ref1_value WITH OFFSET AS ref1_offset WHERE time = '2019-09-20 05:00:00 UTC' ) SELECT time, SUM( (ref1_value - ref[OFFSET(ref1_offset)]) * (ref1_value - ref[OFFSET(ref1_offset)]) ) AS sqdist FROM ref1, `ai-analytics-solutions.advdata.wxembed` GROUP BY 1 ORDER By sqdist ASC LIMIT 5 Explanation: Searching for similar images Suppose we want to find an image similar to the above image. End of explanation %%bigquery df WITH ref1 AS ( SELECT time AS ref1_time, ref1_value, ref1_offset FROM `ai-analytics-solutions.advdata.wxembed`, UNNEST(ref) AS ref1_value WITH OFFSET AS ref1_offset WHERE time = '2019-09-20 05:00:00 UTC' ) SELECT time, SUM( (ref1_value - ref[OFFSET(ref1_offset)]) * (ref1_value - ref[OFFSET(ref1_offset)]) ) AS sqdist FROM ref1, `ai-analytics-solutions.advdata.wxembed` WHERE time NOT BETWEEN '2019-09-19' AND '2019-09-21' GROUP BY 1 ORDER By sqdist ASC LIMIT 5 df Explanation: This makes a lot of sense. The image from the previous/next hour is the most similar. Then, images from +/- 2 hours ... What if we want to find the most similar image that is not within +/- 1 day? 
Since we have only 1 year of data, we are not going to great analogs, possibly, but let's see what we get. End of explanation import pandas as pd dt = pd.Timestamp(df['time'].values[0]) # gs://high-resolution-rapid-refresh/hrrr.20200811/conus/hrrr.t04z.wrfsfcf00.grib2 FILENAME = '{}/hrrr.{:4}{:02}{:02}/conus/hrrr.t{:02}z.wrfsfcf00.grib2'.format( 'gs://high-resolution-rapid-refresh', dt.year, dt.month, dt.day, dt.hour) print(FILENAME) import xarray as xr import tensorflow as tf import tempfile import cfgrib import numpy as np refc = 0 with tempfile.TemporaryDirectory() as tmpdirname: TMPFILE="{}/read_grib".format(tmpdirname) tf.io.gfile.copy(FILENAME, TMPFILE, overwrite=True) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'surface', 'stepType': 'instant'}}) #ds.data_vars['prate'].plot() # crain, prate ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'unknown', 'stepType': 'instant'}}) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'atmosphere', 'stepType': 'instant'}}) refc = ds.data_vars['refc'] refc.plot() Explanation: Really, Jan. 2 in 2019 had similar weather to Sep 20? Let's see ... End of explanation import pandas as pd dt = pd.Timestamp(df['time'].values[2]) # gs://high-resolution-rapid-refresh/hrrr.20200811/conus/hrrr.t04z.wrfsfcf00.grib2 FILENAME = '{}/hrrr.{:4}{:02}{:02}/conus/hrrr.t{:02}z.wrfsfcf00.grib2'.format( 'gs://high-resolution-rapid-refresh', dt.year, dt.month, dt.day, dt.hour) print(FILENAME) import xarray as xr import tensorflow as tf import tempfile import cfgrib import numpy as np refc = 0 with tempfile.TemporaryDirectory() as tmpdirname: TMPFILE="{}/read_grib".format(tmpdirname) tf.io.gfile.copy(FILENAME, TMPFILE, overwrite=True) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'surface', 'stepType': 'instant'}}) #ds.data_vars['prate'].plot() # crain, prate ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'unknown', 'stepType': 'instant'}}) #ds = xr.open_dataset(TMPFILE, engine='cfgrib', backend_kwargs={'filter_by_keys': {'typeOfLevel': 'atmosphere', 'stepType': 'instant'}}) refc = ds.data_vars['refc'] refc.plot() Explanation: What about July 1, 2019, which is next on the list? End of explanation %%bigquery WITH refl1 AS ( SELECT ref1_value, idx FROM `ai-analytics-solutions.advdata.wxembed`, UNNEST(ref) AS ref1_value WITH OFFSET AS idx WHERE time = '2019-09-20 05:00:00 UTC' ), refl2 AS ( SELECT ref2_value, idx FROM `ai-analytics-solutions.advdata.wxembed`, UNNEST(ref) AS ref2_value WITH OFFSET AS idx WHERE time = '2019-09-20 06:00:00 UTC' ), refl3 AS ( SELECT ref3_value, idx FROM `ai-analytics-solutions.advdata.wxembed`, UNNEST(ref) AS ref3_value WITH OFFSET AS idx WHERE time = '2019-09-20 07:00:00 UTC' ) -- SELECT idx, ref1_value, ref2_value, ref3_value, ABS( ref2_value - (ref1_value + ref3_value)/2 ) AS diff SELECT SUM( (ref2_value - (ref1_value + ref3_value)/2) * (ref2_value - (ref1_value + ref3_value)/2) ) AS sqdist FROM refl1 JOIN refl2 USING (idx) JOIN refl3 USING (idx) -- ORDER BY idx ASC Explanation: Both these have weather in approximately the same places. On Jan 1, things are more widespread than in July. The date we searched for (May) is somewhere in between the Jan and July weather scenarios ... To make the search work better, we should use smaller tiles (not the whole country, but perhaps 500kmx500km tiles). 
Then, we'll get mesoscale phenomena. Embeddings for interpolation ... Can we use the embeddings for interpolating between images? Recall that the sqdist between subsequent hours is about 0.5 What happens if we use the image at t-1 and t+1 to get t=0? What's the error? End of explanation # Unfortunately, BigQueryML does not accept arrays as input # so we convert it into a struct. Generate the boilerplate code ... def create_array_to_struct(N): sql = ( CREATE TEMPORARY FUNCTION arr_to_input(arr ARRAY<FLOAT64>) RETURNS STRUCT< + ', '.join(["u{} FLOAT64".format(idx+1) for idx in range(N)]) + ">\n" + "AS (STRUCT(\n" + ', '.join(["arr[OFFSET({})]".format(idx) for idx in range(N)]) + "\n));" ) return sql print(create_array_to_struct(50)) %%bigquery -- Unfortunately, BigQueryML does not accept arrays as input, so we convert it into a struct CREATE TEMPORARY FUNCTION arr_to_input(arr ARRAY<FLOAT64>) RETURNS STRUCT<u1 FLOAT64, u2 FLOAT64, u3 FLOAT64, u4 FLOAT64, u5 FLOAT64, u6 FLOAT64, u7 FLOAT64, u8 FLOAT64, u9 FLOAT64, u10 FLOAT64, u11 FLOAT64, u12 FLOAT64, u13 FLOAT64, u14 FLOAT64, u15 FLOAT64, u16 FLOAT64, u17 FLOAT64, u18 FLOAT64, u19 FLOAT64, u20 FLOAT64, u21 FLOAT64, u22 FLOAT64, u23 FLOAT64, u24 FLOAT64, u25 FLOAT64, u26 FLOAT64, u27 FLOAT64, u28 FLOAT64, u29 FLOAT64, u30 FLOAT64, u31 FLOAT64, u32 FLOAT64, u33 FLOAT64, u34 FLOAT64, u35 FLOAT64, u36 FLOAT64, u37 FLOAT64, u38 FLOAT64, u39 FLOAT64, u40 FLOAT64, u41 FLOAT64, u42 FLOAT64, u43 FLOAT64, u44 FLOAT64, u45 FLOAT64, u46 FLOAT64, u47 FLOAT64, u48 FLOAT64, u49 FLOAT64, u50 FLOAT64> AS ( STRUCT( arr[OFFSET(0)], arr[OFFSET(1)], arr[OFFSET(2)], arr[OFFSET(3)], arr[OFFSET(4)] , arr[OFFSET(5)], arr[OFFSET(6)], arr[OFFSET(7)], arr[OFFSET(8)], arr[OFFSET(9)] , arr[OFFSET(10)], arr[OFFSET(11)], arr[OFFSET(12)], arr[OFFSET(13)], arr[OFFSET(14)] , arr[OFFSET(15)], arr[OFFSET(16)], arr[OFFSET(17)], arr[OFFSET(18)], arr[OFFSET(19)] , arr[OFFSET(20)], arr[OFFSET(21)], arr[OFFSET(22)], arr[OFFSET(23)], arr[OFFSET(24)] , arr[OFFSET(25)], arr[OFFSET(26)], arr[OFFSET(27)], arr[OFFSET(28)], arr[OFFSET(29)] , arr[OFFSET(30)], arr[OFFSET(31)], arr[OFFSET(32)], arr[OFFSET(33)], arr[OFFSET(34)] , arr[OFFSET(35)], arr[OFFSET(36)], arr[OFFSET(37)], arr[OFFSET(38)], arr[OFFSET(39)] , arr[OFFSET(40)], arr[OFFSET(41)], arr[OFFSET(42)], arr[OFFSET(43)], arr[OFFSET(44)] , arr[OFFSET(45)], arr[OFFSET(46)], arr[OFFSET(47)], arr[OFFSET(48)], arr[OFFSET(49)] )); CREATE OR REPLACE MODEL advdata.hrrr_clusters OPTIONS(model_type='kmeans', num_clusters=5, KMEANS_INIT_METHOD='KMEANS++') AS SELECT arr_to_input(ref) AS ref FROM `ai-analytics-solutions.advdata.wxembed` %%bigquery SELECT * FROM ML.CENTROIDS(MODEL advdata.hrrr_clusters) LIMIT 5 %%bigquery df SELECT centroid_id, CAST(REPLACE(feature, 'ref_u', '') AS INT64) AS feature, numerical_value FROM ML.CENTROIDS(MODEL advdata.hrrr_clusters) df.head() df[df['centroid_id']==1].sort_values(by='feature')['numerical_value'].values import tensorflow as tf def create_decoder(model_dir): model = tf.keras.models.load_model(model_dir) decoder_input = tf.keras.Input([50], name='embed_input') embed_seen = False x = decoder_input for layer in model.layers: if embed_seen: x = layer(x) elif layer.name == 'refc_embedding': embed_seen = True decoder = tf.keras.Model(decoder_input, x, name='decoder') print(decoder.summary()) return decoder decoder = create_decoder('gs://ai-analytics-solutions-kfpdemo/wxsearch/trained/savedmodel') import tensorflow as tf import numpy as np import matplotlib.pyplot as plt f, axarr = plt.subplots(3,2, 
figsize=(15,15)) for cid in range(1,6): embed = df[df['centroid_id']==cid].sort_values(by='feature')['numerical_value'].values embed = tf.reshape( tf.convert_to_tensor(embed, dtype=tf.float32), [-1, 50]) outimg = decoder.predict(embed).squeeze() * 60 print(np.max(outimg)) axarr[ (cid-1)//2, (cid-1)%2].imshow(outimg, origin='lower'); Explanation: Clustering the embeddings If the differences between images are meaningful, then it makes sense that we could cluster the images using just the embeddings. Let's do K-Means clustering into 5 categories and visualize the five centroids. End of explanation
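As an aside (my addition, not part of the original notebook), the same clustering can be sanity-checked locally with scikit-learn instead of BigQuery ML. The sketch below assumes the full embedding table has been pulled back into a dataframe named embed_df (for example by re-running the earlier SELECT * FROM advdata.wxembed cell into embed_df) and reuses the decoder built above; scikit-learn and the embed_df name are my assumptions, not part of the original pipeline.

# Illustration only: cluster the embeddings locally and decode the centroids.
# Assumes embed_df holds the wxembed rows ('ref' = 50-number embedding) and that
# `decoder` is the Keras decoder created above. scikit-learn is an extra dependency.
import numpy as np
from sklearn.cluster import KMeans

X = np.stack(embed_df['ref'].values).astype(np.float32)      # (num_images, 50)
kmeans = KMeans(n_clusters=5, random_state=0).fit(X)
centroid_imgs = decoder.predict(kmeans.cluster_centers_.astype(np.float32)) * 60
print(centroid_imgs.shape)                                    # expect (5, 1059, 1799, 1)

Comparing these locally computed centroids against the ML.CENTROIDS output above is a quick way to confirm the two pipelines agree.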
2,435
Given the following text description, write Python code to implement the functionality described below step by step Description: First Steps with Python Source Step1: Variable amount of parameters Step2: Variable amount of named parameters Step3: Regular expressions Step4: Sets Step5: JSON/Pickle serialization Using JSON.<br/> Properties of JSON serialization Step6: write JSON to and from a file Step7: Using cPickle, Python's proprietary object serialization method. With pickle, objects can be serialized.<br/> Properties of pickle serialization
Python Code: sentence = 'the quick brown fox jumps over the lazy dog' words = sentence.split() word_lengths = [len(word) for word in words if 'the' != word] print(word_lengths) Explanation: First Steps with Python Source: learnpython.org List comprehensions End of explanation def foo(first, second, third, *therest): print('First: %s' % first) print('Second: %s' % second, end=' or ') print('Second: {}'.format(second)) # more modern approach print('Third: %s' % third) print('And all the rest... %s' % list(therest)) return print(foo(1,2,3,4,5)) Explanation: Variable amount of parameters End of explanation def bar(first, second, third, **options): print('Options is a variable of {}.'.format(type(options))) if 'sum' == options.get('action'): print('The sum is: %d' % (first + second + third)) if 'first' == options.get('number'): return first result = bar(1, 2, 3, action='sum', number='first') print('Result: %d' % result) Explanation: Variable amount of named parameters End of explanation myExp = r'^(From|To|Cc).*[email protected]' import re pattern = re.compile(myExp) result = re.match(pattern, 'From [email protected] and some more') if result: print(result) print('Whole result:', result.group(0), sep='\t') print('First part:', result.group(1), sep='\t') Explanation: Regular expressions End of explanation a = set(('Jake', 'John', 'Eric')) # generate a set from a tuple b = set(['John', 'Jill']) # or generate a set from a list print(a.intersection(b)) # in both sets print(a.difference(b)) # in a but not in b print(a.symmetric_difference(b)) # distinct print(a.union(b)) # joined set Explanation: Sets End of explanation import json json_string = json.dumps([1, 2, 3, 'a', 'b', 'c']) print(json_string) print(json.loads(json_string)) json_string = json.dumps([1, 2, 3, 'a', 'b', 'c'], indent=2, sort_keys=True, separators=(',', ':')) print(json_string) Explanation: JSON/Pickle serialization Using JSON.<br/> Properties of JSON serialization: json = {binary: false, humanReadable: true, pythonSpecific: false, serializeCustomClasses: false} End of explanation writeFp = open('config.json', 'w') json.dump({'b':1, 'a':2}, writeFp, sort_keys=True) writeFp.close() readFp = open('config.json', 'r') for line in readFp: print(line) readFp.close() # separators without spaces reduce json file size print(json.dumps({'b':1, 'a':2}, sort_keys=True, separators=(',', ':'))) Explanation: write JSON to and from a file End of explanation import pickle # or cPickle for a faster implementation pickled_string = pickle.dumps([1, 2, 3, 'a', 'b', 'c']) print(pickle.loads(pickled_string), end='\n\n') class MyTestClass: def say(self): return 'hello' pickled_string = pickle.dumps(MyTestClass()) print('Pickled:\t{}\nUnpickled:\t{}\nInstance call:\t{}'.format( pickled_string, pickle.loads(pickled_string), pickle.loads(pickled_string).say() ) ) Explanation: Using cPickle, Python's proprietary object serialization method. With pickle, objects can be serialized.<br/> Properties of pickle serialization: pickle = {binary: true, humanReadable: false, pythonSpecific: true, serializeCustomClasses: true} End of explanation
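One small follow-up that is not in the original lesson: pickle can round-trip through a file just like the JSON example, the only wrinkle being that pickle files must be opened in binary mode ('wb'/'rb'). A minimal sketch:

# File round-trip with pickle (my addition); MyTestClass is the class defined above.
import pickle

with open('config.pickle', 'wb') as write_fp:
    pickle.dump({'b': 1, 'a': 2, 'obj': MyTestClass()}, write_fp)

with open('config.pickle', 'rb') as read_fp:
    restored = pickle.load(read_fp)

print(restored['a'], restored['b'], restored['obj'].say())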
2,436
Given the following text description, write Python code to implement the functionality described below step by step Description: Examples The core function of this package is design_matrices(). It returns an object of class DesignMatrices that contains information about the response, common effects and group specific effects. Step1: We can use both functions that are loaded in the top environment as well as non-syntactic names passed within ``. Specification of group specific effects is much like what you have in the R package lme4. Step2: $Z$ matrix can be subsetted by passing the name of the group specific term. Step3: Reference class example This feature is taken from current Bambi behavior (you don't find it in Patsy or formulaic)
Python Code: import pandas as pd import numpy as np from formulae import design_matrices np.random.seed(1234) SIZE = 20 CNT = 20 data = pd.DataFrame( { 'x': np.random.normal(size=SIZE), 'y': np.random.normal(size=SIZE), 'z': np.random.normal(size=SIZE), '$2#abc': np.random.normal(size=SIZE), 'g1': np.random.choice(['a', 'b', 'c'], SIZE), 'g2': np.random.choice(['YES', 'NO'], SIZE) } ) Explanation: Examples The core function of this package is design_matrices(). It returns an object of class DesignMatrices that contains information about the response, common effects and group specific effects. End of explanation design = design_matrices("y ~ np.exp(x) + `$2#abc` + (z|g1)", data) print(design.response) print(design.response.design_vector) print(design.common) print(design.common.design_matrix) print(design.common['$2#abc']) print(design.group) print(design.group.design_matrix) # note it is a sparse matrix Explanation: We can use both functions taht are loaded in the top environment as well as non-syntactic names passed within ``. Specification of group specific effects is much like what you have in R package lme4. End of explanation print(design.group['z|g1']) Explanation: $Z$ matrix can be subsetted by passing the name of the group specific term. End of explanation design = design_matrices('g2[YES] ~ x', data) print(design.response) print(design.response.design_vector) Explanation: Reference class example This feature is taken from current Bambi behavior (you don't find it in Patsy or formulaic) End of explanation
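To make the reference-class encoding concrete, here is a hedged check (my addition, not from the formulae docs): rebuild the same indicator by hand with pandas and compare it to the response design vector. It assumes design still refers to the g2[YES] ~ x call above and that its design_vector can be viewed as a NumPy array.

# The g2[YES] response should be a 0/1 indicator for g2 == 'YES'.
manual = (data['g2'] == 'YES').astype(int).to_numpy()
from_formulae = np.squeeze(np.asarray(design.response.design_vector))
print(manual[:10])
print(from_formulae[:10])
print(np.array_equal(manual, from_formulae.astype(int)))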
2,437
Given the following text description, write Python code to implement the functionality described below step by step Description: Common Error Messages Hi guys, in this lecture we shall be looking at a couple of Python's error messages you are likely to see when writing scripts. We shall also cover a few fixes for said problems. Syntax Error Syntax Error's occur when you have written something that Python violates the grammatical rules of Python. Common causes are Step1: Typo's Typo's are probably the main cause of syntax errors, check that you haven't missed things like brackets, colons, comma's and so on. A few examples... Step2: Name Errors Name errors occur when something has not been defined. Common causes are Step3: "==" is not "=" For experienced users, this is merely another type of typo. However, for beginners this error is sometimes more indicative of a more serious and fundamental misunderstanding. Essentially the error here would be confusing assignment ("=") with asking if a equals b ("=="). If you do not understand this crucial distinction I'd strongly recommend revisiting the lectures on assignment and logic. Here is a simple example Step4: Issues with Scoping As I've mentioned in previous lectures, Python works in code "blocks" and such blocks have their own space to play with. Within that space variables can be defined and those variables are not affected (or even known) by other parts of the code. Common fixes include changing indentation levels, or saving variables to other names spaces. For example Step5: Type Errors Suppose you have an object of Type ‘X’ (Int, str, ...) and some sort of operation ‘Y’ (multiplication, concatenation). The Type error happens when operation ‘Y’ is not compatible with object Type ‘X’. For example Step6: Common causes Step7: So what this code is trying to do is simple; the user enters two numbers(A, B) we add them up and then return True if the result is a perfect square. However, the code didn't work, and the trace-back is flagging an error with the ‘is_square’ function. In truth however, the problem happened much further back in the code, the is_square function is not our problem. Arguably we have a problem with our addition function, for although it works, it also, thanks to operator overloading, works on strings and numbers alike. Instead of receiving an error at this juncture we instead send 'junk' to the is_square function. For example, if a and b are 10 and 6 then add(a, b) should return integer 16, not string 106. But we can go back even further in our analysis and ask Step8: Okay so what happened here? Well, the short answer is that strings in Python support indexing BUT strings are also an immutable data-type (google it). Thus, we can't just change the value at index-1 like we can with lists. The mistake here is assuming that because we can index into strings we can also change individual values but that is simply not the case. Oversights... Oversights are for the most part just bigger typo's. Oversights happen when you have a bit of code that is generally correct but you missed some minor detail. On the bright side, these issues are usually quick to fix. For example Step9: The code above intends to take a list and for each item multiply that item by itself, [1,2,3] ---> [1, 4, 9]. Although probably not the best approach this code is generally correct but for an easy to fix oversight; we can't call range on a list! What we actually meant to do was call range on the length of the list. 
Like so Step10: One fixed oversight later, the code works. Index Errors Index Errors occur when you are trying to index into an object but the index value you have chosen is outside the accepted range. As a quick recap, the accepted range is -length to length -1. For example, if my list has 10 items then I'll receive an index error if I try hand in a number outside the range -10 to 9. Step12: Of course, the above example is trivial and the fix is obvious. In practice however, your index errors are highly unlikely to be as simple as this. A much more realistic example would be something like creating a game where a character moves through a map; whenever he tries to move outside of the map you get an index error. For example
Python Code: 3ds = 100 # cannot start names with numbers. To fix: three_d_s = 100, or nintendo3ds = 100 list = [1,2,3] # "list" is a special keyword in Python, cannot use it as a name. To fix: a_list = [1,2,3] Explanation: Common Error Messages Hi guys, in this lecture we shall be looking at a couple of Python's error messages you are likely to see when writing scripts. We shall also cover a few fixes for said problems. Syntax Error Syntax Error's occur when you have written something that Python violates the grammatical rules of Python. Common causes are: Bad Names Typos (e.g missing colons, brackets, etc) Bad variable names... End of explanation lst = [1 2 3 4] # No comma's between items. Fix is: lst = [1, 2, 3, 4] 10 + 12) * (4 + 3) # missing brackets. Fix is: (10 + 12) * (4 + 3) Explanation: Typo's Typo's are probably the main cause of syntax errors, check that you haven't missed things like brackets, colons, comma's and so on. A few examples... End of explanation greeting = hello # Fix is: greeting = "hello" Explanation: Name Errors Name errors occur when something has not been defined. Common causes are: Typo's Confusing scope Forgetting quote marks when dealing with strings Confusing == with = Typo's When it comes to names, its easy to define it somewhere and then when you try to call it you misspell it or something. Also remember Python is case-sensitive (e.g "a" != "A"). In these cases the the fix is obvious, go in and correct the typo! Strings without quotes are Names.... As the title says, strings that are not encased in quotation marks are not strings. When Python sees "Hello" Python knows that is a string, when it sees Hello Python looks for a variable named Hello. End of explanation a == 8 # NameError; a is not defined! print(a) # The Fix: a = 8 print(a) Explanation: "==" is not "=" For experienced users, this is merely another type of typo. However, for beginners this error is sometimes more indicative of a more serious and fundamental misunderstanding. Essentially the error here would be confusing assignment ("=") with asking if a equals b ("=="). If you do not understand this crucial distinction I'd strongly recommend revisiting the lectures on assignment and logic. Here is a simple example: End of explanation def func(): t = 35 print(t) # Possible Solutions: # Fix 1: Change indentation level of the print statement. def func(): t = 35 print(t) # Fix 2: Save 't' in another namespace. def func(): t = 35 return t t = func() print(t) Explanation: Issues with Scoping As I've mentioned in previous lectures, Python works in code "blocks" and such blocks have their own space to play with. Within that space variables can be defined and those variables are not affected (or even known) by other parts of the code. Common fixes include changing indentation levels, or saving variables to other names spaces. For example: End of explanation 10 / 2 # works! "abc" / "de" # error! Explanation: Type Errors Suppose you have an object of Type ‘X’ (Int, str, ...) and some sort of operation ‘Y’ (multiplication, concatenation). The Type error happens when operation ‘Y’ is not compatible with object Type ‘X’. For example: A / B makes sense when A and B are floats/integers, but Python does not know what it is to divide a list by a list nor does it understand what you want to do when you try to divide the string "cat" with the set '{1, 2, 3}' or something. In such cases you get a type error. End of explanation # Building a calculator ! 
print("Welcome to my amazing calculation machine 1.0!", "Give me two numbers and I add them together and tell you if the result is a perfect square", sep="\n") a = input("please enter a number ") b = input("and another number ") def add(a, b): return a + b def is_square(x): import math return math.sqrt(x).isinteger() x = add(a, b) print(is_square(x)) Explanation: Common causes: Another part of the code is misbehaving! Misunderstanding properties of data-types and/or how thier methods work. Oversights... Problems elsewhere... If you receive a type error, in may cases the direct cause usually isn't the problem, rather, the problem happened much earlier and you are just finding out about it now. What I mean is, everyone knows you cannot divide strings by strings and so its unlikely you wrote a piece of code to do just that. Rather, some other bit of code returned strings instead of integers and that mistake gets passed on to the next function. For example... End of explanation a_list = [1,2,3,4] print(a_list[-1]) # so far so good. a_list[-1] = 99 # Seems legit. print(a_list) # And now with strings... a_string = "abcde" print(a_string[-1]) # still working... a_string[-1] = "zztop" # Oh noes! a Type Error print(a_list) Explanation: So what this code is trying to do is simple; the user enters two numbers(A, B) we add them up and then return True if the result is a perfect square. However, the code didn't work, and the trace-back is flagging an error with the ‘is_square’ function. In truth however, the problem happened much further back in the code, the is_square function is not our problem. Arguably we have a problem with our addition function, for although it works, it also, thanks to operator overloading, works on strings and numbers alike. Instead of receiving an error at this juncture we instead send 'junk' to the is_square function. For example, if a and b are 10 and 6 then add(a, b) should return integer 16, not string 106. But we can go back even further in our analysis and ask: "Why did our addition function receive strings as input in the first place?" The actual source of this error is forgetting that the input function returns strings, and we did not convert those strings to integers. This error then trickled all the way through the rest of the program until we finally receive a type error far removed from the actual problem. The solution: a = int(input(“{text...}”)) The moral of the story here is that when you receive type errors the source of the problem often isn't the bit of code that raised the error. I’d recommend writing print statements at different parts of the code to see if everything is giving the correct output. Misunderstanding Properties... Another source of Type errors occurs when you don't fully understand the properties of a particular data-type. Or maybe you misunderstand how a particular object method works. Or maybe you err because you don't understand how something is implemented within Python (at the low-level). The usual remedy for this sort of error is documentation and/or google. Here's a simple example: End of explanation l = [1,2,3] # n*n for each n in list... for n in range(l): l[n] *= l[n] # *= 2 is shorthand for l[n] = l[n]*2 Explanation: Okay so what happened here? Well, the short answer is that strings in Python support indexing BUT strings are also an immutable data-type (google it). Thus, we can't just change the value at index-1 like we can with lists. 
The mistake here is assuming that because we can index into strings we can also change individual values but that is simply not the case. Oversights... Oversights are for the most part just bigger typo's. Oversights happen when you have a bit of code that is generally correct but you missed some minor detail. On the bright side, these issues are usually quick to fix. For example: End of explanation l = [1,2,3] for n in range(len(l)): l[n] *= l[n] print(l) Explanation: The code above intends to take a list and for each item multiply that item by itself, [1,2,3] ---> [1, 4, 9]. Although probably not the best approach this code is generally correct but for an easy to fix oversight; we can't call range on a list! What we actually meant to do was call range on the length of the list. Like so: End of explanation lst = [0] * 10 print(lst[-10]) # Works! print(lst[-11]) # Fails Explanation: One fixed oversight later, the code works. Index Errors Index Errors occur when you are trying to index into an object but the index value you have chosen is outside the accepted range. As a quick recap, the accepted range is -length to length -1. For example, if my list has 10 items then I'll receive an index error if I try hand in a number outside the range -10 to 9. End of explanation def character_movement(x, y): where (x,y) is the position on a 2-d plane return [("start", (x, y)), ("left", (x -1, y)),("right", (x + 1, y)), ("up", (x, y - 1)), ("down", (x, y + 1))] the_map = [[0, 0, 0], [0, 0, 0], [0, 0, 0]] parrot_starting_position = (2, 2) print(character_movement(*parrot_starting_position)) # *args is "unpacking". Google it :) # moving "down" from position 2,2 is 2,3. But 2,3 is out of bounds! the_map[2][3] # IndexError! Explanation: Of course, the above example is trivial and the fix is obvious. In practice however, your index errors are highly unlikely to be as simple as this. A much more realistic example would be something like creating a game where a character moves through a map; whenever he tries to move outside of the map you get an index error. For example: End of explanation
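A natural follow-up, and my addition rather than part of the original lecture: how do we defend against this kind of index error? Either check the bounds before indexing, or catch the IndexError so that walking off the map fails gracefully.

# Option 1: bounds check before indexing (indexing order matches the_map[x][y] above).
def safe_lookup(a_map, x, y):
    if 0 <= x < len(a_map) and 0 <= y < len(a_map[x]):
        return a_map[x][y]
    return None  # off the map

the_map = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(safe_lookup(the_map, 2, 2))   # on the map -> 0
print(safe_lookup(the_map, 2, 3))   # off the map -> None, no IndexError

# Option 2: ask forgiveness instead of permission.
try:
    value = the_map[2][3]
except IndexError:
    value = None
print(value)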
2,438
Given the following text description, write Python code to implement the functionality described below step by step Description: Point collocation Point collection method is a broad term, as it covers multiple variation, but in a nutshell all consist of the following steps Step1: The number of Sobol samples to use at each order is arbitrary, but for compare, we select them to be the same as the Gauss nodes Step2: Evaluating model solver Like in the case of problem formulation again, evaluation is straight forward Step3: Select polynomial expansion Unlike pseudo spectral projection, the polynomial in point collocations are not required to be orthogonal. But stability wise, orthogonal polynomials have still been shown to work well. This can be achieved by using the chaospy.generate_expansion() function Step4: Solve the linear regression problem With all samples $Q_1, ..., Q_N$, model evaluations $U_1, ..., U_N$ and polynomial expansion $\Phi_1, ..., \Phi_M$, we can put everything together to solve the equations Step5: Descriptive statistics The expected value and variance is calculated as follows Step6: Error analysis It is hard to assess how well these models are doing from the final estimation alone. They look about the same. So to compare results, we do error analysis. To do so, we use the reference analytical solution and error function as defined in problem formulation. Step7: The analysis can be performed as follows
Python Code: from pseudo_spectral_projection import gauss_quads gauss_nodes = [nodes for nodes, _ in gauss_quads] Explanation: Point collocation Point collection method is a broad term, as it covers multiple variation, but in a nutshell all consist of the following steps: Generate samples $Q_1=(\alpha_1, \beta_1), \dots, Q_N=(\alpha_N, \beta_N)$ that corresponds to your uncertain parameters. Evaluate model solver $U_1=u(t, \alpha_1, \beta_1), \dots, U_N=u(t, \alpha_N, \beta_N)$ for each sample. Select a polynomial expansion $\Phi_1, \dots, \Phi_M$. Solve linear regression problem: $U_n = \sum_m c_m(t)\ \Phi_m(\alpha_n, \beta_n)$ with respect for $c_1, \dots, c_M$. Construct model approximation $u(t, \alpha, \beta) = \sum_m c_m(t)\ \Phi_n(\alpha, \beta)$ Perform model analysis on approximation $u(t, \alpha, \beta)$ as a proxy for the real model. Let us go through the steps in more detail. Generating samples Unlike both Monte Carlo integration and pseudo-spectral projection, point collocation method does not assume that the samples follows any particular form. Though traditionally they are selected to be random, quasi-random, nodes from quadrature integration, or a subset of the three. For this case, we select the sample to follow the Sobol samples from Monte Carlo integration, and optimal quadrature nodes from pseudo-spectral projection: End of explanation from monte_carlo_integration import sobol_samples sobol_nodes = [sobol_samples[:, :nodes.shape[1]] for nodes in gauss_nodes] from matplotlib import pyplot pyplot.rc("figure", figsize=[12, 4]) pyplot.subplot(121) pyplot.scatter(*gauss_nodes[4]) pyplot.title("Gauss quadrature nodes") pyplot.subplot(122) pyplot.scatter(*sobol_nodes[4]) pyplot.title("Sobol nodes") pyplot.show() Explanation: The number of Sobol samples to use at each order is arbitrary, but for compare, we select them to be the same as the Gauss nodes: End of explanation import numpy from problem_formulation import model_solver gauss_evals = [ numpy.array([model_solver(node) for node in nodes.T]) for nodes in gauss_nodes ] sobol_evals = [ numpy.array([model_solver(node) for node in nodes.T]) for nodes in sobol_nodes ] from problem_formulation import coordinates pyplot.subplot(121) pyplot.plot(coordinates, gauss_evals[4].T, alpha=0.3) pyplot.title("Gauss evaluations") pyplot.subplot(122) pyplot.plot(coordinates, sobol_evals[4].T, alpha=0.3) pyplot.title("Sobol evaluations") pyplot.show() Explanation: Evaluating model solver Like in the case of problem formulation again, evaluation is straight forward: End of explanation import chaospy from problem_formulation import joint expansions = [chaospy.generate_expansion(order, joint) for order in range(1, 10)] expansions[0].round(10) Explanation: Select polynomial expansion Unlike pseudo spectral projection, the polynomial in point collocations are not required to be orthogonal. But stability wise, orthogonal polynomials have still been shown to work well. 
This can be achieved by using the chaospy.generate_expansion() function: End of explanation gauss_model_approx = [ chaospy.fit_regression(expansion, samples, evals) for expansion, samples, evals in zip(expansions, gauss_nodes, gauss_evals) ] sobol_model_approx = [ chaospy.fit_regression(expansion, samples, evals) for expansion, samples, evals in zip(expansions, sobol_nodes, sobol_evals) ] pyplot.subplot(121) model_approx = gauss_model_approx[4] evals = model_approx(*gauss_nodes[1]) pyplot.plot(coordinates, evals, alpha=0.3) pyplot.title("Gaussian approximation") pyplot.subplot(122) model_approx = sobol_model_approx[1] evals = model_approx(*sobol_nodes[1]) pyplot.plot(coordinates, evals, alpha=0.3) pyplot.title("Sobol approximation") pyplot.show() Explanation: Solve the linear regression problem With all samples $Q_1, ..., Q_N$, model evaluations $U_1, ..., U_N$ and polynomial expansion $\Phi_1, ..., \Phi_M$, we can put everything together to solve the equations: $$ U_n = \sum_{m=1}^M c_m(t)\ \Phi_m(Q_n) \qquad n = 1, ..., N $$ with respect to the coefficients $c_1, ..., c_M$. This can be done using the helper function chaospy.fit_regression(): End of explanation expected = chaospy.E(gauss_model_approx[-2], joint) std = chaospy.Std(gauss_model_approx[-2], joint) expected[:4].round(4), std[:4].round(4) pyplot.rc("figure", figsize=[6, 4]) pyplot.xlabel("coordinates") pyplot.ylabel("model approximation") pyplot.fill_between( coordinates, expected-2*std, expected+2*std, alpha=0.3) pyplot.plot(coordinates, expected) pyplot.show() Explanation: Descriptive statistics The expected value and variance is calculated as follows: End of explanation from problem_formulation import error_in_mean, error_in_variance error_in_mean(expected), error_in_variance(std**2) Explanation: Error analysis It is hard to assess how well these models are doing from the final estimation alone. They look about the same. So to compare results, we do error analysis. To do so, we use the reference analytical solution and error function as defined in problem formulation. End of explanation sizes = [nodes.shape[1] for nodes in gauss_nodes] eps_gauss_mean = [ error_in_mean(chaospy.E(model, joint)) for model in gauss_model_approx ] eps_gauss_var = [ error_in_variance(chaospy.Var(model, joint)) for model in gauss_model_approx ] eps_sobol_mean = [ error_in_mean(chaospy.E(model, joint)) for model in sobol_model_approx ] eps_sobol_var = [ error_in_variance(chaospy.Var(model, joint)) for model in sobol_model_approx ] pyplot.rc("figure", figsize=[12, 4]) pyplot.subplot(121) pyplot.title("Error in mean") pyplot.loglog(sizes, eps_gauss_mean, "-", label="Gaussian") pyplot.loglog(sizes, eps_sobol_mean, "--", label="Sobol") pyplot.legend() pyplot.subplot(122) pyplot.title("Error in variance") pyplot.loglog(sizes, eps_gauss_var, "-", label="Gaussian") pyplot.loglog(sizes, eps_sobol_var, "--", label="Sobol") pyplot.show() Explanation: The analysis can be performed as follows: End of explanation
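One possible extension that is not part of the original example: once the polynomial approximation is available, chaospy can also estimate Sobol sensitivity indices, showing how much of the variance each of the two uncertain parameters is responsible for over time. The sketch below assumes gauss_model_approx, joint, coordinates and pyplot from above are still in scope and uses chaospy's Sens_m/Sens_t helpers.

# First- and total-order Sobol indices for the two uncertain parameters.
first_order = chaospy.Sens_m(gauss_model_approx[-2], joint)   # shape (2, len(coordinates))
total_order = chaospy.Sens_t(gauss_model_approx[-2], joint)

pyplot.rc("figure", figsize=[12, 4])
pyplot.subplot(121)
pyplot.title("First-order Sobol indices")
pyplot.plot(coordinates, first_order[0], label="parameter 1 (alpha)")
pyplot.plot(coordinates, first_order[1], label="parameter 2 (beta)")
pyplot.legend()
pyplot.subplot(122)
pyplot.title("Total-order Sobol indices")
pyplot.plot(coordinates, total_order[0], label="parameter 1 (alpha)")
pyplot.plot(coordinates, total_order[1], label="parameter 2 (beta)")
pyplot.legend()
pyplot.show()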
2,439
Given the following text description, write Python code to implement the functionality described below step by step Description: Using Named Entity Recognition and Classifiers to Extract Entities from Peer-Reviewed Journals The overwhelming amount of unstructured text data available today from traditional media sources as well as newer ones, like social media, provides a rich source of information if the data can be structured. Named entity extraction forms a core subtask to build knowledge from semi-structured and unstructured text sources<sup><a href="#fn1" id="ref1">1</a></sup>. Some of the first researchers working to extract information from unstructured texts recognized the importance of “units of information” like names, including person, organization and location names, and numeric expressions including time, date, money and percent expressions. They coined the term “Named Entity” in 1996 to represent these. Considering recent increases in computing power and decreases in the costs of data storage, data scientists and developers can build large knowledge bases that contain millions of entities and hundreds of millions of facts about them. These knowledge bases are key contributors to intelligent computer behavior<sup><a href="#fn2" id="ref2">2</a></sup>. Not surprisingly, named entity extraction operates at the core of several popular technologies such as smart assistants (Siri, Google Now), machine reading, and deep interpretation of natural language<sup><a href="#fn3" id="ref3">3</a></sup>. This post explores how to perform named entity extraction, formally known as “Named Entity Recognition and Classification (NERC). In addition, the article surveys open-source NERC tools that work with Python and compares the results obtained using them against hand-labeled data. The specific steps include Step1: <br>A total of 253 files exist in the directory. Opening one of these reveals that our data is in PDF format and it's semi-structured (follows journal article format with separate sections for "abstract" and "title"). While PDFs provide an easily readable presentation of data, they are extremely difficult to work with in data analysis. In your work, if you have an option to get to data before conversion to a PDF format, be sure to take that option.<br><br> Creating a Custom NLTK Corpus We used several Python tools to ingest our data including Step2: <br> We now have a semi-structured dataset in a format that we can query and analyze the different pieces of data. Let's see how many words (including stop words) we have in our entire corpus. <br><br> Step3: <br>The NLTK book has an excellent section on processing raw text and unicode issues. It provides a helpful discussion of some problems you may encounter. Using Regular Expressions to Extract Specific Sections <br> To begin our exploration of regular expressions (aka "regex"), it's important to point out some good resources for those new to the topic. An excellent resource may be found in Videos 1-3, Week 4, Getting and Cleaning Data, Data Science Specialization Track from Johns Hopkins University. Additional resources appear in the Appendix. As a simple example, let’s extract titles from the first 26 documents. Step4: <br>This code extracts the titles, but some author names get caught up in the extraction as well. 
For simplicity, let's focus on wrangling the data to use the NERC tools on two sections of the paper Step5: <br> The above code also makes use of the nltk.word_tokenize tool to create the "word per reference" statistic (takes time to run). Let's test the “references” extraction function and look at the output by obtaining the first 10 entries of the dictionary created by the function. This dictionary holds all the extracted data and various calculations. Step6: The tabulate module is a great tool to visualize descriptive outputs in table format. Step7: <br> Open Source NERC Tools Step8: <br>In this next block of code, we will apply the NLTK standard chunker, Stanford Named Entity Recognizer, and Polyglot extractor to our corpus. For each NERC tool, I created functions (available in the Appendix) to extract entities and return classes of objects in different lists. If you are following along, you should have run all the code blocks in the Appendix. If not, go there and do it now. The functions (in appendix) are Step9: <br>We pass our data, the “top” and “references” section of the two documents of interest, into the functions created with each NERC tool and build a nested dictionary of the extracted entities—author names, locations, and organization names. This code may take a bit of time to run (30 secs to a minute). <br><br> Step10: <br> We will focus specifically on the "persons" entity extractions from the “top” section of the documents to estimate performance. However, a similar exercise is possible with the extractions of “organizations” entity extractions or “locations” entity extractions too, as well as from the “references” section. To get a better look at how each NERC tool performed on the named person entities, we will use the Pandas dataframe.Pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. The dataframe provides a visual comparison of the extractions from each NERC tool and the hand-labeled extractions. Just a few lines of code accomplish the task Step11: <br> The above dataframe illustrates the mixed results from the NERC tools. NLTK Standard NERC appears to have extracted 3 false positives while the Stanford NERC missed 3 true positives and the Polyglot NERC extracted all but one true positive (partially extracted; returned first name only). Let's calculate some key performance metrics Step12: <br>Now let's pass our values into the function to calculate the performance metrics Step13: Note to Ben from Selma - I think there might be a mistake in the table for the Polyglot NERC. Missing a 1 in the lower left maybe? The basic metrics above reveal some quick takeaways about each tool based on the specific extraction task. The NLTK Standard Chunker has perfect accuracy and recall but lacks in precision. It successfully extracted all the authors for the document, but also extracted 3 false entities. NLTK's chunker would serve well in an entity extraction pipeline where the data scientist is concerned with identifying all possible entities The Stanford NER tool is very precise (specificity vs sensitivity). The entities it extracts were 100% accurate, but it failed to identify half of the true entities. The Stanford NER tool would be best used when a data scientist wanted to extract only those entities that have a high likelihood of being named entities, suggesting an unconscious acceptance of leaving behind some information. 
The Polyglot Named Entity Recognizer identified five named entities exactly, but only partially identified the sixth (first name returned only). The data scientist looking for a balance between sensitivity and specificity would likely use Polyglot, as it will balance extracting the 100% accurate entities and those which may not necessarily be a named entity. A Simple Ensemble Classifier In our discussion above, we notice the varying levels of performance by the different NERC tools. Using the idea that combining the outputs from various classifiers in an ensemble method can improve the reliability of classifications, we can improve the performance of our named entity extractor tools by creating an ensemble classifier. Each NERC tool had at least 3 named persons that were true positives, but no two NERC tools had the same false positive or false negative. Our ensemble classifier "voting" rule is very simple Step14: To get a visual comparison of the extractions for each tool and the ensemble set side by side, we return to our dataframe from earlier. In this case, we use the concat operation in pandas to append the new ensemble set to the dataframe. Our code to accomplish the task is Step15: And we get a look at the performance metrics to see if we push our scores up in all categories Step16: <br>Exactly as expected, we see improved performance across all performance metric scores and in the end get a perfect extraction of all named persons from this document. Before we go ANY further, the idea of moving from "okay" to "perfect" is unrealistic. Moreover, this is a very small sample and only intended to show the application of an ensemble method. Applying this method to other sections of the journal articles will not lead to a perfect extraction, but it will indeed improve the performance of the extraction considerably. Getting Your Data in Open File Format A good rule for any data analytics project is to store the results or output in an open file format. Why? An open file format is a published specification for storing digital data, usually maintained by a standards organization, and which can be used and implemented by anyone. I selected JavaScript Object Notation(JSON), which is an open standard format that uses human-readable text to transmit data objects consisting of attribute–value pairs. We take our list of persons from the ensemble results, store it as a Python dictionary, and then convert it to JSON. Alternatively, we could use the dumps function from the json module to return dictionaries, and ensure we get the open file format at every step. In this way, other data scientists or users could pick and choose what portions of code to use in their projects. Here is our code to accomplish the task Step17: Conclusion We covered the entire data science pipeline in a natural language processing job that compared the performance of three different NERC tools. A core task in this pipeline involved ingesting plaintext into an NLTK corpus so that we could easily retrieve and manipulate the corpus. Finally, we used the results from the various NERC tools to create a simplistic ensemble classifier that improved the overall performance. The techniques in this post can be applied to other domains, larger datasets or any other corpus. Everything I used in this post (with the exception of the Regular expression resource from Coursera) was not taught in a classroom or structured learning experience. 
It all came from online resources, posts from others, and books (that includes learning how to code in Python). If you have the motivation, you can do it. Throughout the article, there are hyperlinks to resources and reading materials for reference, but here is a central list Step18: Function to pull from top section of document Step19: Function to build list of named entity classes from Standard NLTK Chunker Step20: Function to get lists of entities from Stanford NER Step21: Function to pull Keywords section only Step22: Function to pull Abstract only
Python Code: ############################################## # Administrative code: Import what we need ############################################## import os import time from os import walk ############################################### # Set the Path ############################################## path = os.path.abspath(os.getcwd()) # Path to directory where KDD files are TESTDIR = os.path.normpath(os.path.join(os.path.expanduser("~"),"Desktop","KDD_15","docs")) # Establish an empty list to append filenames as we iterate over the directory with filenames files = [] ############################################### # Code to iterate over files in directory ############################################## '''Iterate over the directory of filenames and add to list. Inspection shows our target filenames begin with 'p' and end with 'pdf' ''' for dirName, subdirList, fileList in os.walk(TESTDIR): for fileName in fileList: if fileName.startswith('p') and fileName.endswith('.pdf'): files.append(fileName) end_time = time.time() ############################################### # Output ###############################################print print len(files) # Print the number of files print #print '[%s]' % ', '.join(map(str, files)) # print the list of filenames Explanation: Using Named Entity Recognition and Classifiers to Extract Entities from Peer-Reviewed Journals The overwhelming amount of unstructured text data available today from traditional media sources as well as newer ones, like social media, provides a rich source of information if the data can be structured. Named entity extraction forms a core subtask to build knowledge from semi-structured and unstructured text sources<sup><a href="#fn1" id="ref1">1</a></sup>. Some of the first researchers working to extract information from unstructured texts recognized the importance of “units of information” like names, including person, organization and location names, and numeric expressions including time, date, money and percent expressions. They coined the term “Named Entity” in 1996 to represent these. Considering recent increases in computing power and decreases in the costs of data storage, data scientists and developers can build large knowledge bases that contain millions of entities and hundreds of millions of facts about them. These knowledge bases are key contributors to intelligent computer behavior<sup><a href="#fn2" id="ref2">2</a></sup>. Not surprisingly, named entity extraction operates at the core of several popular technologies such as smart assistants (Siri, Google Now), machine reading, and deep interpretation of natural language<sup><a href="#fn3" id="ref3">3</a></sup>. This post explores how to perform named entity extraction, formally known as “Named Entity Recognition and Classification (NERC). In addition, the article surveys open-source NERC tools that work with Python and compares the results obtained using them against hand-labeled data. The specific steps include: preparing semi-structured natural language data for ingestion using regular expressions; creating a custom corpus in the Natural Language Toolkit; using a suite of open source NERC tools to extract entities and store them in JSON format; comparing the performance of the NERC tools, and implementing a simplistic ensemble classifier. The information extraction concepts and tools in this article constitute a first step in the overall process of structuring unstructured data. 
They can be used to perform more complex natural language processing to derive unique insights from large collections of unstructured data. <br> Environment Set Up To recreate the work in this article, use Anaconda, which is an easy-to-install, free, enterprise-ready Python distribution for data analytics, processing, and scientific computing (reference). With a few lines of code, you can have all the dependencies used in this post with the exception of one function (email extractor). Install Anaconda Download the namedentity_requirements.yml (remember where you saved it on your computer) Follow the "Use Environment from file" instructions on Anaconda's website. If you use an alternative method to set up a virtual environment, make sure you have all the files installed from the yml file. The one dependency not in the yml file is the email extractor. Cut and paste the function from this website, save it to a .py file, and make sure it is in your sys.path or environment path. If you are running this as an iPython notebook, stop here. Go to the Appendix and run all of the blocks of code before continuing. Data Source The proceedings from the Knowledge Discovery and Data Mining (KDD) conferences in New York City (2014) and Sydney, Australia (2015) serve as our source of unstructured text and contain over 230 peer reviewed journal articles and keynote speaker abstracts on data mining, knowledge discovery, big data, data science and their applications. The full conference proceedings can be purchased for $60 at the Association for Computing Machinery's Digital Library (includes ACM membership). This post will work with a dataset that is equivalent to the combined conference proceedings and takes the semi-structured data that is in the form of PDF journal articles and abstracts, extracts text from these files, and adds structure to the data to facilitate follow-on analysis. Interested parties looking for a free option can use the beautifulsoup and requestslibraries to scrape the ACM website for KDD 2015 conference data. Initial Data Exploration Visual inspection reveals that the target filenames begin with a “p” and end with “pdf.” As a first step, we determine the number of files and the naming conventions by using a loop to iterate over the files in the directory and printing out the filenames. Each filename also gets saved to a list, and the length of the list tells us the total number of files in the dataset. 
End of explanation ############################################### # Importing what we need ############################################### import string import unicodedata import subprocess import nltk import os, os.path import re ############################################### # Create the directory we will write the .txt files to after stripping text ############################################### # path where KDD journal files exist on disk or cloud drive access corpuspath = os.path.normpath(os.path.expanduser('~/Desktop/KDD_corpus/')) if not os.path.exists(corpuspath): os.mkdir(corpuspath) ############################################### # Core code to iterate over files in the directory ############################################### # We start from the code to iterate over the files %timeit for dirName, subdirList, fileList in os.walk(TESTDIR): for fileName in fileList: if fileName.startswith('p') and fileName.endswith('.pdf'): if os.path.exists(os.path.normpath(os.path.join(corpuspath,fileName.split(".")[0]+".txt"))): pass else: ############################################### # This code strips the text from the PDFs ############################################### try: document = filter(lambda x: x in string.printable, unicodedata.normalize('NFKD', (unicode(subprocess.check_output(['pdf2txt.py',str(os.path.normpath(os.path.join(TESTDIR,fileName)))]), errors='ignore'))).encode('ascii','ignore').decode('unicode_escape').encode('ascii','ignore')) except UnicodeDecodeError: document = unicodedata.normalize('NFKD', unicode(subprocess.check_output(['pdf2txt.py',str(os.path.normpath(os.path.join(TESTDIR,fileName)))]),errors='ignore')).encode('ascii','ignore') if len(document)<300: pass else: # used this for assistance http://stackoverflow.com/questions/2967194/open-in-python-does-not-create-a-file-if-it-doesnt-exist if not os.path.exists(os.path.normpath(os.path.join(corpuspath,fileName.split(".")[0]+".txt"))): file = open(os.path.normpath(os.path.join(corpuspath,fileName.split(".")[0]+".txt")), 'w+') file.write(document) else: pass # This code builds our custom corpus. The corpus path is a path to where we saved all of our .txt files of stripped text kddcorpus= nltk.corpus.PlaintextCorpusReader(corpuspath, '.*\.txt') Explanation: <br>A total of 253 files exist in the directory. Opening one of these reveals that our data is in PDF format and it's semi-structured (follows journal article format with separate sections for "abstract" and "title"). While PDFs provide an easily readable presentation of data, they are extremely difficult to work with in data analysis. In your work, if you have an option to get to data before conversion to a PDF format, be sure to take that option.<br><br> Creating a Custom NLTK Corpus We used several Python tools to ingest our data including: pdfminer, subprocess, nltk, string, and unicodedata. Pdfminer contains a command line tool called “pdf2txt.py” that extracts text contents from a PDF file (visit the pdfminer homepage for download instructions). Subprocess, a standard library module, allows us to invoke the “pdf2txt.py” command line tool within our code. The Natural Language Tool Kit, or NLTK, serves as one of Python’s leading platforms to analyze natural language data. The string module provides variable substitutions and value formatting to strip non-printable characters from the output of the text extracted from our journal article PDFs. Finally, the unicodedata library allows Latin Unicode characters to degrade gracefully into ASCII. 
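The raw token count only tells part of the story. The short extension below (my addition, not in the original post) also reports the vocabulary size and the most frequent tokens, which, as expected, are dominated by stop words and punctuation; it can take a little while on the full corpus.

# Vocabulary size and most common tokens across the whole custom corpus.
from collections import Counter
token_counts = Counter(w.lower() for fileid in kddcorpus.fileids() for w in kddcorpus.words(fileid))
print(len(token_counts))            # number of distinct tokens
print(token_counts.most_common(10)) # mostly stop words and punctuation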
This is an important feature because some Unicode characters won’t extract nicely. Our task begins by iterating over the files in the directory with names that begin with 'p' and end with 'pdf.' This time, however, we will strip the text from the pdf file, write the .txt file to a newly created directory, and use the fileName variable to name the files we write to disk. Keep in mind that this task may take a few minutes depending on the processing power of your computer. Next, we use the simple instructions from Section 1.9, Chapter 2 of NLTK's Book to build a custom corpus. Having our target documents loaded as an NLTK corpus brings the power of NLTK to our analysis goals. Here's the code to accomplish what's discussed above: End of explanation # Mapping, setting count to zero for start wordcount = 0 #Iterating over list and files and counting length for fileid in kddcorpus.fileids(): wordcount += len(kddcorpus.words(fileid)) print wordcount Explanation: <br> We now have a semi-structured dataset in a format that we can query and analyze the different pieces of data. Let's see how many words (including stop words) we have in our entire corpus. <br><br> End of explanation # Using metacharacters vice literal matches p=re.compile('^(.*)([\s]){2}[A-z]+[\s]+[\s]?.+') for fileid in kddcorpus.fileids()[:25]: print re.search('^(.*)[\s]+[\s]?(.*)?',kddcorpus.raw(fileid)).group(1).strip()+" "+re.search('^(.*)[\s]+[\s]?(.*)?',kddcorpus.raw(fileid)).group(2).strip() Explanation: <br>The NLTK book has an excellent section on processing raw text and unicode issues. It provides a helpful discussion of some problems you may encounter. Using Regular Expressions to Extract Specific Sections <br> To begin our exploration of regular expressions (aka "regex"), it's important to point out some good resources for those new to the topic. An excellent resource may be found in Videos 1-3, Week 4, Getting and Cleaning Data, Data Science Specialization Track from Johns Hopkins University. Additional resources appear in the Appendix. As a simple example, let’s extract titles from the first 26 documents. End of explanation # Code to pull the references section only, store a character count, number of references, and "word per reference" calculation def refpull(docnum=None,section='references',full = False): # Establish an empty dictionary to hold values ans={} # Establish an empty list to hold document ids that don't make the cut (i.e. missing reference section or different format) # This comes in handy when you are trying to improve your code to catch outliers failids = [] section = section.lower() # Admin code to set default values and raise an exception if there's human error on input if docnum is None and full == False: raise BaseException("Enter target file to extract data from") if docnum is None and full == True: # Setting the target document and the text we will extract from text=kddcorpus.raw(docnum) # This first condtional is for pulling the target section for ALL documents in the corpus if full == True: # Iterate over the corpus to get the id; this is possible from loading our docs into a custom NLTK corpus for fileid in kddcorpus.fileids(): text = kddcorpus.raw(fileid) # These lines of code build our regular expression. 
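            # (Added note) For the references section, the pattern assembled just below is effectively
            #     (?<=REFERENCES)(.+)
            # i.e. a positive lookbehind that captures everything after the literal heading
            # "REFERENCES" in the whitespace-normalized text of the article.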
# In the other functions for abstract or keywords, you see how I use this technique to create different regex arugments if section == "references": section1=["REFERENCES"] # Just in case, making sure our target string is empty before we pass data into it; just a check target = "" #We now build our lists iteratively to build our regex for sect in section1: # We embed exceptions to remove the possibility of our code stopping; we pass failed passes into a list try: # our machine built regex part1= "(?<="+sect+")(.+)" p=re.compile(part1) target=p.search(re.sub('[\s]'," ",text)).group(1) # Conditoin to make sure we don't get any empty string if len(target) > 50: # calculate the number of references in a journal; finds digits between [] in references section only try: refnum = len(re.findall('\[(\d){1,3}\]',target))+1 except: print "This file does not appear to have a references section" pass #These are all our values; we build a nested dictonary and store the calculated values ans[str(fileid)]={} ans[str(fileid)]["references"]=target.strip() ans[str(fileid)]["charcount"]=len(target) ans[str(fileid)]["refcount"]= refnum ans[str(fileid)]["wordperRef"]=round(float(len(nltk.word_tokenize(text)))/float(refnum)) #print [fileid,len(target),len(text), refnum, len(nltk.word_tokenize(text))/refnum] break else: pass except AttributeError: failids.append(fileid) pass return ans return failids # This is to perform the same operations on just one document; same functionality as above. else: ans = {} failids=[] text = kddcorpus.raw(docnum) if section == "references": section1=["REFERENCES"] target = "" for sect in section1: try: part1= "(?<="+sect+")(.+)" p=re.compile(part1) target=p.search(re.sub('[\s]'," ",text)).group(1) if len(target) > 50: # calculate the number of references in a journal; finds digits between [] in references section only try: refnum = len(re.findall('\[(\d){1,3}\]',target))+1 except: print "This file does not appear to have a references section" pass ans[str(docnum)]={} ans[str(docnum)]["references"]=target.strip() ans[str(docnum)]["charcount"]=len(target) ans[str(docnum)]["refcount"]= refnum ans[str(docnum)]["wordperRef"]=float(len(nltk.word_tokenize(text)))/float(refnum) #print [fileid,len(target),len(text), refnum, len(nltk.word_tokenize(text))/refnum] break else: pass except AttributeError: failids.append(docnum) pass return ans return failids Explanation: <br>This code extracts the titles, but some author names get caught up in the extraction as well. For simplicity, let's focus on wrangling the data to use the NERC tools on two sections of the paper: the “top” section and the “references” section. The “top” section includes the names of authors and schools. This section represents all of the text above the article’s abstract. The “references” section appears at the end of the article. The regex tools of choice to extract sections are the positive lookbehind and positive lookahead expressions. We build two functions designed to extract the “top” and “references” sections of each document. First a few words about the data. When working with natural language, one should always be prepared to deal with irregularities in the data set. This corpus is no exception. It comes from a top-notch data mining organization, but human error and a lack of standardization makes its way into the picture. 
For example, in one paper the header section is entitled “Categories and Subject Descriptors,” while in another the title is “Categories & Subject Descriptors.” While that may seem like a small difference, these types of differences cause significant problems. There are also some documents that will be missing sections altogether, i.e. keynote speaker documents do not contain a “references” section. When encountering similar issues in your work, you must decide whether to account for these differences or ignore them. I worked to include as much of the 253-document corpus as possible. In addition to extracting the relevant sections of the documents, our two functions will obtain a character count for each section, extract emails, count the number of references and store that value, calculate a word per reference count, and store all the above data as a nested dictionary with filenames as the key. For simplicity, we show below the code to extract the “references” section and include the function for extracting the “top” section in the Appendix. <br> End of explanation # call our function, setting "full=True" extracts ALL references in corpus test = refpull(full=True) # To get a quick glimpse, I use the example from this page: http://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict import itertools import collections man = collections.OrderedDict(test) x = itertools.islice(man.items(), 0, 10) Explanation: <br> The above code also makes use of the nltk.word_tokenize tool to create the "word per reference" statistic (takes time to run). Let's test the “references” extraction function and look at the output by obtaining the first 10 entries of the dictionary created by the function. This dictionary holds all the extracted data and various calculations. End of explanation from tabulate import tabulate # A quick list comprehension to follow the example on the tabulate pypi page table = [[key,value['charcount'],value['refcount'], value['wordperRef']] for key,value in x] # print the pretty table; we invoke the "header" argument and assign custom header!!!! print tabulate(table,headers=["filename","Character Count", "Number of references","Words per Reference"]) Explanation: The tabulate module is a great tool to visualize descriptive outputs in table format. End of explanation # We need the top and references sections from p19.txt and p29.txt p19={'top': toppull("p19.txt")['p19.txt']['top'], 'references':refpull("p19.txt")['p19.txt']['references']} p29={'top': toppull("p29.txt")['p29.txt']['top'], 'references':refpull("p29.txt")['p29.txt']['references']} Explanation: <br> Open Source NERC Tools: NLTK, Stanford NER and Polyglot Now that we have a method to obtain the corpus from the “top” and “references” sections of each article in the dataset, we are ready to perform the named entity extractions. In this post, we examine three popular, open source NERC tools. The tools are NLTK, Stanford NER, and Polyglot. A brief description of each follows. NLTK has a chunk package that uses NLTK’s recommended named entity chunker to chunk the given list of tagged tokens. A string is tokenized and tagged with parts of speech (POS) tags. The NLTK chunker then identifies non-overlapping groups and assigns them to an entity class. You can read more about NLTK's chunking capabilities in the NLTK book. 
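As a minimal, hypothetical illustration of what that chunker returns (the sentence is invented for this example and assumes the standard NLTK models and data are installed):
import nltk
sent = "Tim Althoff is a researcher at Stanford University in California."
print nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent)))
# subtrees come back labeled PERSON, ORGANIZATION, GPE, etc.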
Stanford's Named Entity Recognizer, often called Stanford NER, is a Java implementation of linear chain Conditional Random Field (CRF) sequence models functioning as a Named Entity Recognizer. Named Entity Recognition (NER) labels sequences of words in a text that are the names of things, such as person and company names, or gene and protein names. NLTK contains an interface to Stanford NER written by Nitin Madnani. Details for using the Stanford NER tool are on the NLTK page and the required jar files can be downloaded here. Polyglot is a natural language pipeline that supports massive multilingual applications. It supports tokenization in 165 languages, language detection in 196 languages, named entity recognition in 40 languages, part of speech tagging in 16 languages, sentiment analysis in 136 languages, word embeddings in 137 languages, morphological analysis in 135 languages, and transliteration in 69 languages. It is a powerhouse tool for natural language processing. We will use its named entity recognition feature for the English language in this exercise. Polyglot is available via pypi. We can now test how well these open source NERC tools extract entities from the "top" and "references" sections of our corpus. For two documents, I hand labeled authors, organizations, and locations from the "top" section of the article (the section before the abstract) and the list of all authors from the "references" section. I also created a combined list of the authors, joining the lists from the "top" and "references" sections. Hand labeling is a time-consuming and tedious process. For just these two documents, this involved 295 cut-and-pastes of names or organizations. The annotated list appears in the Appendix. An easy test for the accuracy of a NERC tool is to compare the entities extracted by the tools to the hand-labeled extractions. Before beginning, we take advantage of the NLTK functionality to obtain the "top" and "references" sections of the two documents used for the hand labeling: End of explanation # We need the top and references sections from p19.txt and p29.txt p19={'top': toppull("p19.txt")['p19.txt']['top'], 'references':refpull("p19.txt")['p19.txt']['references']} p29={'top': toppull("p29.txt")['p29.txt']['top'], 'references':refpull("p29.txt")['p29.txt']['references']} Explanation: <br>In this next block of code, we will apply the NLTK standard chunker, Stanford Named Entity Recognizer, and Polyglot extractor to our corpus.
For each NERC tool, I created functions (available in the Appendix) to extract entities and return classes of objects in different lists. If you are following along, you should have run all the code blocks in the Appendix. If not, go there and do it now. The functions (in appendix) are: nltktreelist - NLTK Standard Chunker get_continuous_chunks - Stanford Named Entity Recognizer extraction - Polyglot Extraction tool For illustration, the Polyglot Extraction tool function, extraction, appears below:<br><br> End of explanation ############################################### # NLTK Standard Chunker ############################################### nltkstandard_p19ents = {'top': nltktreelist(p19['top']),'references': nltktreelist(p19['references'])} nltkstandard_p29ents = {'top': nltktreelist(p29['top']),'references': nltktreelist(p29['references'])} ############################################### # Stanford NERC Tool ################################################ from nltk.tag import StanfordNERTagger, StanfordPOSTagger stner = StanfordNERTagger('/Users/linwood/stanford-corenlp-full/classifiers/english.muc.7class.distsim.crf.ser.gz', '/Users/linwood/stanford-corenlp-full/stanford-corenlp-3.5.2.jar', encoding='utf-8') stpos = StanfordPOSTagger('/Users/linwood/stanford-postagger-full/models/english-bidirectional-distsim.tagger','/Users/linwood/stanford-postagger-full/stanford-postagger.jar') stan_p19ents = {'top': get_continuous_chunks(p19['top']), 'references': get_continuous_chunks(p19['references'])} stan_p29ents = {'top': get_continuous_chunks(p29['top']), 'references': get_continuous_chunks(p29['references'])} ############################################### # Polyglot NERC Tool ############################################### poly_p19ents = {'top': extraction(p19['top']), 'references': extraction(p19['references'])} poly_p29ents = {'top': extraction(p29['top']), 'references': extraction(p29['references'])} Explanation: <br>We pass our data, the “top” and “references” section of the two documents of interest, into the functions created with each NERC tool and build a nested dictionary of the extracted entities—author names, locations, and organization names. This code may take a bit of time to run (30 secs to a minute). <br><br> End of explanation ################################################################# # Administrative code, importing necessary library or module ################################################################# import pandas as pd ################################################################# # Create pandas series for each NERC tool entity extraction group ################################################################# df1 = pd.Series(poly_p19ents['top']['persons'], index=None, dtype=None, name='Polyglot NERC Authors', copy=False, fastpath=False) df2=pd.Series(stan_p19ents['top']['persons'], index=None, dtype=None, name='Stanford NERC Authors', copy=False, fastpath=False) df3=pd.Series(nltkstandard_p19ents['top']['persons'], index=None, dtype=None, name='NLTKStandard NERC Authors', copy=False, fastpath=False) df4 = pd.Series(p19pdf_authors, index=None, dtype=None, name='Hand-labeled True Authors', copy=False, fastpath=False) met = pd.concat([df4,df3,df2,df1], axis=1).fillna('') met Explanation: <br> We will focus specifically on the "persons" entity extractions from the “top” section of the documents to estimate performance. 
However, a similar exercise is possible with the extractions of “organizations” entity extractions or “locations” entity extractions too, as well as from the “references” section. To get a better look at how each NERC tool performed on the named person entities, we will use the Pandas dataframe.Pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. The dataframe provides a visual comparison of the extractions from each NERC tool and the hand-labeled extractions. Just a few lines of code accomplish the task: End of explanation # Calculations and logic from http://www.kdnuggets.com/faq/precision-recall.html def metrics(truth,run): truth = truth run = run TP = float(len(set(run) & set(truth))) if float(len(run)) >= float(TP): FP = len(run) - TP else: FP = TP - len(run) TN = 0 if len(truth) >= len(run): FN = len(truth) - len(run) else: FN = 0 accuracy = (float(TP)+float(TN))/float(len(truth)) recall = (float(TP))/float(len(truth)) precision = float(TP)/(float(FP)+float(TP)) print "The accuracy is %r" % accuracy print "The recall is %r" % recall print "The precision is %r" % precision d = {'Predicted Negative': [TN,FN], 'Predicted Positive': [FP,TP]} metricsdf = pd.DataFrame(d, index=['Negative Cases','Positive Cases']) return metricsdf Explanation: <br> The above dataframe illustrates the mixed results from the NERC tools. NLTK Standard NERC appears to have extracted 3 false positives while the Stanford NERC missed 3 true positives and the Polyglot NERC extracted all but one true positive (partially extracted; returned first name only). Let's calculate some key performance metrics:<br> 1. TN or True Negative: case was negative and predicted negative <br> 2. TP or True Positive: case was positive and predicted positive <br> 3. FN or False Negative: case was positive but predicted negative <br> 4. FP or False Positive: case was negative but predicted positive<br> The following function calculates the above metrics for the three NERC tools: End of explanation print print str1 = "NLTK Standard NERC Tool Metrics" print str1.center(40, ' ') print print metrics(p19pdf_authors,nltkstandard_p19ents['top']['persons']) print print str2 = "Stanford NERC Tool Metrics" print str2.center(40, ' ') print print metrics(p19pdf_authors, stan_p19ents['top']['persons']) print print str3 = "Polyglot NERC Tool Metrics" print str3.center(40, ' ') print print metrics(p19pdf_authors,poly_p19ents['top']['persons']) Explanation: <br>Now let's pass our values into the function to calculate the performance metrics:<br><br> End of explanation # Create intersection of true authors from NLTK standard output a =set(sorted(nltkstandard_p19ents['top']['persons'])) & set(p19pdf_authors) # Create intersection of true authors from Stanford NER output b =set(sorted(stan_p19ents['top']['persons'])) & set(p19pdf_authors) # Create intersection of true authors from Polyglot output c = set(sorted(poly_p19ents['top']['persons'])) & set(p19pdf_authors) # Create union of all true positives from each NERC output (a.union(b)).union(c) Explanation: Note to Ben from Selma - I think there might be a mistake in the table for the Polyglot NERC. Missing a 1 in the lower left maybe? The basic metrics above reveal some quick takeaways about each tool based on the specific extraction task. The NLTK Standard Chunker has perfect accuracy and recall but lacks in precision. 
It successfully extracted all the authors for the document, but also extracted 3 false entities. NLTK's chunker would serve well in an entity extraction pipeline where the data scientist is concerned with identifying all possible entities The Stanford NER tool is very precise (specificity vs sensitivity). The entities it extracts were 100% accurate, but it failed to identify half of the true entities. The Stanford NER tool would be best used when a data scientist wanted to extract only those entities that have a high likelihood of being named entities, suggesting an unconscious acceptance of leaving behind some information. The Polyglot Named Entity Recognizer identified five named entities exactly, but only partially identified the sixth (first name returned only). The data scientist looking for a balance between sensitivity and specificity would likely use Polyglot, as it will balance extracting the 100% accurate entities and those which may not necessarily be a named entity. A Simple Ensemble Classifier In our discussion above, we notice the varying levels of performance by the different NERC tools. Using the idea that combining the outputs from various classifiers in an ensemble method can improve the reliability of classifications, we can improve the performance of our named entity extractor tools by creating an ensemble classifier. Each NERC tool had at least 3 named persons that were true positives, but no two NERC tools had the same false positive or false negative. Our ensemble classifier "voting" rule is very simple: “Return all named entities that exist in at least two of the true positive named entity result sets from our NERC tools. We implement this rule using the set module. We first do an intersection operation of the NERC results vs the hand labeled entities to get our "true positive" set. Here is our code to accomplish the task: End of explanation dfensemble = pd.Series(list((a.union(b)).union(c)), index=None, dtype=None, name='Ensemble Method Authors', copy=False, fastpath=False) met = pd.concat([df4,dfensemble,df3,df2,df1], axis=1).fillna('') met Explanation: To get a visual comparison of the extractions for each tool and the ensemble set side by side, we return to our dataframe from earlier. In this case, we use the concat operation in pandas to append the new ensemble set to the dataframe. Our code to accomplish the task is: End of explanation print print str = "Ensemble NERC Metrics" print str.center(40, ' ') print print metrics(p19pdf_authors,list((a.union(b)).union(c))) Explanation: And we get a look at the performance metrics to see if we push our scores up in all categories: End of explanation import json # Add ensemble results for author to the nested python dictionary p19['authors']= list((a.union(b)).union(c)) # covert nested dictionary to json for open data storage # json can be stored in mongodb or any other disk store output = json.dumps(p19, ensure_ascii=False,indent=3) # print out the authors section we just created in our json print json.dumps(json.loads(output)['authors'],indent=3) # uncomment to see full json output #print json.dumps((json.loads(output)),indent=3) Explanation: <br>Exactly as expected, we see improved performance across all performance metric scores and in the end get a perfect extraction of all named persons from this document. Before we go ANY further, the idea of moving from "okay" to "perfect" is unrealistic. Moreover, this is a very small sample and only intended to show the application of an ensemble method. 
Applying this method to other sections of the journal articles will not lead to a perfect extraction, but it will indeed improve the performance of the extraction considerably. Getting Your Data in Open File Format A good rule for any data analytics project is to store the results or output in an open file format. Why? An open file format is a published specification for storing digital data, usually maintained by a standards organization, and which can be used and implemented by anyone. I selected JavaScript Object Notation(JSON), which is an open standard format that uses human-readable text to transmit data objects consisting of attribute–value pairs. We take our list of persons from the ensemble results, store it as a Python dictionary, and then convert it to JSON. Alternatively, we could use the dumps function from the json module to return dictionaries, and ensure we get the open file format at every step. In this way, other data scientists or users could pick and choose what portions of code to use in their projects. Here is our code to accomplish the task: End of explanation p19pdf_authors=['Tim Althoff','Xin Luna Dong','Kevin Murphy','Safa Alai','Van Dang','Wei Zhang'] p19pdf_author_organizations=['Computer Science Department','Stanford University','Google'] p19pdf_author_locations=['Stanford, CA','1600 Amphitheatre Parkway, Mountain View, CA 94043','Mountain View'] p19pdf_references_authors =['A. Ahmed', 'C. H. Teo', 'S. Vishwanathan','A. Smola','J. Allan', 'R. Gupta', 'V. Khandelwal', 'D. Graus', 'M.-H. Peetz', 'D. Odijk', 'O. de Rooij', 'M. de Rijke','T. Huet', 'J. Biega', 'F. M. Suchanek','H. Ji', 'T. Cassidy', 'Q. Li','S. Tamang', 'A. Kannan', 'S. Baker', 'K. Ramnath', 'J. Fiss', 'D. Lin', 'L. Vanderwende', 'R. Ansary', 'A. Kapoor', 'Q. Ke', 'M. Uyttendaele', 'S. M. Katz','A. Krause','D. Golovin','J. Leskovec', 'A. Krause', 'C. Guestrin', 'C. Faloutsos', 'J. VanBriesen','N. Glance','J. Li','C. Cardie','J. Li','C. Cardie','C.-Y. Lin','H. Lin','J. A. Bilmes' 'X. Ling','D. S. Weld', 'A. Mazeika', 'T. Tylenda','G. Weikum','M. Minoux', 'G. L. Nemhauser', 'L. A. Wolsey', 'M. L. Fisher','R. Qian','D. Shahaf', 'C. Guestrin','E. Horvitz','T. Althoff', 'X. L. Dong', 'K. Murphy', 'S. Alai', 'V. Dang','W. Zhang','R. A. Baeza-Yates', 'B. Ribeiro-Neto', 'D. Shahaf', 'J. Yang', 'C. Suen', 'J. Jacobs', 'H. Wang', 'J. Leskovec', 'W. Shen', 'J. Wang', 'J. Han','D. Bamman', 'N. Smith','K. Bollacker', 'C. Evans', 'P. Paritosh', 'T. Sturge', 'J. Taylor', 'R. Sipos', 'A. Swaminathan', 'P. Shivaswamy', 'T. Joachims','K. Sprck Jones','G. Calinescu', 'C. Chekuri', 'M. Pl','J. Vondrk', 'F. M. Suchanek', 'G. Kasneci','G. Weikum', 'J. Carbonell' ,'J. Goldstein','B. Carterette', 'P. N. Bennett', 'D. M. Chickering', 'S. T. Dumais','A. Dasgupta', 'R. Kumar','S. Ravi','Q. X. Do', 'W. Lu', 'D. Roth','X. Dong', 'E. Gabrilovich', 'G. Heitz', 'W. Horn', 'N. Lao', 'K. Murphy', 'T. Strohmann', 'S. Sun','W. Zhang', 'M. Dubinko', 'R. Kumar', 'J. Magnani', 'J. Novak', 'P. Raghavan','A. Tomkins', 'U. Feige','F. M. Suchanek','N. Preda','R. Swan','J. Allan', 'T. Tran', 'A. Ceroni', 'M. Georgescu', 'K. D. Naini', 'M. Fisichella', 'T. A. Tuan', 'S. Elbassuoni', 'N. Preda','G. Weikum','Y. Wang', 'M. Zhu', 'L. Qu', 'M. Spaniol', 'G. Weikum', 'G. Weikum', 'N. Ntarmos', 'M. Spaniol', 'P. Triantallou', 'A. A. Benczr', 'S. Kirkpatrick', 'P. Rigaux','M. Williamson', 'X. W. Zhao', 'Y. Guo', 'R. Yan', 'Y. He','X. Li'] p19pdf_allauthors=['Tim Althoff','Xin Luna Dong','Kevin Murphy','Safa Alai','Van Dang','Wei Zhang','A. 
Ahmed', 'C. H. Teo', 'S. Vishwanathan','A. Smola','J. Allan', 'R. Gupta', 'V. Khandelwal', 'D. Graus', 'M.-H. Peetz', 'D. Odijk', 'O. de Rooij', 'M. de Rijke','T. Huet', 'J. Biega', 'F. M. Suchanek','H. Ji', 'T. Cassidy', 'Q. Li','S. Tamang', 'A. Kannan', 'S. Baker', 'K. Ramnath', 'J. Fiss', 'D. Lin', 'L. Vanderwende', 'R. Ansary', 'A. Kapoor', 'Q. Ke', 'M. Uyttendaele', 'S. M. Katz','A. Krause','D. Golovin','J. Leskovec', 'A. Krause', 'C. Guestrin', 'C. Faloutsos', 'J. VanBriesen','N. Glance','J. Li','C. Cardie','J. Li','C. Cardie','C.-Y. Lin','H. Lin','J. A. Bilmes' 'X. Ling','D. S. Weld', 'A. Mazeika', 'T. Tylenda','G. Weikum','M. Minoux', 'G. L. Nemhauser', 'L. A. Wolsey', 'M. L. Fisher','R. Qian','D. Shahaf', 'C. Guestrin','E. Horvitz','T. Althoff', 'X. L. Dong', 'K. Murphy', 'S. Alai', 'V. Dang','W. Zhang','R. A. Baeza-Yates', 'B. Ribeiro-Neto', 'D. Shahaf', 'J. Yang', 'C. Suen', 'J. Jacobs', 'H. Wang', 'J. Leskovec', 'W. Shen', 'J. Wang', 'J. Han','D. Bamman', 'N. Smith','K. Bollacker', 'C. Evans', 'P. Paritosh', 'T. Sturge', 'J. Taylor', 'R. Sipos', 'A. Swaminathan', 'P. Shivaswamy', 'T. Joachims','K. Sprck Jones','G. Calinescu', 'C. Chekuri', 'M. Pl','J. Vondrk', 'F. M. Suchanek', 'G. Kasneci','G. Weikum', 'J. Carbonell' ,'J. Goldstein','B. Carterette', 'P. N. Bennett', 'D. M. Chickering', 'S. T. Dumais','A. Dasgupta', 'R. Kumar','S. Ravi','Q. X. Do', 'W. Lu', 'D. Roth','X. Dong', 'E. Gabrilovich', 'G. Heitz', 'W. Horn', 'N. Lao', 'K. Murphy', 'T. Strohmann', 'S. Sun','W. Zhang', 'M. Dubinko', 'R. Kumar', 'J. Magnani', 'J. Novak', 'P. Raghavan','A. Tomkins', 'U. Feige','F. M. Suchanek','N. Preda','R. Swan','J. Allan', 'T. Tran', 'A. Ceroni', 'M. Georgescu', 'K. D. Naini', 'M. Fisichella', 'T. A. Tuan', 'S. Elbassuoni', 'N. Preda','G. Weikum','Y. Wang', 'M. Zhu', 'L. Qu', 'M. Spaniol', 'G. Weikum', 'G. Weikum', 'N. Ntarmos', 'M. Spaniol', 'P. Triantallou', 'A. A. Benczr', 'S. Kirkpatrick', 'P. Rigaux','M. Williamson', 'X. W. Zhao', 'Y. Guo', 'R. Yan', 'Y. He','X. Li'] Explanation: Conclusion We covered the entire data science pipeline in a natural language processing job that compared the performance of three different NERC tools. A core task in this pipeline involved ingesting plaintext into an NLTK corpus so that we could easily retrieve and manipulate the corpus. Finally, we used the results from the various NERC tools to create a simplistic ensemble classifier that improved the overall performance. The techniques in this post can be applied to other domains, larger datasets or any other corpus. Everything I used in this post (with the exception of the Regular expression resource from Coursera) was not taught in a classroom or structured learning experience. It all came from online resources, posts from others, and books (that includes learning how to code in Python). If you have the motivation, you can do it. Throughout the article, there are hyperlinks to resources and reading materials for reference, but here is a central list: Requirements to run this code in iPython notebook or on your machine Natural Language Toolkit Book (free online resource) and the NLTK Standard Chunker and a post on how to use the chunker Polyglot natural language pipeline for massive muliligual applications and the journal article describing the word classification model Stanford Named Entity Recognizer and the NLTK interface to the Stanford NER and a post on how to use the interface Python Pandas is a must have tool for anyone who does analysis in Python. 
The best book I've used to date is Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython Intuitive description and examples of Python's standard library set module Discussion of ensemble classifiers Nice module to print tables in standard Python output called tabulate Regular expression training (more examples in earlier sections) Python library to extract text from PDF and post on available Python tools to extract text from a PDF ACM Digital Library to purchase journal articles to completely recreate this exercise My quick web-scraping code to pull back abstracts and authors from KDD 2015; you can apply this same analysis to the web-acquired dataset References 1. (2014). Text Mining and its Business Applications - CodeProject. Retrieved December 26, 2015, from http://www.codeproject.com/Articles/822379/Text-Mining-and-its-Business-Applications. 2. Suchanek, F., & Weikum, G. (2013). Knowledge harvesting in the big-data era. Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data. ACM. 3. Nadeau, D., & Sekine, S. (2007). A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1), 3-26. 4. Ojeda, Tony, Sean Patrick Murphy, Benjamin Bengfort, and Abhijit Dasgupta. Practical Data Science Cookbook: 89 Hands-on Recipes to Help You Complete Real-world Data Science Projects in R and Python. N.p.: n.p., n.d. Print. Appendix Create all the functions in the appendix before running this code in a notebook.
Hand labeled entities from two journal articles End of explanation # attempting function with gold top section...Normal case done def toppull(docnum=None,section='top',full = False): from emailextractor import file_to_str, get_emails # paste code to .py file from following link and save within your environment path to call it: https://gist.github.com/dideler/5219706 ans={} failids = [] section = section.lower() if docnum is None and full == False: raise BaseException("Enter target file to extract data from") if docnum is None and full == True: text=kddcorpus.raw(docnum).lower() # to return output from entire corpus if full == True: if section == 'top': section = ["ABSTRACT","Abstract","Bio","Panel Summary"] for fileid in kddcorpus.fileids(): text = kddcorpus.raw(fileid) for sect in section: try: part1="(.+)(?="+sect+")" #print "re.compile"+"("+part1+")" p=re.compile(part1) target = p.search(re.sub('[\s]'," ", text)).group() #print docnum,len(target),len(text) emails = tuple(get_emails(target)) ans[str(fileid)]={} ans[str(fileid)]["top"]=target.strip() ans[str(fileid)]["charcount"]=len(target) ans[str(fileid)]["emails"]=emails #print [fileid,len(target),len(text)] break except AttributeError: failids.append(fileid) pass return ans return failids # to return output from one document else: ans = {} failids=[] text = kddcorpus.raw(docnum) if section == "top": section = ["ABSTRACT","Abstract","Bio","Panel Summary"] text = kddcorpus.raw(docnum) for sect in section: try: part1="(.+)(?="+sect+")" #print "re.compile"+"("+part1+")" p=re.compile(part1) target = p.search(re.sub('[\s]'," ", text)).group() #print docnum,len(target),len(text) emails = tuple(get_emails(target)) ans[str(docnum)]={} ans[str(docnum)]["top"]=target.strip() ans[str(docnum)]["charcount"]=len(target) ans[str(docnum)]["emails"]=emails #print [fileid,len(target),len(text)] break except AttributeError: failids.append(fileid) pass return ans return failids Explanation: Function to pull from top section of document End of explanation def nltktreelist(text): from operator import itemgetter text = text persons = [] organizations = [] locations =[] genpurp = [] for l in nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text))): if isinstance(l,nltk.tree.Tree): if l.label() == 'PERSON': if len(l)== 1: if l[0][0] in persons: pass else: persons.append(l[0][0]) else: if " ".join(map(itemgetter(0), l)) in persons: pass else: persons.append(" ".join(map(itemgetter(0), l)).strip("*")) for o in nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text))): if isinstance(o,nltk.tree.Tree): if o.label() == 'ORGANIZATION': if len(o)== 1: if o[0][0] in organizations: pass else: organizations.append(o[0][0]) else: if " ".join(map(itemgetter(0), o)) in organizations: pass else: organizations.append(" ".join(map(itemgetter(0), o)).strip("*")) for o in nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text))): if isinstance(o,nltk.tree.Tree): if o.label() == 'LOCATION': if len(o)== 1: if o[0][0] in locations: pass else: locations.append(o[0][0]) else: if " ".join(map(itemgetter(0), o)) in locations: pass else: locations.append(" ".join(map(itemgetter(0), o)).strip("*")) for e in nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text))): if isinstance(o,nltk.tree.Tree): if o.label() == 'GPE': if len(o)== 1: if o[0][0] in genpurp: pass else: genpurp.append(o[0][0]) else: if " ".join(map(itemgetter(0), o)) in genpurp: pass else: genpurp.append(" ".join(map(itemgetter(0), o)).strip("*")) results = {} results['persons']=persons results['organizations']=organizations 
results['locations']=locations results['genpurp'] = genpurp return results Explanation: Function to build list of named entity classes from Standard NLTK Chunker End of explanation def get_continuous_chunks(string): string = string continuous_chunk = [] current_chunk = [] for token, tag in stner.tag(string.split()): if tag != "O": current_chunk.append((token, tag)) else: if current_chunk: # if the current chunk is not empty continuous_chunk.append(current_chunk) current_chunk = [] # Flush the final current_chunk into the continuous_chunk, if any. if current_chunk: continuous_chunk.append(current_chunk) named_entities = continuous_chunk named_entities_str = [" ".join([token for token, tag in ne]) for ne in named_entities] named_entities_str_tag = [(" ".join([token for token, tag in ne]), ne[0][1]) for ne in named_entities] persons = [] for l in [l.split(",") for l,m in named_entities_str_tag if m == "PERSON"]: for m in l: for n in m.strip().split(","): if len(n)>0: persons.append(n.strip("*")) organizations = [] for l in [l.split(",") for l,m in named_entities_str_tag if m == "ORGANIZATION"]: for m in l: for n in m.strip().split(","): n.strip("*") if len(n)>0: organizations.append(n.strip("*")) locations = [] for l in [l.split(",") for l,m in named_entities_str_tag if m == "LOCATION"]: for m in l: for n in m.strip().split(","): if len(n)>0: locations.append(n.strip("*")) dates = [] for l in [l.split(",") for l,m in named_entities_str_tag if m == "DATE"]: for m in l: for n in m.strip().split(","): if len(n)>0: dates.append(n.strip("*")) money = [] for l in [l.split(",") for l,m in named_entities_str_tag if m == "MONEY"]: for m in l: for n in m.strip().split(","): if len(n)>0: money.append(n.strip("*")) time = [] for l in [l.split(",") for l,m in named_entities_str_tag if m == "TIME"]: for m in l: for n in m.strip().split(","): if len(n)>0: money.append(n.strip("*")) percent = [] for l in [l.split(",") for l,m in named_entities_str_tag if m == "PERCENT"]: for m in l: for n in m.strip().split(","): if len(n)>0: money.append(n.strip("*")) entities={} entities['persons']= persons entities['organizations']= organizations entities['locations']= locations #entities['dates']= dates #entities['money']= money #entities['time']= time #entities['percent']= percent return entities Explanation: Function to get lists of entities from Stanford NER End of explanation # attempting function with gold keywords.... def keypull(docnum=None,section='keywords',full = False): ans={} failids = [] section = section.lower() if docnum is None and full == False: raise BaseException("Enter target file to extract data from") if docnum is None and full == True: text=kddcorpus.raw(docnum).lower() # to return output from entire corpus if full == True: for fileid in kddcorpus.fileids(): text = kddcorpus.raw(fileid).lower() if section == "keywords": section1="keywords" target = "" section2=["1. introduction ","1. introd ","1. motivation","(1. 
tutorial )"," permission to make "," permission to make","( permission to make digital )"," bio ","abstract: ","1.motivation" ] part1= "(?<="+str(section1)+")(.+)" for sect in section2: try: part2 = "(?="+str(sect)+")" p=re.compile(part1+part2) target=p.search(re.sub('[\s]'," ",text)).group(1) if len(target) >50: if len(target) > 300: target = target[:200] else: target = target ans[str(fileid)]={} ans[str(fileid)]["keywords"]=target.strip() ans[str(fileid)]["charcount"]=len(target) #print [fileid,len(target),len(text)] break else: if len(target)==0: failids.append(fileid) pass except AttributeError: failids.append(fileid) pass set(failids) return ans # to return output from one document else: ans = {} text=kddcorpus.raw(docnum).lower() if full == False: if section == "keywords": section1="keywords" target = "" section2=["1. introduction ","1. introd ","1. motivation","permission to make ","1.motivation" ] part1= "(?<="+str(section1)+")(.+)" for sect in section2: try: part2 = "(?="+str(sect)+")" p=re.compile(part1+part2) target=p.search(re.sub('[\s]'," ",text)).group(1) if len(target) >50: if len(target) > 300: target = target[:200] else: target = target ans[docnum]={} ans[docnum]["keywords"]=target.strip() ans[docnum]["charcount"]=len(target) break except: pass return ans return failids Explanation: Function to pull Keywords section only End of explanation # attempting function with gold abstracts...Normal case done def abpull(docnum=None,section='abstract',full = False): ans={} failids = [] section = section.lower() if docnum is None and full == False: raise BaseException("Enter target file to extract data from") if docnum is None and full == True: text=kddcorpus.raw(docnum).lower() # to return output from entire corpus if full == True: for fileid in kddcorpus.fileids(): text = kddcorpus.raw(fileid) if section == "abstract": section1=["ABSTRACT", "Abstract "] target = "" section2=["Categories and Subject Descriptors","Categories & Subject Descriptors","Keywords","INTRODUCTION"] for fileid in kddcorpus.fileids(): text = kddcorpus.raw(fileid) for sect1 in section1: for sect2 in section2: part1= "(?<="+str(sect1)+")(.+)" part2 = "(?="+str(sect2)+")" p = re.compile(part1+part2) try: target=p.search(re.sub('[\s]'," ",text)).group() if len(target) > 50: ans[str(fileid)]={} ans[str(fileid)]["abstract"]=target.strip() ans[str(fileid)]["charcount"]=len(target) #print [fileid,len(target),len(text)] break else: failids.append(fileid) pass except AttributeError: pass return ans # to return output from one document else: ans = {} failids=[] text = kddcorpus.raw(docnum).lower() if section == "abstract": section1=["ABSTRACT", "Abstract "] target = "" section2=["Categories and Subject Descriptors","Categories & Subject Descriptors","Keywords","INTRODUCTION"] for sect1 in section1: for sect2 in section2: part1= "(?<="+str(sect1)+")(.+?)" part2 = "(?="+str(sect2)+"[\s]?)" p = re.compile(part1+part2) try: target=p.search(re.sub('[\s]'," ",text)).group() if len(target) > 50: ans[str(docnum)]={} ans[str(docnum)]["abstract"]=target.strip() ans[str(docnum)]["charcount"]=len(target) #print [docnum,len(target),len(text)] break else: failids.append(docnum) pass except AttributeError: pass return ans return failids Explanation: Function to pull Abstract only End of explanation
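As a quick, illustrative sanity check of these appendix helpers (mirroring the toppull and refpull calls made on p19.txt earlier in the post):
# Illustrative usage only -- assumes the corpus has been built as above
print toppull("p19.txt")["p19.txt"]["charcount"]   # character count of the extracted "top" section
print refpull("p19.txt")["p19.txt"]["refcount"]    # number of references detected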
2,440
Given the following text description, write Python code to implement the functionality described below step by step Description: PR Step2: Current Darknet, and Proposed Changes Step3: 1. fastai Darknet Loss Function Darknet without LogSoftmax layer and with NLL loss Step4: fastai.conv_learner logic sets criterion to torch.nn.functional.nll_loss Step5: There is no final activation layer. The criterion will be applied to the final layer's output Step6: In this case the Learning Rate Finder 'fails' due to very small - often negative - loss values. Step7: Sometimes the LR Finder manages to produce a plot, but the results leave much to be desired Step8: Darknet with Cross Entropy loss Step9: There is no final activation layer. The criterion will be applied to the final layer's output Step10: This is the shape of plot we expect to see. Proposal Step11: fastai.conv_learner logic sets criterion to torch.nn.functional.nll_loss Step12: However cross_entropy is NLL(LogSoftmax). The final layer is a LogSoftmax activation. The NLL criterion applied to its output will produce a Cross Entropy loss function. Step13: 2. Learner Comparison Step14: A version of darknet.py with the proposed changes above is used. Step15: Comparing resnet18 from PyTorch to resnet18 from FastAI Step16: The fastai library does not alter the resnet18 model it imports from PyTorch. For comparison, the darknet53 import from fastai looks like this Step17: By contrast, the types of the initialized models Step18: The PyTorch ResNet18 model has no output activation layer. Step19: When a learner is intialized via ConvLearner.pretrained, the fastai library adds a classifier head to the model via the ConvnetBuilder class. NOTE that the definition of the model is passed in, and not a model object. Step20: the fastai library adds the necessary LogSoftmax layer to the end of the model NOTE Step21: But since the final layer is an nn.LogSoftmax, the effective loss function is Cross Entropy. NOTE that this does not happen when the learner is initalized via .from_model_data Step22: 'Strange/Normal' behavior Step23: However the current version of Darknet is not accepted by ConvLearner.pretrained at all. This makes sense, given that the model is not yet pretrained, but also suggests further work is needed to integrate the model into the library. Step25: The from_model_data method works, as seen in section 1. Misc
Python Code: %matplotlib inline %reload_ext autoreload %autoreload 2 from pathlib import Path from fastai.conv_learner import * # from fastai.models import darknet Explanation: PR: Adding LogSoftmax layer to Darknet for Cross Entropy Loss Wayne Nixalo - 2018/4/24 0. Proposed Change; Setup Dataset is the fast.ai ImageNet sampleset. Jupyter kernel restarted between ImageNet learner runs due to model size. End of explanation import torch import torch.nn as nn import torch.nn.functional as F from fastai.layers import * ### <<<------ class ConvBN(nn.Module): "convolutional layer then batchnorm" def __init__(self, ch_in, ch_out, kernel_size = 3, stride=1, padding=0): super().__init__() self.conv = nn.Conv2d(ch_in, ch_out, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) self.bn = nn.BatchNorm2d(ch_out, momentum=0.01) self.relu = nn.LeakyReLU(0.1, inplace=True) def forward(self, x): return self.relu(self.bn(self.conv(x))) class DarknetBlock(nn.Module): def __init__(self, ch_in): super().__init__() ch_hid = ch_in//2 self.conv1 = ConvBN(ch_in, ch_hid, kernel_size=1, stride=1, padding=0) self.conv2 = ConvBN(ch_hid, ch_in, kernel_size=3, stride=1, padding=1) def forward(self, x): return self.conv2(self.conv1(x)) + x class Darknet(nn.Module): "Replicates the darknet classifier from the YOLOv3 paper (table 1)" def make_group_layer(self, ch_in, num_blocks, stride=1): layers = [ConvBN(ch_in,ch_in*2,stride=stride)] for i in range(num_blocks): layers.append(DarknetBlock(ch_in*2)) return layers def __init__(self, num_blocks, num_classes=1000, start_nf=32): super().__init__() nf = start_nf layers = [ConvBN(3, nf, kernel_size=3, stride=1, padding=1)] for i,nb in enumerate(num_blocks): layers += self.make_group_layer(nf, nb, stride=(1 if i==1 else 2)) nf *= 2 layers += [nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(nf, num_classes)] self.layers = nn.Sequential(*layers) def forward(self, x): return self.layers(x) ######################## ### Proposed Version ### class PR_Darknet(nn.Module): "Replicates the darknet classifier from the YOLOv3 paper (table 1)" def make_group_layer(self, ch_in, num_blocks, stride=1): layers = [ConvBN(ch_in,ch_in*2,stride=stride)] for i in range(num_blocks): layers.append(DarknetBlock(ch_in*2)) return layers def __init__(self, num_blocks, num_classes=1000, start_nf=32): super().__init__() nf = start_nf layers = [ConvBN(3, nf, kernel_size=3, stride=1, padding=1)] for i,nb in enumerate(num_blocks): layers += self.make_group_layer(nf, nb, stride=(1 if i==1 else 2)) nf *= 2 layers += [nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(nf, num_classes)] layers += [nn.LogSoftmax()] ### <<<------ self.layers = nn.Sequential(*layers) def forward(self, x): return self.layers(x) ### /Proposed Version ### ######################## def darknet_53(num_classes=1000): return Darknet([1,2,8,8,4], num_classes) def darknet_small(num_classes=1000): return Darknet([1,2,4,8,4], num_classes) def darknet_mini(num_classes=1000): return Darknet([1,2,4,4,2], num_classes, start_nf=24) def darknet_mini2(num_classes=1000): return Darknet([1,2,8,8,4], num_classes, start_nf=16) def darknet_mini3(num_classes=1000): return Darknet([1,2,4,4], num_classes) # demonstrator def PR_darknet_53(num_classes=1000): return PR_Darknet([1,2,8,8,4], num_classes) def display_head(fastai_learner, λ_name=None, show_nums=False): displays final conv block and network head. 
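    # (Added note) λ_name is the block name searched for below to locate the end of the
    # conv trunk; it defaults to 'DarknetBlock', and the ResNet comparisons later in this
    # notebook pass λ_name='BasicBlock'.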
# parse if λ_name == None: λ_name='DarknetBlock' fastai_learner = fastai_learner[0] if show_nums: fastai_learner = str(fastai_learner).split('\n') n = len(fastai_learner) else: n = len(str(fastai_learner).split('\n')) j = 1 # find final conv block for i in range(n): if λ_name in str(fastai_learner[-j]): break j += 1 # print head & 'neck' for i in range(j): print(fastai_learner[i-j]) # don't mind the λ's.. l's alone look too much like 1's # It's easy to switch keyboards on a Mac or Windows (ctrl/win-space) # fn NOTE: the `learner[0]` for Darknet is the same as `learner` # for other models; hence the if/else logic to keep printouts neat # show_nums displays layer numbers - kinda PATH = Path('data/imagenet') sz = 256 bs = 32 tfms = tfms_from_stats(imagenet_stats, sz) model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms, val_name='train') Explanation: Current Darknet, and Proposed Changes: NOTE: from .layers import * changed to from fastai.layers import *, preventing ModuleNotFoundError. End of explanation f_model = darknet_53() learner = ConvLearner.from_model_data(f_model, model_data) Explanation: 1. fastai Darknet Loss Function Darknet without LogSoftmax layer and with NLL loss End of explanation learner.crit Explanation: fastai.conv_learner logic sets criterion to torch.nn.functional.nll_loss: End of explanation display_head(learner) Explanation: There is no final activation layer. The criterion will be applied to the final layer's output: End of explanation learner.lr_find() learner.sched.plot() Explanation: In this case the Learning Rate Finder 'fails' due to very small - often negative - loss values. End of explanation learner.lr_find() learner.sched.plot() Explanation: Sometimes the LR Finder manages to produce a plot, but the results leave much to be desired: End of explanation f_model = darknet_53() learner = ConvLearner.from_model_data(f_model, model_data, crit=F.cross_entropy) learner.crit Explanation: Darknet with Cross Entropy loss End of explanation display_head(learner) learner.lr_find() learner.sched.plot() Explanation: There is no final activation layer. The criterion will be applied to the final layer's output: End of explanation f_model = PR_darknet_53() learner = ConvLearner.from_model_data(f_model, model_data) Explanation: This is the shape of plot we expect to see. Proposal: Darknet with LogSoftmax layer and NLL loss (Cross Entropy loss) End of explanation learner.crit Explanation: fastai.conv_learner logic sets criterion to torch.nn.functional.nll_loss: End of explanation display_head(learner) learner.lr_find() learner.sched.plot() Explanation: However cross_entropy is NLL(LogSoftmax). The final layer is a LogSoftmax activation. The NLL criterion applied to its output will produce a Cross Entropy loss function. End of explanation from fastai.conv_learner import * PATH = Path('data/cifar10') sz = 64 # darknet53 architecture can't handle 32x32 small input bs = 64 tfms = tfms_from_stats(imagenet_stats, sz) model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms, val_name='test') Explanation: 2. Learner Comparison: ResNet18 & DarkNet53 In working on this, I found some behavior that seemed odd, but may be normal. The CIFAR-10 dataset from fast.ai will be used here. End of explanation from fastai.models import darknet Explanation: A version of darknet.py with the proposed changes above is used. 
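As a small, self-contained sanity check of the equivalence this proposal leans on (illustrative only; random logits and arbitrary targets):
# NLL applied to log-softmax output should match cross entropy computed on the raw logits
logits = torch.randn(4, 10)
targets = torch.tensor([1, 0, 3, 7])
nll_of_logsoftmax = F.nll_loss(F.log_softmax(logits, dim=1), targets)
cross_entropy = F.cross_entropy(logits, targets)
print(nll_of_logsoftmax.item(), cross_entropy.item())  # the two values should agree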
End of explanation from torchvision.models import resnet18 resnet18 Explanation: Comparing resnet18 from PyTorch to resnet18 from FastAI: End of explanation darknet.darknet_53 Explanation: The fastai library does not alter the resnet18 model it imports from PyTorch. For comparison, the darknet53 import from fastai looks like this: End of explanation type(resnet18(num_classes=10)) type(darknet.darknet_53(num_classes=10)) Explanation: By contrast, the types of the initialized models: End of explanation f_model = resnet18() display_head(str(f_model), λ_name='BasicBlock', show_nums=True) Explanation: The PyTorch ResNet18 model has no output activation layer. End of explanation learner = ConvLearner.pretrained(resnet18, model_data) display_head(learner, λ_name='BasicBlock', show_nums=True) Explanation: When a learner is intialized via ConvLearner.pretrained, the fastai library adds a classifier head to the model via the ConvnetBuilder class. NOTE that the definition of the model is passed in, and not a model object. End of explanation learner.crit Explanation: the fastai library adds the necessary LogSoftmax layer to the end of the model NOTE: default constructor for resnet18 & darknet is 1000 classes (ImageNet). fastai lib finds the correct num_classes from the ModelData object. That's why the resnet18 model above has 1000 output features, and the resnet18 learner below it has the correct 10. The criterion, the loss function, of the learner is still F.nll_loss: End of explanation learner = ConvLearner.from_model_data(resnet18(num_classes=10), model_data) display_head(learner, λ_name='BasicBlock', show_nums=True) Explanation: But since the final layer is an nn.LogSoftmax, the effective loss function is Cross Entropy. NOTE that this does not happen when the learner is initalized via .from_model_data: End of explanation learner = ConvLearner.pretrained(resnet18, model_data) learner = ConvLearner.pretrained(resnet18(num_classes=10), model_data) Explanation: 'Strange/Normal' behavior: ConvLearner.pretrained will only accept model definitions, not models themselves: End of explanation learner = ConvLearner.pretrained(darknet.darknet_53, model_data) learner = ConvLearner.pretrained(darknet.darknet_53(num_classes=10), model_data) Explanation: However the current version of Darknet is not accepted by ConvLearner.pretrained at all. This makes sense, given that the model is not yet pretrained, but also suggests further work is needed to integrate the model into the library. End of explanation # Use this version of `display_head` if the other is too finicky for you. # NOTE: fastai learners other than darknet will have to be entered as: # [str(learner_or_model).split('\n')] def display_head(fastai_learner, λ_name=None): displays final conv block and network head. n = len(fastai_learner[0]) if λ_name == None: λ_name='DarknetBlock' j = 1 # find final conv block for i in range(n): if λ_name in str(fastai_learner[0][-j]): break j += 1 # print head & 'neck' for i in range(j): print(fastai_learner[0][i-j]) # display_head(learner, λ_name='BasicBlock') display_head(learner1) #darknet learner print('--------') display_head([str(learner2).split('\n')], λ_name='BasicBlock') #resnet learner print('--------') display_head([str(f_model).split('\n')], λ_name='BasicBlock') #resnet model Explanation: The from_model_data method works, as seen in section 1. Misc End of explanation
2,441
Given the following text description, write Python code to implement the functionality described below step by step Description: Storage command-line tool The Google Cloud SDK provides a set of commands for working with data stored in Cloud Storage. This notebook introduces several gsutil commands for interacting with Cloud Storage. Note that shell commands in a notebook must be prepended with a !. List available commands The gsutil command can be used to perform a wide array of tasks. Run the help command to view a list of available commands Step1: Create a storage bucket Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data. Start by defining a globally unique name. For more information about naming buckets, see Bucket name requirements. Step2: NOTE Step3: List buckets in a project Replace 'your-project-id' in the cell below with your project ID and run the cell to list the storage buckets in your project. Step4: The response should look like the following Step5: The response should look like the following Step6: List blobs in a bucket Step7: The response should look like the following Step8: The response should look like the following Step9: Cleaning up Delete a blob Step10: Delete a bucket The following command deletes all objects in the bucket before deleting the bucket itself.
Python Code: !gsutil help Explanation: Storage command-line tool The Google Cloud SDK provides a set of commands for working with data stored in Cloud Storage. This notebook introduces several gsutil commands for interacting with Cloud Storage. Note that shell commands in a notebook must be prepended with a !. List available commands The gsutil command can be used to perform a wide array of tasks. Run the help command to view a list of available commands: End of explanation # Replace the string below with a unique name for the new bucket bucket_name = "your-new-bucket" Explanation: Create a storage bucket Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data. Start by defining a globally unique name. For more information about naming buckets, see Bucket name requirements. End of explanation !gsutil mb gs://{bucket_name}/ Explanation: NOTE: In the examples below, the bucket_name and project_id variables are referenced in the commands using {} and $. If you want to avoid creating and using variables, replace these interpolated variables with literal values and remove the {} and $ characters. Next, create the new bucket with the gsutil mb command: End of explanation # Replace the string below with your project ID project_id = "your-project-id" !gsutil ls -p $project_id Explanation: List buckets in a project Replace 'your-project-id' in the cell below with your project ID and run the cell to list the storage buckets in your project. End of explanation !gsutil ls -L -b gs://{bucket_name}/ Explanation: The response should look like the following: gs://your-new-bucket/ Get bucket metadata The next cell shows how to get information on metadata of your Cloud Storage buckets. To learn more about specific bucket properties, see Bucket locations and Storage classes. End of explanation !gsutil cp resources/us-states.txt gs://{bucket_name}/ Explanation: The response should look like the following: gs://your-new-bucket/ : Storage class: MULTI_REGIONAL Location constraint: US ... Upload a local file to a bucket Objects are the individual pieces of data that you store in Cloud Storage. Objects are referred to as "blobs" in the Python client library. There is no limit on the number of objects that you can create in a bucket. An object's name is treated as a piece of object metadata in Cloud Storage. Object names can contain any combination of Unicode characters (UTF-8 encoded) and must be less than 1024 bytes in length. For more information, including how to rename an object, see the Object name requirements. End of explanation !gsutil ls -r gs://{bucket_name}/** Explanation: List blobs in a bucket End of explanation !gsutil ls -L gs://{bucket_name}/us-states.txt Explanation: The response should look like the following: gs://your-new-bucket/us-states.txt Get a blob and display metadata See Viewing and editing object metadata for more information about object metadata. End of explanation !gsutil cp gs://{bucket_name}/us-states.txt resources/downloaded-us-states.txt Explanation: The response should look like the following: gs://your-new-bucket/us-states.txt: Creation time: Fri, 08 Feb 2019 05:23:28 GMT Update time: Fri, 08 Feb 2019 05:23:28 GMT Storage class: STANDARD Content-Language: en Content-Length: 637 Content-Type: text/plain ... 
Download a blob to a local directory End of explanation !gsutil rm gs://{bucket_name}/us-states.txt Explanation: Cleaning up Delete a blob End of explanation !gsutil rm -r gs://{bucket_name}/ Explanation: Delete a bucket The following command deletes all objects in the bucket before deleting the bucket itself. End of explanation
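The same create/upload/list/delete cycle can also be driven from Python rather than the gsutil CLI. The sketch below assumes the google-cloud-storage client library and default application credentials; the bucket name is the same placeholder used above.
```python
from google.cloud import storage

bucket_name = "your-new-bucket"   # placeholder, as above
client = storage.Client()

bucket = client.create_bucket(bucket_name)                        # gsutil mb
blob = bucket.blob("us-states.txt")
blob.upload_from_filename("resources/us-states.txt")              # upload (gsutil cp)
blob.download_to_filename("resources/downloaded-us-states.txt")   # download (gsutil cp)

for b in client.list_blobs(bucket_name):                          # gsutil ls -r
    print(b.name)

blob.delete()                                                      # gsutil rm
bucket.delete()                                                    # delete the now-empty bucket
```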
2,442
Given the following text description, write Python code to implement the functionality described below step by step Description: What Do We Need From Slides 📑 Easy to create. Easy to share. Step1: Our Data 📉
Python Code: import pandas as pd import numpy as np import janitor import pandas_flavor as pf import janitor def load_data(): return pd.read_csv('https://github.com/Kokkalo4/Kaggle-SF-Salaries/raw/master/Salaries.csv')\ .replace('Not Provided', np.nan)\ .astype({"BasePay":float, "OtherPay":float}) \ .select_columns(["Id", "EmployeeName", "JobTitle", "BasePay", "OtherPay", "TotalPay","Year","Agency"])\ .head(10) Explanation: What Do We Need From Slides 📑 Easy to create. Easy to share. End of explanation df = load_data() df.tail() df.style.format({"BasePay": "${:20,.0f}", "OtherPay": "${:20,.0f}", "TotalPay": "${:20,.0f}", "TotalPayBenefits":"${:20,.0f}"})\ .format({"JobTitle": lambda x:x.lower(), "EmployeeName": lambda x:x.lower()})\ .hide_index()\ .background_gradient(cmap='Blues') df.style.format({"BasePay": "${:20,.0f}", "OtherPay": "${:20,.0f}", "TotalPay": "${:20,.0f}", "TotalPayBenefits":"${:20,.0f}"})\ .format({"JobTitle": lambda x:x.lower(), "EmployeeName": lambda x:x.lower()})\ .hide_index()\ .bar(subset=["OtherPay",], color='lightgreen')\ .bar(subset=["BasePay"], color='#ee1f5f')\ .bar(subset=["TotalPay"], color='#FFA07A') df Explanation: Our Data 📉 End of explanation
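A self-contained variant of the same Styler idea is sketched below on a small synthetic frame, so it runs without pyjanitor or the remote salaries CSV; note that newer pandas releases deprecate Styler.hide_index() in favor of Styler.hide(axis="index").
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "EmployeeName": ["alice", "bob", "carol", "dave"],
    "BasePay": rng.uniform(50_000, 150_000, 4),
    "OtherPay": rng.uniform(0, 30_000, 4),
})
demo["TotalPay"] = demo["BasePay"] + demo["OtherPay"]

styled = (demo.style
          .format({"BasePay": "${:20,.0f}", "OtherPay": "${:20,.0f}", "TotalPay": "${:20,.0f}"})
          .hide(axis="index")                       # hide_index() on older pandas
          .bar(subset=["BasePay"], color="#ee1f5f")
          .bar(subset=["TotalPay"], color="#FFA07A"))
styled   # renders as a styled HTML table in a notebook cell
```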
2,443
Given the following text description, write Python code to implement the functionality described below step by step Description: Electric Machinery Fundamentals 5th edition Chapter 1 (Code examples) Example 1-10 Calculate and plot the velocity of a linear motor as a function of load. Import the PyLab namespace (provides a set of useful commands and constants like $\pi$) Step1: Define all the parameters Step2: Select the forces to apply to the bar Step3: Calculate the currents flowing in the motor Step4: Calculate the induced voltages on the bar Step5: Calculate the velocities of the bar Step6: Plot the velocity of the bar versus force
Python Code: %pylab notebook Explanation: Electric Machinery Fundamentals 5th edition Chapter 1 (Code examples) Example 1-10 Calculate and plot the velocity of a linear motor as a function of load. Import the PyLab namespace (provides a set of useful commands and constants like $\pi$) End of explanation VB = 120.0 # Battery voltage (V) r = 0.3 # Resistance (ohms) l = 1.0 # Bar length (m) B = 0.6 # Flux density (T) Explanation: Define all the parameters: End of explanation F = arange(0,51,10) # Force (N) F # Let's print the variable to check. # Can you explain why "arange(0,50,10)" would not give the array below? Explanation: Select the forces to apply to the bar: End of explanation i = F / (l * B) # Current (A) Explanation: Calculate the currents flowing in the motor: End of explanation eind = VB - i * r # Induced voltage (V) Explanation: Calculate the induced voltages on the bar: End of explanation v_bar = eind / (l * B); # Velocity (m/s) Explanation: Calculate the velocities of the bar: End of explanation plot(F, v_bar); rc('text', usetex=True) # enable LaTeX commands for plot title(r'\textbf{Plot of velocity versus applied force}') xlabel(r'\textbf{Force (N)}') ylabel(r'\textbf{Velocity (m/s)}') axis([0, 50, 0, 200]) grid() Explanation: Plot the velocity of the bar versus force End of explanation
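The quantities above combine into a single linear load line, v = VB/(lB) - rF/(lB)^2, so the plot is a straight line starting at 200 m/s at no load. A small sketch with explicit imports (instead of the %pylab magic) makes that explicit:
```python
import numpy as np
import matplotlib.pyplot as plt

VB, r, l, B = 120.0, 0.3, 1.0, 0.6        # battery voltage, resistance, bar length, flux density
F = np.arange(0, 51, 10)                   # applied force (N)

i = F / (l * B)                            # current (A)
eind = VB - i * r                          # induced voltage (V)
v_bar = eind / (l * B)                     # velocity (m/s)

# closed form of the same relation
assert np.allclose(v_bar, VB / (l * B) - r * F / (l * B) ** 2)

plt.plot(F, v_bar)
plt.xlabel("Force (N)")
plt.ylabel("Velocity (m/s)")
plt.title("Velocity versus applied force")
plt.grid(True)
plt.show()
```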
2,444
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic vectorization Vectorizing text is a fundamental concept in applying both supervised and unsupervised learning to documents. Basically, you can think of it as turning the words in a given text document into features, represented by a matrix. Rather than explicitly defining our features, as we did for the donor classification problem, we can instead take advantage of tools, called vectorizers, that turn each word into a feature best described as "The number of times Word X appears in this document". Here's an example with one bill title Step1: Think of this vector as a matrix with one row and 12 columns. The row corresponds to our document above. The columns each correspond to a word contained in that document (the first is "44277", the second is "act", etc.) The numbers correspond to the number of times each word appears in that document. You'll see that all words appear once, except the last one, "to", which appears twice. Now what happens if we add another bill and run it again? Step2: Now we've got two rows, each corresponding to a document. The columns correspond to all words contained in BOTH documents, with counts. For example, the first entry from the first column, "44277', appears once in the first document but zero times in the second. This, basically, is the concept of vectorization. Cleaning up our vectors As you might imagine, a document set with a relatively large vocabulary can result in vectors that are thousands and thousands of dimensions wide. This isn't necessarily bad, but in the interest of keeping our feature space as low-dimensional as possible, there are a few things we can do to clean them up. First is removing so-called "stop words" -- words like "and", "or", "the', etc. that appear in almost every document and therefore aren't especially useful. Scikit-learn's vectorizer objects make this easy Step3: Notice that our feature space is now a little smaller. We can use a similar trick to eliminate words that only appear a small number of times, which becomes useful when document sets get very large. Step4: This is a bad example for this document set, but it will help later -- I promise. Finally, we can also create features that comprise more than one word. These are known as N-grams, with the N being the number of words contained in the feature. Here is how you could create a feature vector of all 1-grams and 2-grams
Python Code: bill_titles = ['An act to amend Section 44277 of the Education Code, relating to teachers.'] vectorizer = CountVectorizer() features = vectorizer.fit_transform(bill_titles).toarray() print features print vectorizer.get_feature_names() Explanation: Basic vectorization Vectorizing text is a fundamental concept in applying both supervised and unsupervised learning to documents. Basically, you can think of it as turning the words in a given text document into features, represented by a matrix. Rather than explicitly defining our features, as we did for the donor classification problem, we can instead take advantage of tools, called vectorizers, that turn each word into a feature best described as "The number of times Word X appears in this document". Here's an example with one bill title: End of explanation bill_titles = ['An act to amend Section 44277 of the Education Code, relating to teachers.', 'An act relative to health care coverage'] features = vectorizer.fit_transform(bill_titles).toarray() print features print vectorizer.get_feature_names() Explanation: Think of this vector as a matrix with one row and 12 columns. The row corresponds to our document above. The columns each correspond to a word contained in that document (the first is "44277", the second is "act", etc.) The numbers correspond to the number of times each word appears in that document. You'll see that all words appear once, except the last one, "to", which appears twice. Now what happens if we add another bill and run it again? End of explanation new_vectorizer = CountVectorizer(stop_words='english') features = new_vectorizer.fit_transform(bill_titles).toarray() print features print new_vectorizer.get_feature_names() Explanation: Now we've got two rows, each corresponding to a document. The columns correspond to all words contained in BOTH documents, with counts. For example, the first entry from the first column, "44277', appears once in the first document but zero times in the second. This, basically, is the concept of vectorization. Cleaning up our vectors As you might imagine, a document set with a relatively large vocabulary can result in vectors that are thousands and thousands of dimensions wide. This isn't necessarily bad, but in the interest of keeping our feature space as low-dimensional as possible, there are a few things we can do to clean them up. First is removing so-called "stop words" -- words like "and", "or", "the', etc. that appear in almost every document and therefore aren't especially useful. Scikit-learn's vectorizer objects make this easy: End of explanation new_vectorizer = CountVectorizer(stop_words='english', min_df=2) features = new_vectorizer.fit_transform(bill_titles).toarray() print features print new_vectorizer.get_feature_names() Explanation: Notice that our feature space is now a little smaller. We can use a similar trick to eliminate words that only appear a small number of times, which becomes useful when document sets get very large. End of explanation new_vectorizer = CountVectorizer(stop_words='english', ngram_range=(1,2)) features = new_vectorizer.fit_transform(bill_titles).toarray() print features print new_vectorizer.get_feature_names() Explanation: This is a bad example for this document set, but it will help later -- I promise. Finally, we can also create features that comprise more than one word. These are known as N-grams, with the N being the number of words contained in the feature. 
Here is how you could create a feature vector of all 1-grams and 2-grams: End of explanation
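The snippets in this section use Python 2 print statements; a Python 3 sketch of the same vectorizer configurations (default, English stop words, and 1- and 2-grams) looks like this, with get_feature_names_out() standing in for the older get_feature_names():
```python
from sklearn.feature_extraction.text import CountVectorizer

bill_titles = [
    'An act to amend Section 44277 of the Education Code, relating to teachers.',
    'An act relative to health care coverage',
]

configs = [{}, {'stop_words': 'english'}, {'stop_words': 'english', 'ngram_range': (1, 2)}]
for kwargs in configs:
    vectorizer = CountVectorizer(**kwargs)
    features = vectorizer.fit_transform(bill_titles).toarray()
    print(kwargs)
    print(features)
    print(vectorizer.get_feature_names_out())   # get_feature_names() on older scikit-learn
```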
2,445
Given the following text description, write Python code to implement the functionality described below step by step Description: Figure 1 Start by loading some boiler plate Step1: And some more specialized dependencies Step2: Configuration for this figure. Step3: Open a chest located on a remote globus endpoint and load a remote json configuration file. Step4: We want to grab all the data for the selected frame.
Python Code: %matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (10.0, 16.0) import matplotlib.pyplot as plt import numpy as np from scipy.interpolate import interp1d, InterpolatedUnivariateSpline from scipy.optimize import bisect import json from functools import partial class Foo: pass Explanation: Figure 1 Start by loading some boiler plate: matplotlib, numpy, scipy, json, functools, and a convenience class. End of explanation from chest import Chest from slict import CachedSlict from glopen import glopen, glopen_many Explanation: And some more specialized dependencies: 1. Slict provides a convenient slice-able dictionary interface 2. Chest is an out-of-core dictionary that we'll hook directly to a globus remote using... 3. glopen is an open-like context manager for remote globus files End of explanation config = Foo() config.name = "HighAspect/HA_visc/HA_visc" #config.arch_end = "alcf#dtn_mira/projects/alpha-nek" config.arch_end = "maxhutch#alpha-admin/pub/" config.frame = 1 config.lower = .25 config.upper = .75 Explanation: Configuration for this figure. End of explanation c = Chest(path = "{:s}-results".format(config.name), open = partial(glopen, endpoint=config.arch_end), open_many = partial(glopen_many, endpoint=config.arch_end)) sc = CachedSlict(c) with glopen( "{:s}.json".format(config.name), mode='r', endpoint = config.arch_end, ) as f: params = json.load(f) Explanation: Open a chest located on a remote globus endpoint and load a remote json configuration file. End of explanation T = sc[:,'H'].keys()[config.frame] frame = sc[T,:] c.prefetch(frame.full_keys()) import yt #test = frame['t_yz'] + 1. test = np.tile(frame['t_yz'].transpose(),(1,1,1)).transpose() + 1. data = dict( density = (test, "g/cm**3") ) bbox = np.array([[params['root_mesh'][1], params['extent_mesh'][1]], [params['root_mesh'][2], params['extent_mesh'][2]], [0., 1.]]) #bbox = np.array([[params['root_mesh'][1], params['extent_mesh'][1]], # [params['root_mesh'][2], params['extent_mesh'][2]]]) ds = yt.load_uniform_grid(data, test.shape, bbox=bbox, periodicity=(False,True,False), length_unit="m") slc = yt.SlicePlot(ds, "z", "density", width=(1,16)) slc.set_buff_size((14336,448)) #slc.pan((.25,7)) slc.show() sl = ds.slice("z", 0).to_frb((1., 'm'), (128,128), height=(32.,'m')) plt.imshow(sl['density'].d) plt.show() plt.figure() plt.imshow(test[:,7000:7500,0].transpose()) plt.show() Explanation: We want to grab all the data for the selected frame. End of explanation
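The partial(glopen, endpoint=...) calls above are what let Chest treat a remote endpoint like a local open(): functools.partial pre-binds the endpoint keyword so the resulting callable has the open-like signature Chest expects. A tiny stand-in sketch of that pattern (fake_glopen is a hypothetical local substitute, not part of glopen):
```python
from functools import partial

def fake_glopen(path, mode='r', endpoint=None):
    # hypothetical stand-in for glopen: report the endpoint, then fall back to local open()
    print(f"opening {path!r} (mode={mode}) via endpoint {endpoint!r}")
    return open(path, mode)

opener = partial(fake_glopen, endpoint="maxhutch#alpha-admin/pub/")   # endpoint from the config above

with opener("example.json", mode="w") as f:    # called like a normal open()
    f.write("{}")
```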
2,446
Given the following text description, write Python code to implement the functionality described below step by step Description: Random Walks In many situations, it is very useful to think of some sort of process that you wish to model as a succession of random steps. This can describe a wide variety of phenomena - the behavior of the stock market, models of population dynamics in ecosystems, the properties of polymers, the movement of molecules in liquids or gases, modeling neurons in the brain, or in building Google's PageRank search model. This type of modeling is known as a "random walk", and while the process being modeled can vary tremendously, the underlying process is simple. In this exercise, we are going to model such a random walk and learn about some of its behaviors! Learning goals Step1: Part 2 Step2: Part 3
Python Code: # put your code for Part 1 here. Add extra cells as necessary! Explanation: Random Walks In many situations, it is very useful to think of some sort of process that you wish to model as a succession of random steps. This can describe a wide variety of phenomena - the behavior of the stock market, models of population dynamics in ecosystems, the properties of polymers, the movement of molecules in liquids or gases, modeling neurons in the brain, or in building Google's PageRank search model. This type of modeling is known as a "random walk", and while the process being modeled can vary tremendously, the underlying process is simple. In this exercise, we are going to model such a random walk and learn about some of its behaviors! Learning goals: Model a random walk Learn about the behavior of random walks in one and two dimensions Plot both the distribution of random walks and the outcome of a single random walk Group members Put the name of your group members here! Part 1: One-dimensional random walk. Imagine that you draw a line on the floor, with a mark every foot. You start at the middle of the line (the point you have decided is the "origin"). You then flip a "fair" coin N times ("fair" means that it has equal chances of coming up heads and tails). Every time the coin comes up heads, you take one step to the right. Every time it comes up tails, you take a step to the left. Questions: After $N_{flip}$ coin flips and steps, how far are you from the origin? If you repeat this experiment $N_{trial}$ times, what will the distribution of distances from the origin be, and what is the mean distance that you go from the origin? (Note: "distance" means the absolute value of distance from the origin!) First: as a group, come up with a solution to this problem on your whiteboard. Use a flow chart, pseudo-code, diagrams, or anything else that you need to get started. Check with an instructor before you continue! Then: In pairs, write a code in the space provided below to answer these questions! End of explanation # put your code for Part 2 here. Add extra cells as necessary! Explanation: Part 2: Two-dimensional walk Now, we're going to do the same thing, but in two dimensions, x and y. This time, you will start at the origin, pick a random direction, and take a step one foot in that direction. You will then randomly pick a new direction, take a step, and so on, for a total of $N_{step}$ steps. Questions: After $N_{step}$ random steps, how far are you from the origin? If you repeat this experiment $N_{trial}$ times, what will the distribution of distances from the origin be, and what is the mean distance that you go from the origin? (Note: "distance" means the absolute value of distance from the origin!) Does the mean value differ from Part 1? For one trial, plot out the steps taken in the x-y plane. Does it look random? First: As before, come up with a solution to this problem on your whiteboard as a group. Check with an instructor before you continue! Then: In pairs, write a code in the space provided below to answer these questions! End of explanation # put your code for Part 3 here. Add extra cells as necessary! Explanation: Part 3: A different kind of random walk. If you have time, copy and paste your 1D random walk code in the cell below. 
This time, modify your code so that the coin toss is biased - that is, you are more likely to take a step in one direction than in the other (i.e., the probability of stepping to the right is $p_{step}$, the probability of stepping to the left is $1-p_{step}$, and $p_{step} \neq 0.5$). How does the distribution of distances traveled, as well as the mean distance from the origin, change as $p_{step}$ varies from 0.5? End of explanation
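One possible NumPy sketch of the biased one-dimensional walk described in Part 3 (setting p_step = 0.5 recovers the fair coin of Part 1; the particular N_flip, N_trial, and p_step values are arbitrary choices):
```python
import numpy as np
import matplotlib.pyplot as plt

n_flip, n_trial, p_step = 100, 2000, 0.6
rng = np.random.default_rng(42)

steps = rng.choice([1, -1], size=(n_trial, n_flip), p=[p_step, 1 - p_step])
final_positions = steps.sum(axis=1)        # signed displacement after n_flip steps
distances = np.abs(final_positions)        # distance from the origin

print("mean distance from origin:", distances.mean())

plt.hist(distances, bins=30)
plt.xlabel("distance from origin after {} flips".format(n_flip))
plt.ylabel("count")
plt.show()
```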
2,447
Given the following text description, write Python code to implement the functionality described below step by step Description: Transfer Learning Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture. <img src="assets/cnnarchitecture.jpg" width=700px> VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from the CS231n course notes. Pretrained VGGNet We'll be using a pretrained network from https Step1: Flower power Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial. Step2: ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code) Step3: Below I'm running images through the VGG network in batches. Exercise Step4: Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. Step5: Data prep As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! Exercise Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn. You can create the splitter like so Step7: If you did it right, you should see these sizes for the training sets Step9: Batches! Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. Step10: Training Here, we'll train the network. 
Exercise Step11: Testing Below you see the test accuracy. You can also see the predictions returned for images. Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
Python Code: from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm vgg_dir = 'tensorflow_vgg/' # Make sure vgg exists if not isdir(vgg_dir): raise Exception("VGG directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(vgg_dir + "vgg16.npy"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar: urlretrieve( 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy', vgg_dir + 'vgg16.npy', pbar.hook) else: print("Parameter file already exists!") Explanation: Transfer Learning Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture. <img src="assets/cnnarchitecture.jpg" width=700px> VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from the CS231n course notes. Pretrained VGGNet We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash. git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell. End of explanation import tarfile dataset_folder_path = 'flower_photos' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('flower_photos.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar: urlretrieve( 'http://download.tensorflow.org/example_images/flower_photos.tgz', 'flower_photos.tar.gz', pbar.hook) if not isdir(dataset_folder_path): with tarfile.open('flower_photos.tar.gz') as tar: tar.extractall() tar.close() Explanation: Flower power Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial. 
End of explanation import os import numpy as np import tensorflow as tf from tensorflow_vgg import vgg16 from tensorflow_vgg import utils data_dir = 'flower_photos/' contents = os.listdir(data_dir) classes = [each for each in contents if os.path.isdir(data_dir + each)] Explanation: ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code): ``` self.conv1_1 = self.conv_layer(bgr, "conv1_1") self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") self.pool1 = self.max_pool(self.conv1_2, 'pool1') self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") self.pool2 = self.max_pool(self.conv2_2, 'pool2') self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") self.pool3 = self.max_pool(self.conv3_3, 'pool3') self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") self.pool4 = self.max_pool(self.conv4_3, 'pool4') self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") self.pool5 = self.max_pool(self.conv5_3, 'pool5') self.fc6 = self.fc_layer(self.pool5, "fc6") self.relu6 = tf.nn.relu(self.fc6) ``` So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) This creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer, feed_dict = {input_: images} codes = sess.run(vgg.relu6, feed_dict=feed_dict) End of explanation # Set the batch size higher if you can fit in in your GPU memory batch_size = 10 codes_list = [] labels = [] batch = [] codes = None with tf.Session() as sess: # TODO: Build the vgg network here vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) for each in classes: print("Starting {} images".format(each)) class_path = data_dir + each files = os.listdir(class_path) for ii, file in enumerate(files, 1): # Add images to the current batch # utils.load_image crops the input images for us, from the center img = utils.load_image(os.path.join(class_path, file)) batch.append(img.reshape((1, 224, 224, 3))) labels.append(each) # Running the batch through the network to get the codes if ii % batch_size == 0 or ii == len(files): # Image batch to pass to VGG network images = np.concatenate(batch) # TODO: Get the values from the relu6 layer of the VGG network feed_dict = {input_: images} codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict) # Here I'm building an array of the codes if codes is None: codes = codes_batch else: codes = np.concatenate((codes, codes_batch)) # Reset to start building the next batch batch = [] print('{} images processed'.format(ii)) # write codes to file with open('codes', 'w') as f: codes.tofile(f) # write labels to file import csv with open('labels', 'w') as f: writer = csv.writer(f, delimiter='\n') writer.writerow(labels) Explanation: Below I'm running images through the VGG network in batches. Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values). End of explanation # read codes and labels from file import csv with open('labels') as f: reader = csv.reader(f, delimiter='\n') labels = np.array([each for each in reader if len(each) > 0]).squeeze() with open('codes') as f: codes = np.fromfile(f, dtype=np.float32) codes = codes.reshape((len(labels), -1)) Explanation: Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. End of explanation from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() labels_vecs = lb.fit_transform(labels) # Your one-hot encoded labels array here Explanation: Data prep As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels. End of explanation from sklearn.model_selection import StratifiedShuffleSplit ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) splitter = ss.split(codes, labels_vecs) train_index, test_index = list(splitter)[0] train_x, train_y = codes[train_index], labels_vecs[train_index] val_index = test_index[:len(test_index)//2] test_index = test_index[len(test_index)//2:] val_x, val_y = codes[val_index], labels_vecs[val_index] test_x, test_y = codes[test_index], labels_vecs[test_index] print("Train shapes (x, y):", train_x.shape, train_y.shape) print("Validation shapes (x, y):", val_x.shape, val_y.shape) print("Test shapes (x, y):", test_x.shape, test_y.shape) Explanation: Now you'll want to create your training, validation, and test sets. 
An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn. You can create the splitter like so: ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) Then split the data with splitter = ss.split(x, y) ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide. Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. End of explanation tf.contrib.layers.fully_connected? inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]]) labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]]) # TODO: Classifier layers and operations fc = tf.contrib.layers.fully_connected(inputs_, 256) logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None) # output layer logits cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits) cost = tf.reduce_mean(cross_entropy) # cross entropy loss optimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer # Operations for validation/test accuracy predicted = tf.nn.softmax(logits) correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) Explanation: If you did it right, you should see these sizes for the training sets: Train shapes (x, y): (2936, 4096) (2936, 5) Validation shapes (x, y): (367, 4096) (367, 5) Test shapes (x, y): (367, 4096) (367, 5) Classifier layers Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network. Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost. End of explanation def get_batches(x, y, n_batches=10): Return a generator that yields batches from arrays x and y. batch_size = len(x)//n_batches for ii in range(0, n_batches*batch_size, batch_size): # If we're not on the last batch, grab data with size batch_size if ii != (n_batches-1)*batch_size: X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] # On the last batch, grab the rest of the data else: X, Y = x[ii:], y[ii:] # I love generators yield X, Y Explanation: Batches! Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. 
End of explanation epochs = 10 saver = tf.train.Saver() with tf.Session() as sess: # TODO: Your training code here sess.run(tf.global_variables_initializer()) iteration = 1 for epoch in range(epochs): for ii, (x, y) in enumerate(get_batches(train_x, train_y), 1): feed = { inputs_: x, labels_: y } loss, _ = sess.run([cost, optimizer], feed_dict=feed) # print(loss) if iteration%5==0: feed = { inputs_: val_x, labels_: val_y } acc = sess.run(accuracy, feed_dict=feed) print( "Epoch: {}/{}".format(epoch+1, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss), "Val acc: {:.4f}".format(acc)) iteration += 1 saver.save(sess, "checkpoints/flowers.ckpt") Explanation: Training Here, we'll train the network. Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own! End of explanation with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: test_x, labels_: test_y} test_acc = sess.run(accuracy, feed_dict=feed) print("Test accuracy: {:.4f}".format(test_acc)) %matplotlib inline import matplotlib.pyplot as plt from scipy.ndimage import imread Explanation: Testing Below you see the test accuracy. You can also see the predictions returned for images. End of explanation test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg' test_img = imread(test_img_path) plt.imshow(test_img) # Run this cell if you don't have a vgg graph built if 'vgg' in globals(): print('"vgg" object already exists. Will not create again.') else: #create vgg with tf.Session() as sess: input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) vgg = vgg16.Vgg16() vgg.build(input_) with tf.Session() as sess: img = utils.load_image(test_img_path) img = img.reshape((1, 224, 224, 3)) feed_dict = {input_: img} code = sess.run(vgg.relu6, feed_dict=feed_dict) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: code} prediction = sess.run(predicted, feed_dict=feed).squeeze() plt.imshow(test_img) plt.barh(np.arange(5), prediction) _ = plt.yticks(np.arange(5), lb.classes_) Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. End of explanation
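The classifier head defined earlier in this section relies on the TF1 tf.contrib API, which no longer exists in TensorFlow 2; a rough tf.keras equivalent of the same two-layer head (256-unit ReLU hidden layer, 5-way output trained with softmax cross entropy) is sketched below for comparison only, not as the notebook's original code.
```python
import tensorflow as tf

n_codes, n_classes = 4096, 5
clf = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_codes,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_classes),              # logits, like the contrib version
])
clf.compile(optimizer="adam",
            loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"])
# clf.fit(train_x, train_y, validation_data=(val_x, val_y), epochs=10)
```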
2,448
Given the following text description, write Python code to implement the functionality described below step by step Description: Data validation using TFX Pipeline and TensorFlow Data Validation Learning Objectives Understand the data types, distributions, and other information (e.g., mean value, or number of uniques) about each feature. Generate a preliminary schema that describes the data. Identify anomalies and missing values in the data with respect to given schema. Introduction In this notebook, we will create and run TFX pipelines to validate input data and create an ML model. This notebook is based on the TFX pipeline we built in Simple TFX Pipeline Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook. In this notebook, we will create two TFX pipelines. First, we will create a pipeline to analyze the dataset and generate a preliminary schema of the given dataset. This pipeline will include two new components, StatisticsGen and SchemaGen. Once we have a proper schema of the data, we will create a pipeline to train an ML classification model based on the pipeline from the previous tutorial. In this pipeline, we will use the schema from the first pipeline and a new component, ExampleValidator, to validate the input data. The three new components, StatisticsGen, SchemaGen and ExampleValidator, are TFX components for data analysis and validation, and they are implemented using the TensorFlow Data Validation library. Please see Understanding TFX Pipelines to learn more about various concepts in TFX. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Install TFX Step1: Restart the kernel (Kernel > Restart kernel > Restart). Please ignore any incompatibility warnings and errors. Check the TensorFlow and TFX versions. Step2: Set up variables There are some variables used to define a pipeline. You can customize these variables as you want. By default all output from the pipeline will be generated under the current directory. Step3: Prepare example data We will download the example dataset for use in our TFX pipeline. The dataset we are using is Palmer Penguins dataset which is also used in other TFX examples. There are four numeric features in this dataset Step4: Take a quick look at the CSV file. Step6: You should be able to see five feature columns. species is one of 0, 1 or 2, and all other features should have values between 0 and 1. We will create a TFX pipeline to analyze this dataset. Generate a preliminary schema TFX pipelines are defined using Python APIs. We will create a pipeline to generate a schema from the input examples automatically. This schema can be reviewed by a human and adjusted as needed. Once the schema is finalized it can be used for training and example validation in later tasks. In addition to CsvExampleGen which is used in Simple TFX Pipeline Tutorial, we will use StatisticsGen and SchemaGen Step7: Run the pipeline We will use LocalDagRunner as in the previous tutorial. Step10: You should see INFO Step11: Now we can examine the outputs from the pipeline execution. Step12: It is time to examine the outputs from each component. As described above, Tensorflow Data Validation(TFDV) is used in StatisticsGen and SchemaGen, and TFDV also provides visualization of the outputs from these components. 
In this tutorial, we will use the visualization helper methods in TFX which use TFDV internally to show the visualization. Examine the output from StatisticsGen Step13: <!-- <img class="tfo-display-only-on-site" src="images/penguin_tfdv/penguin_tfdv_statistics.png"/> --> You can see various stats for the input data. These statistics are supplied to SchemaGen to construct an initial schema of data automatically. Examine the output from SchemaGen Step14: This schema is automatically inferred from the output of StatisticsGen. You should be able to see 4 FLOAT features and 1 INT feature. Export the schema for future use We need to review and refine the generated schema. The reviewed schema needs to be persisted to be used in subsequent pipelines for ML model training. In other words, you might want to add the schema file to your version control system for actual use cases. In this tutorial, we will just copy the schema to a predefined filesystem path for simplicity. Step15: The schema file uses Protocol Buffer text format and an instance of TensorFlow Metadata Schema proto. Step19: You should be sure to review and possibly edit the schema definition as needed. In this tutorial, we will just use the generated schema unchanged. Validate input examples and train an ML model We will go back to the pipeline that we created in Simple TFX Pipeline Tutorial, to train an ML model and use the generated schema for writing the model training code. We will also add an ExampleValidator component which will look for anomalies and missing values in the incoming dataset with respect to the schema. Write model training code We need to write the model code as we did in Simple TFX Pipeline Tutorial. The model itself is the same as in the previous tutorial, but this time we will use the schema generated from the previous pipeline instead of specifying features manually. Most of the code was not changed. The only difference is that we do not need to specify the names and types of features in this file. Instead, we read them from the schema file. Step21: Now you have completed all preparation steps to build a TFX pipeline for model training. Write a pipeline definition We will add two new components, Importer and ExampleValidator. Importer brings an external file into the TFX pipeline. In this case, it is a file containing schema definition. ExampleValidator will examine the input data and validate whether all input data conforms the data schema we provided. Step22: Run the pipeline Step23: You should see INFO Step24: ExampleAnomalies from the ExampleValidator can be visualized as well.
Python Code: # Install the TensorFlow Extended library !pip install -U tfx Explanation: Data validation using TFX Pipeline and TensorFlow Data Validation Learning Objectives Understand the data types, distributions, and other information (e.g., mean value, or number of uniques) about each feature. Generate a preliminary schema that describes the data. Identify anomalies and missing values in the data with respect to given schema. Introduction In this notebook, we will create and run TFX pipelines to validate input data and create an ML model. This notebook is based on the TFX pipeline we built in Simple TFX Pipeline Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook. In this notebook, we will create two TFX pipelines. First, we will create a pipeline to analyze the dataset and generate a preliminary schema of the given dataset. This pipeline will include two new components, StatisticsGen and SchemaGen. Once we have a proper schema of the data, we will create a pipeline to train an ML classification model based on the pipeline from the previous tutorial. In this pipeline, we will use the schema from the first pipeline and a new component, ExampleValidator, to validate the input data. The three new components, StatisticsGen, SchemaGen and ExampleValidator, are TFX components for data analysis and validation, and they are implemented using the TensorFlow Data Validation library. Please see Understanding TFX Pipelines to learn more about various concepts in TFX. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Install TFX End of explanation # Load necessary libraries import tensorflow as tf print('TensorFlow version: {}'.format(tf.__version__)) from tfx import v1 as tfx print('TFX version: {}'.format(tfx.__version__)) Explanation: Restart the kernel (Kernel > Restart kernel > Restart). Please ignore any incompatibility warnings and errors. Check the TensorFlow and TFX versions. End of explanation import os # We will create two pipelines. One for schema generation and one for training. SCHEMA_PIPELINE_NAME = "penguin-tfdv-schema" PIPELINE_NAME = "penguin-tfdv" # Output directory to store artifacts generated from the pipeline. SCHEMA_PIPELINE_ROOT = os.path.join('pipelines', SCHEMA_PIPELINE_NAME) PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME) # Path to a SQLite DB file to use as an MLMD storage. SCHEMA_METADATA_PATH = os.path.join('metadata', SCHEMA_PIPELINE_NAME, 'metadata.db') METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db') # Output directory where created models from the pipeline will be exported. SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME) from absl import logging logging.set_verbosity(logging.INFO) # Set default logging level. Explanation: Set up variables There are some variables used to define a pipeline. You can customize these variables as you want. By default all output from the pipeline will be generated under the current directory. End of explanation import urllib.request import tempfile # TODO DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory. 
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv' _data_filepath = os.path.join(DATA_ROOT, "data.csv") urllib.request.urlretrieve(_data_url, _data_filepath) Explanation: Prepare example data We will download the example dataset for use in our TFX pipeline. The dataset we are using is Palmer Penguins dataset which is also used in other TFX examples. There are four numeric features in this dataset: culmen_length_mm culmen_depth_mm flipper_length_mm body_mass_g All features were already normalized to have range [0,1]. We will build a classification model which predicts the species of penguins. Because the TFX ExampleGen component reads inputs from a directory, we need to create a directory and copy the dataset to it. End of explanation # Print the first ten lines of the file !head {_data_filepath} Explanation: Take a quick look at the CSV file. End of explanation def _create_schema_pipeline(pipeline_name: str, pipeline_root: str, data_root: str, metadata_path: str) -> tfx.dsl.Pipeline: Creates a pipeline for schema generation. # Brings data into the pipeline. example_gen = tfx.components.CsvExampleGen(input_base=data_root) # TODO # NEW: Computes statistics over data for visualization and schema generation. statistics_gen = tfx.components.StatisticsGen( examples=example_gen.outputs['examples']) # TODO # NEW: Generates schema based on the generated statistics. schema_gen = tfx.components.SchemaGen( statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True) components = [ example_gen, statistics_gen, schema_gen, ] return tfx.dsl.Pipeline( pipeline_name=pipeline_name, pipeline_root=pipeline_root, metadata_connection_config=tfx.orchestration.metadata .sqlite_metadata_connection_config(metadata_path), components=components) Explanation: You should be able to see five feature columns. species is one of 0, 1 or 2, and all other features should have values between 0 and 1. We will create a TFX pipeline to analyze this dataset. Generate a preliminary schema TFX pipelines are defined using Python APIs. We will create a pipeline to generate a schema from the input examples automatically. This schema can be reviewed by a human and adjusted as needed. Once the schema is finalized it can be used for training and example validation in later tasks. In addition to CsvExampleGen which is used in Simple TFX Pipeline Tutorial, we will use StatisticsGen and SchemaGen: StatisticsGen calculates statistics for the dataset. SchemaGen examines the statistics and creates an initial data schema. See the guides for each component or TFX components tutorial to learn more on these components. Write a pipeline definition We define a function to create a TFX pipeline. A Pipeline object represents a TFX pipeline which can be run using one of pipeline orchestration systems that TFX supports. End of explanation # run the pipeline using Local TFX DAG runner tfx.orchestration.LocalDagRunner().run( _create_schema_pipeline( pipeline_name=SCHEMA_PIPELINE_NAME, pipeline_root=SCHEMA_PIPELINE_ROOT, data_root=DATA_ROOT, metadata_path=SCHEMA_METADATA_PATH)) Explanation: Run the pipeline We will use LocalDagRunner as in the previous tutorial. End of explanation from ml_metadata.proto import metadata_store_pb2 # Non-public APIs, just for showcase. from tfx.orchestration.portable.mlmd import execution_lib def get_latest_artifacts(metadata, pipeline_name, component_id): Output artifacts of the latest run of the component. 
context = metadata.store.get_context_by_type_and_name( 'node', f'{pipeline_name}.{component_id}') executions = metadata.store.get_executions_by_context(context.id) latest_execution = max(executions, key=lambda e:e.last_update_time_since_epoch) return execution_lib.get_artifacts_dict(metadata, latest_execution.id, [metadata_store_pb2.Event.OUTPUT]) # Non-public APIs, just for showcase. from tfx.orchestration.experimental.interactive import visualizations def visualize_artifacts(artifacts): Visualizes artifacts using standard visualization modules. for artifact in artifacts: visualization = visualizations.get_registry().get_visualization( artifact.type_name) if visualization: visualization.display(artifact) from tfx.orchestration.experimental.interactive import standard_visualizations standard_visualizations.register_standard_visualizations() Explanation: You should see INFO:absl:Component SchemaGen is finished. if the pipeline finished successfully. We will examine the output of the pipeline to understand our dataset. Review outputs of the pipeline As explained in the previous tutorial, a TFX pipeline produces two kinds of outputs, artifacts and a metadata DB(MLMD) which contains metadata of artifacts and pipeline executions. We defined the location of these outputs in the above cells. By default, artifacts are stored under the pipelines directory and metadata is stored as a sqlite database under the metadata directory. You can use MLMD APIs to locate these outputs programatically. First, we will define some utility functions to search for the output artifacts that were just produced. End of explanation # Non-public APIs, just for showcase. from tfx.orchestration.metadata import Metadata from tfx.types import standard_component_specs metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config( SCHEMA_METADATA_PATH) with Metadata(metadata_connection_config) as metadata_handler: # Find output artifacts from MLMD. stat_gen_output = get_latest_artifacts(metadata_handler, SCHEMA_PIPELINE_NAME, 'StatisticsGen') stats_artifacts = stat_gen_output[standard_component_specs.STATISTICS_KEY] schema_gen_output = get_latest_artifacts(metadata_handler, SCHEMA_PIPELINE_NAME, 'SchemaGen') schema_artifacts = schema_gen_output[standard_component_specs.SCHEMA_KEY] Explanation: Now we can examine the outputs from the pipeline execution. End of explanation # docs-infra: no-execute visualize_artifacts(stats_artifacts) Explanation: It is time to examine the outputs from each component. As described above, Tensorflow Data Validation(TFDV) is used in StatisticsGen and SchemaGen, and TFDV also provides visualization of the outputs from these components. In this tutorial, we will use the visualization helper methods in TFX which use TFDV internally to show the visualization. Examine the output from StatisticsGen End of explanation visualize_artifacts(schema_artifacts) Explanation: <!-- <img class="tfo-display-only-on-site" src="images/penguin_tfdv/penguin_tfdv_statistics.png"/> --> You can see various stats for the input data. These statistics are supplied to SchemaGen to construct an initial schema of data automatically. Examine the output from SchemaGen End of explanation import shutil _schema_filename = 'schema.pbtxt' SCHEMA_PATH = 'schema' os.makedirs(SCHEMA_PATH, exist_ok=True) _generated_path = os.path.join(schema_artifacts[0].uri, _schema_filename) # Copy the 'schema.pbtxt' file from the artifact uri to a predefined path. 
shutil.copy(_generated_path, SCHEMA_PATH) Explanation: This schema is automatically inferred from the output of StatisticsGen. You should be able to see 4 FLOAT features and 1 INT feature. Export the schema for future use We need to review and refine the generated schema. The reviewed schema needs to be persisted to be used in subsequent pipelines for ML model training. In other words, you might want to add the schema file to your version control system for actual use cases. In this tutorial, we will just copy the schema to a predefined filesystem path for simplicity. End of explanation print(f'Schema at {SCHEMA_PATH}-----') !cat {SCHEMA_PATH}/* Explanation: The schema file uses Protocol Buffer text format and an instance of TensorFlow Metadata Schema proto. End of explanation _trainer_module_file = 'penguin_trainer.py' %%writefile {_trainer_module_file} from typing import List from absl import logging import tensorflow as tf from tensorflow import keras from tensorflow_transform.tf_metadata import schema_utils from tfx import v1 as tfx from tfx_bsl.public import tfxio from tensorflow_metadata.proto.v0 import schema_pb2 # We don't need to specify _FEATURE_KEYS and _FEATURE_SPEC any more. # Those information can be read from the given schema file. _LABEL_KEY = 'species' _TRAIN_BATCH_SIZE = 20 _EVAL_BATCH_SIZE = 10 def _input_fn(file_pattern: List[str], data_accessor: tfx.components.DataAccessor, schema: schema_pb2.Schema, batch_size: int = 200) -> tf.data.Dataset: Generates features and label for training. Args: file_pattern: List of paths or patterns of input tfrecord files. data_accessor: DataAccessor for converting input to RecordBatch. schema: schema of the input data. batch_size: representing the number of consecutive elements of returned dataset to combine in a single batch Returns: A dataset that contains (features, indices) tuple where features is a dictionary of Tensors, and indices is a single Tensor of label indices. return data_accessor.tf_dataset_factory( file_pattern, tfxio.TensorFlowDatasetOptions( batch_size=batch_size, label_key=_LABEL_KEY), schema=schema).repeat() def _build_keras_model(schema: schema_pb2.Schema) -> tf.keras.Model: Creates a DNN Keras model for classifying penguin data. Returns: A Keras Model. # The model below is built with Functional API, please refer to # https://www.tensorflow.org/guide/keras/overview for all API options. # ++ Changed code: Uses all features in the schema except the label. feature_keys = [f.name for f in schema.feature if f.name != _LABEL_KEY] inputs = [keras.layers.Input(shape=(1,), name=f) for f in feature_keys] # ++ End of the changed code. d = keras.layers.concatenate(inputs) for _ in range(2): d = keras.layers.Dense(8, activation='relu')(d) outputs = keras.layers.Dense(3)(d) model = keras.Model(inputs=inputs, outputs=outputs) model.compile( optimizer=keras.optimizers.Adam(1e-2), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()]) model.summary(print_fn=logging.info) return model # TFX Trainer will call this function. def run_fn(fn_args: tfx.components.FnArgs): Train the model based on given args. Args: fn_args: Holds args used to train the model as name/value pairs. # ++ Changed code: Reads in schema file passed to the Trainer component. schema = tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema_pb2.Schema()) # ++ End of the changed code. 
train_dataset = _input_fn( fn_args.train_files, fn_args.data_accessor, schema, batch_size=_TRAIN_BATCH_SIZE) eval_dataset = _input_fn( fn_args.eval_files, fn_args.data_accessor, schema, batch_size=_EVAL_BATCH_SIZE) model = _build_keras_model(schema) model.fit( train_dataset, steps_per_epoch=fn_args.train_steps, validation_data=eval_dataset, validation_steps=fn_args.eval_steps) # The result of the training should be saved in `fn_args.serving_model_dir` # directory. model.save(fn_args.serving_model_dir, save_format='tf') Explanation: You should be sure to review and possibly edit the schema definition as needed. In this tutorial, we will just use the generated schema unchanged. Validate input examples and train an ML model We will go back to the pipeline that we created in Simple TFX Pipeline Tutorial, to train an ML model and use the generated schema for writing the model training code. We will also add an ExampleValidator component which will look for anomalies and missing values in the incoming dataset with respect to the schema. Write model training code We need to write the model code as we did in Simple TFX Pipeline Tutorial. The model itself is the same as in the previous tutorial, but this time we will use the schema generated from the previous pipeline instead of specifying features manually. Most of the code was not changed. The only difference is that we do not need to specify the names and types of features in this file. Instead, we read them from the schema file. End of explanation def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str, schema_path: str, module_file: str, serving_model_dir: str, metadata_path: str) -> tfx.dsl.Pipeline: Creates a pipeline using predefined schema with TFX. # Brings data into the pipeline. example_gen = tfx.components.CsvExampleGen(input_base=data_root) # Computes statistics over data for visualization and example validation. statistics_gen = tfx.components.StatisticsGen( examples=example_gen.outputs['examples']) # NEW: Import the schema. schema_importer = tfx.dsl.Importer( source_uri=schema_path, artifact_type=tfx.types.standard_artifacts.Schema).with_id( 'schema_importer') # TODO # NEW: Performs anomaly detection based on statistics and data schema. example_validator = tfx.components.ExampleValidator( statistics=statistics_gen.outputs['statistics'], schema=schema_importer.outputs['result']) # Uses user-provided Python function that trains a model. trainer = tfx.components.Trainer( module_file=module_file, examples=example_gen.outputs['examples'], schema=schema_importer.outputs['result'], # Pass the imported schema. train_args=tfx.proto.TrainArgs(num_steps=100), eval_args=tfx.proto.EvalArgs(num_steps=5)) # Pushes the model to a filesystem destination. pusher = tfx.components.Pusher( model=trainer.outputs['model'], push_destination=tfx.proto.PushDestination( filesystem=tfx.proto.PushDestination.Filesystem( base_directory=serving_model_dir))) components = [ example_gen, # NEW: Following three components were added to the pipeline. statistics_gen, schema_importer, example_validator, trainer, pusher, ] return tfx.dsl.Pipeline( pipeline_name=pipeline_name, pipeline_root=pipeline_root, metadata_connection_config=tfx.orchestration.metadata .sqlite_metadata_connection_config(metadata_path), components=components) Explanation: Now you have completed all preparation steps to build a TFX pipeline for model training. Write a pipeline definition We will add two new components, Importer and ExampleValidator. 
Importer brings an external file into the TFX pipeline. In this case, it is a file containing the schema definition. ExampleValidator will examine the input data and validate whether all input data conforms to the data schema we provided. End of explanation tfx.orchestration.LocalDagRunner().run( _create_pipeline( pipeline_name=PIPELINE_NAME, pipeline_root=PIPELINE_ROOT, data_root=DATA_ROOT, schema_path=SCHEMA_PATH, module_file=_trainer_module_file, serving_model_dir=SERVING_MODEL_DIR, metadata_path=METADATA_PATH)) Explanation: Run the pipeline End of explanation metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config( METADATA_PATH) with Metadata(metadata_connection_config) as metadata_handler: ev_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME, 'ExampleValidator') anomalies_artifacts = ev_output[standard_component_specs.ANOMALIES_KEY] Explanation: You should see INFO:absl:Component Pusher is finished, if the pipeline finished successfully. Examine outputs of the pipeline We have trained the classification model for penguins, and we also have validated the input examples in the ExampleValidator component. We can analyze the output from ExampleValidator as we did with the previous pipeline. End of explanation visualize_artifacts(anomalies_artifacts) Explanation: ExampleAnomalies from the ExampleValidator can be visualized as well. End of explanation
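Beyond visual inspection, it can be useful to check the anomalies programmatically, for example to stop an automated run when incoming data no longer matches the curated schema. The snippet below is only an illustrative sketch, not part of the original pipeline: it assumes the ExampleValidator artifact directory contains a serialized Anomalies proto (the exact file name, such as SchemaDiff.pb or anomalies.pbtxt, varies between TFX versions), and the helper name load_anomalies is made up for this example.

```python
import glob
import os

from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import anomalies_pb2


def load_anomalies(artifact_uri):
    """Best-effort load of an Anomalies proto from an ExampleValidator artifact directory."""
    for path in glob.glob(os.path.join(artifact_uri, '*')):
        with open(path, 'rb') as f:
            payload = f.read()
        anomalies = anomalies_pb2.Anomalies()
        try:
            anomalies.ParseFromString(payload)  # binary-serialized proto layout
            return anomalies
        except Exception:
            pass
        try:
            text_format.Parse(payload.decode('utf-8'), anomalies)  # text-format proto layout
            return anomalies
        except Exception:
            continue
    return None


anomalies = load_anomalies(anomalies_artifacts[0].uri)
if anomalies is not None and anomalies.anomaly_info:
    print('Anomalies detected for features:', list(anomalies.anomaly_info.keys()))
else:
    print('No anomalies detected (or no readable anomalies file found).')
```

In a CI setting you might raise an exception instead of printing, so that a schema violation fails the job.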
2,449
Given the following text description, write Python code to implement the functionality described below step by step Description: E2E ML on GCP Step1: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. Step2: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note Step3: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas Step4: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. Step5: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step6: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. Step7: Only if your bucket doesn't already exist Step8: Finally, validate access to your Cloud Storage bucket by examining its contents Step9: Service Account If you don't know your service account, try to get your service account using gcloud command by executing the second cell below. Step10: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Step11: Import TensorFlow Import the TensorFlow package into your Python environment. Step12: Import TensorFlow Transform Import the TensorFlow Transform (TFT) package into your Python environment. Step13: Import TensorFlow Data Validation Import the TensorFlow Data Validation (TFDV) package into your Python environment. Step14: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. Step15: Set hardware accelerators You can set hardware accelerators for training and prediction. Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify Step16: Set pre-built containers Set the pre-built Docker container image for training and prediction. 
For the latest list, see Pre-built containers for training. For the latest list, see Pre-built containers for prediction. Step17: Set machine type Next, set the machine type to use for training and prediction. Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction. machine type n1-standard Step18: Retrieve the dataset from stage 1 Next, retrieve the dataset you created during stage 1 with the helper function find_dataset(). This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version. Step19: Load dataset's user metadata Load the user metadata for the dataset. Step20: Create and run training pipeline To train an AutoML model, you perform two steps Step21: Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters Step22: Create experiment for tracking training related metadata Setup tracking the parameters (configuration) and metrics (results) for each experiment Step23: Create a Vertex AI TensorBoard instance Create a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training. Learn more about Get started with Vertex AI TensorBoard. Step24: Create the input layer for your custom model Next, you create the input layer for your custom tabular model, based on the data types of each feature. Step25: Create the binary classifier custom model Next, you create your binary classifier custom tabular model. Step26: Visualize the model architecture Next, visualize the architecture of the custom model. Step27: Save model artifacts Next, save the model artifacts to your Cloud Storage bucket Step28: Upload the local model to a Vertex AI Model resource Next, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource. Step29: Construct the training package Package layout Before you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout. PKG-INFO README.md setup.cfg setup.py trainer __init__.py task.py other Python scripts The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image. The file trainer/task.py is the Python script for executing the custom training job. Step30: Get feature specification for the preprocessed data Next, create the feature specification for the preprocessed data. Step33: Load the transformed data into a tf.data.Dataset Next, you load the gzip TFRecords on Cloud Storage storage into a tf.data.Dataset generator. These functions are re-used when training the custom model using Vertex Training, so you save them to the python training package. Step34: Test the model architecture with transformed input Next, test the model architecture with a sample of the transformed training input. Note Step35: Develop and test the training scripts When experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training script Next, you write the Python script for compiling and training the model. 
Step36: Train the model locally Next, test the training package locally, by training with just a few epochs Step37: Evaluate the model locally Next, test the evaluation portion of the training package Step38: Retrieve model from Vertex AI Next, create the Python script to retrieve your experimental model from Vertex AI. Step39: Create the task script for the Python training package Next, you create the task.py script for driving the training package. Some noteable steps include Step40: Test training package locally Next, test your completed training package locally with just a few epochs. Step41: Warmup training Now that you have tested the training scripts, you perform warmup training on the base model. Warmup training is used to stabilize the weight initialization. By doing so, each subsequent training and tuning of the model architecture will start with the same stabilized weight initialization. Step42: Mirrored Strategy When training on a single VM, one can either train was a single compute device or with multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and type of compute devices Step43: Store training script on your Cloud Storage bucket Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket. Step44: Run the custom Python package training job Next, you run the custom job to start the training job by invoking the method run(). The parameters are the same as when running a CustomTrainingJob. Note Step45: Delete a custom training job After a training job is completed, you can delete the training job with the method delete(). Prior to completion, a training job can be canceled with the method cancel(). Step46: Delete the model The method 'delete()' will delete the model. Step47: Hyperparameter tuning Next, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for both hyperparameter tuning, as well as local testing and full cloud training Step48: Prepare your disk specification (optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. boot_disk_type Step49: Define worker pool specification for hyperparameter tuning job Next, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments. Step50: Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters Step51: Create a hyperparameter tuning job Use the class HyperparameterTuningJob to create a hyperparameter tuning job, with the following parameters Step52: Run the hyperparameter tuning job Use the run() method to execute the hyperparameter tuning job. Step53: Best trial Now look at which trial was the best Step54: Delete the hyperparameter tuning job The method 'delete()' will delete the hyperparameter tuning job. 
Step55: Save the best hyperparameter values Step56: Create and run custom training job To train a custom model, you perform two steps Step57: Run the custom Python package training job Next, you run the custom job to start the training job by invoking the method run(). The parameters are the same as when running a CustomTrainingJob. Note Step58: Delete a custom training job After a training job is completed, you can delete the training job with the method delete(). Prior to completion, a training job can be canceled with the method cancel(). Step59: Get the experiment results Next, you use the experiment name as a parameter to the method get_experiment_df() to get the results of the experiment as a pandas dataframe. Step60: Review the custom model evaluation results Next, you review the evaluation metrics builtin into the training package. Step61: Delete the TensorBoard instance Next, delete the TensorBoard instance. Step66: Add a serving function Next, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model. Step67: Construct the serving model Now construct the serving model and store the serving model to your Cloud Storage bucket. Step68: Test the serving model locally with tf.Example data Next, test the layer interface in the serving model for tf.Example data. Step69: Test the serving model locally with JSONL data Next, test the layer interface in the serving model for JSONL data. Step70: Upload the serving model to a Vertex AI Model resource Next, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource. Step71: Evaluate the serving model Next, evaluate the serving model with the evaluation (test) slices. For apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics maybe custom, we recommend Step72: Perform custom evaluation metrics After the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, we just display the results of the batch prediction. Step73: Wait for completion of AutoML training job Next, wait for the AutoML training job to complete. Alternatively, one can set the parameter sync to True in the run() method to block until the AutoML training job is completed. Step74: Review model evaluation scores After your model training has finished, you can review the evaluation scores for it using the list_model_evaluations() method. This method will return an iterator for each evaluation slice. Step75: Compare metric results with AutoML baseline Finally, you make a decision if the current experiment produces a custom model that is better than the AutoML baseline, as follows Step76: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial
Python Code: import os # The Vertex AI Workbench Notebook product has specific requirements IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists( "/opt/deeplearning/metadata/env_version" ) # Vertex AI Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_WORKBENCH_NOTEBOOK: USER_FLAG = "--user" ONCE_ONLY = False if ONCE_ONLY: ! pip3 install -U tensorflow==2.5 $USER_FLAG ! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG ! pip3 install -U tensorflow-transform==1.2 $USER_FLAG ! pip3 install -U tensorflow-io==0.18 $USER_FLAG ! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG ! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG ! pip3 install --upgrade google-cloud-bigquery $USER_FLAG ! pip3 install --upgrade google-cloud-logging $USER_FLAG ! pip3 install --upgrade apache-beam[gcp] $USER_FLAG ! pip3 install --upgrade pyarrow $USER_FLAG ! pip3 install --upgrade cloudml-hypertune $USER_FLAG ! pip3 install --upgrade kfp $USER_FLAG ! pip3 install --upgrade torchvision $USER_FLAG ! pip3 install --upgrade rpy2 $USER_FLAG Explanation: E2E ML on GCP: MLOps stage 2 : experimentation <table align="left"> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/mlops_experimentation.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/mlops_experimentation.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage2/mlops_experimentation.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation. Dataset The dataset used for this tutorial is the Chicago Taxi. The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. Objective In this tutorial, you create a MLOps stage 2: experimentation process. This tutorial uses the following Vertex AI: Vertex AI Datasets Vertex AI Models Vertex AI AutoML Vertex AI Training Vertex AI TensorBoard Vertex AI Vizier Vertex AI Batch Prediction The steps performed include: Review the Dataset resource created during stage 1. Train an AutoML tabular binary classifier model in the background. Build the experimental model architecture. Construct a custom training package for the Dataset resource. Test the custom training package locally. Test the custom training package in the cloud with Vertex AI Training. Hyperparameter tune the model training with Vertex AI Vizier. Train the custom model with Vertex AI Training. Add a serving function for online/batch prediction to the custom model. Test the custom model with the serving function. 
Evaluate the custom model using Vertex AI Batch Prediction Wait for the AutoML training job to complete. Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model. Set the evaluation results of the AutoML model as the baseline. If the evaluation of the custom model is below baseline, continue to experiment with the custom model. If the evaluation of the custom model is above baseline, save the model as the first best model. Recommendations When doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: Determine a baseline evaluation using AutoML. Design and build a model architecture. Upload the untrained model architecture as a Vertex AI Model resource. Construct a training package that can be ran locally and as a Vertex AI Training job. Decompose the training package into: data, model, train and task Python modules. Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. Obtain the location of the model artifacts from the Vertex AI Model resource. Include in the training package initializing a Vertex AI Experiment and corresponding run. Log hyperparameters and training parameters for the experiment. Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. Test the training package locally with a small number of epochs. Test the training package with Vertex AI Training. Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. Do full training of the custom model with Vertex AI Training. Log the hyperparameter values for the experiment/run. Evaluate the custom model. Single evaluation slice, same metrics as AutoML Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training Custom evaluation slices, custom metrics Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both AutoML and custom model Perform custom metrics on the results from the batch job Compare custom model metrics against the AutoML baseline If less than baseline, then continue to experiment If greater then baseline, then upload model as the new baseline and save evaluation results with the model. Installations Install one time the packages for executing the MLOps notebooks. End of explanation import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID Explanation: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API. If you are running this notebook locally, you will need to install the Cloud SDK. 
Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation REGION = "[your-region]" # @param {type:"string"} if REGION == "[your-region]": REGION = "us-central1" Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions. End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Vertex AI Workbench, then don't execute this code IS_COLAB = False if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv( "DL_ANACONDA_HOME" ): if "google.colab" in sys.modules: IS_COLAB = True from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. 
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation ! gsutil mb -l $REGION $BUCKET_NAME Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! gsutil ls -al $BUCKET_NAME Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"} if ( SERVICE_ACCOUNT == "" or SERVICE_ACCOUNT is None or SERVICE_ACCOUNT == "[your-service-account]" ): # Get your service account from gcloud if not IS_COLAB: shell_output = !gcloud auth list 2>/dev/null SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip() if IS_COLAB: shell_output = ! gcloud projects describe $PROJECT_ID project_number = shell_output[-1].split(":")[1].strip().replace("'", "") SERVICE_ACCOUNT = f"{project_number}[email protected]" print("Service Account:", SERVICE_ACCOUNT) Explanation: Service Account If you don't know your service account, try to get your service account using gcloud command by executing the second cell below. End of explanation import google.cloud.aiplatform as aip Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation import tensorflow as tf Explanation: Import TensorFlow Import the TensorFlow package into your Python environment. End of explanation import tensorflow_transform as tft Explanation: Import TensorFlow Transform Import the TensorFlow Transform (TFT) package into your Python environment. End of explanation import tensorflow_data_validation as tfdv Explanation: Import TensorFlow Data Validation Import the TensorFlow Data Validation (TFDV) package into your Python environment. End of explanation aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME) Explanation: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. End of explanation import os if os.getenv("IS_TESTING_TRAIN_GPU"): TRAIN_GPU, TRAIN_NGPU = ( aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_TRAIN_GPU")), ) else: TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4) if os.getenv("IS_TESTING_DEPLOY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = ( aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPLOY_GPU")), ) else: DEPLOY_GPU, DEPLOY_NGPU = (None, None) Explanation: Set hardware accelerators You can set hardware accelerators for training and prediction. Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) Otherwise specify (None, None) to use a container image to run on a CPU. Learn more about hardware accelerator support for your region. Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. 
This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support. End of explanation if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2.5".replace(".", "-") if TF[0] == "2": if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf2-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf2-cpu.{}".format(TF) else: if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf-cpu.{}".format(TF) TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format( REGION.split("-")[0], TRAIN_VERSION ) DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format( REGION.split("-")[0], DEPLOY_VERSION ) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) Explanation: Set pre-built containers Set the pre-built Docker container image for training and prediction. For the latest list, see Pre-built containers for training. For the latest list, see Pre-built containers for prediction. End of explanation if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Deploy machine type", DEPLOY_COMPUTE) Explanation: Set machine type Next, set the machine type to use for training and prediction. Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction. machine type n1-standard: 3.75GB of memory per vCPU. n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: The following is not supported for training: standard: 2 vCPUs highcpu: 2, 4 and 8 vCPUs Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. End of explanation def find_dataset(display_name_prefix, import_format): matches = [] datasets = aip.TabularDataset.list() for dataset in datasets: if dataset.display_name.startswith(display_name_prefix): try: if ( "bq" == import_format and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"] ): matches.append(dataset) if ( "csv" == import_format and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"] ): matches.append(dataset) except: pass create_time = None for match in matches: if create_time is None or match.create_time > create_time: create_time = match.create_time dataset = match return dataset dataset = find_dataset("Chicago Taxi", "bq") print(dataset) Explanation: Retrieve the dataset from stage 1 Next, retrieve the dataset you created during stage 1 with the helper function find_dataset(). This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version. 
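As a side note (not part of the tutorial), recent versions of the Vertex AI SDK can push much of this filtering and ordering to the service itself. The sketch below assumes the list() method accepts filter and order_by arguments and that the display name is an exact match; verify both against the SDK version you have installed before relying on it.

```python
# Alternative sketch: let the Vertex AI service filter and order the datasets.
# Prefix matching (as in find_dataset) would still require client-side logic.
candidates = aip.TabularDataset.list(
    filter='display_name="Chicago Taxi"',
    order_by="create_time desc",
)
latest_dataset = candidates[0] if candidates else None
print(latest_dataset)
```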
End of explanation import json try: with tf.io.gfile.GFile( "gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r" ) as f: metadata = json.load(f) print(metadata) except: print("no metadata") Explanation: Load dataset's user metadata Load the user metadata for the dataset. End of explanation dag = aip.AutoMLTabularTrainingJob( display_name="chicago_" + TIMESTAMP, optimization_prediction_type="classification", optimization_objective="minimize-log-loss", ) print(dag) Explanation: Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipeline An AutoML training pipeline is created with the AutoMLTabularTrainingJob class, with the following parameters: display_name: The human readable name for the TrainingJob resource. optimization_prediction_type: The type task to train the model for. classification: A tabuar classification model. regression: A tabular regression model. column_transformations: (Optional): Transformations to apply to the input columns optimization_objective: The optimization objective to minimize or maximize. binary classification: minimize-log-loss maximize-au-roc maximize-au-prc maximize-precision-at-recall maximize-recall-at-precision multi-class classification: minimize-log-loss regression: minimize-rmse minimize-mae minimize-rmsle The instantiated object is the DAG (directed acyclic graph) for the training pipeline. End of explanation async_model = dag.run( dataset=dataset, model_display_name="chicago_" + TIMESTAMP, training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, budget_milli_node_hours=8000, disable_early_stopping=False, target_column="tip_bin", sync=False, ) Explanation: Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for training. test_fraction_split: The percentage of the dataset to use for test (holdout data). validation_fraction_split: The percentage of the dataset to use for validation. target_column: The name of the column to train as the label. budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour). disable_early_stopping: If True, training maybe completed before using the entire budget if the service believes it cannot further improve on the model objective measurements. The run method when completed returns the Model resource. The execution of the training pipeline will take upto 180 minutes. End of explanation EXPERIMENT_NAME = "chicago-" + TIMESTAMP aip.init(experiment=EXPERIMENT_NAME) aip.start_run("run-1") Explanation: Create experiment for tracking training related metadata Setup tracking the parameters (configuration) and metrics (results) for each experiment: aip.init() - Create an experiment instance aip.start_run() - Track a specific run within the experiment. Learn more about Introduction to Vertex AI ML Metadata. 
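To make the experiment bookkeeping concrete, here is a minimal sketch of the logging pattern used throughout the rest of this tutorial; the parameter and metric values shown are placeholders, not real results.

```python
# Minimal sketch of the experiment logging pattern used later in this tutorial.
aip.log_params({"learning_rate": 0.01, "batch_size": 64})  # run configuration
aip.log_metrics({"accuracy": 0.80, "loss": 0.45})          # run results (placeholder values)

# Compare all runs of the experiment as a pandas dataframe.
experiment_df = aip.get_experiment_df(EXPERIMENT_NAME)
print(experiment_df.head())
```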
End of explanation TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME) tensorboard_resource_name = tensorboard.gca_resource.name print("TensorBoard resource name:", tensorboard_resource_name) Explanation: Create a Vertex AI TensorBoard instance Create a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training. Learn more about Get started with Vertex AI TensorBoard. End of explanation from tensorflow.keras.layers import Input def create_model_inputs( numeric_features=None, categorical_features=None, embedding_features=None ): inputs = {} for feature_name in numeric_features: inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32) for feature_name in categorical_features: inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64) for feature_name in embedding_features: inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64) return inputs input_layers = create_model_inputs( numeric_features=metadata["numeric_features"], categorical_features=metadata["categorical_features"], embedding_features=metadata["embedding_features"], ) print(input_layers) Explanation: Create the input layer for your custom model Next, you create the input layer for your custom tabular model, based on the data types of each feature. End of explanation from math import sqrt from tensorflow.keras import Model, Sequential from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding, experimental) def create_binary_classifier( input_layers, tft_output, metaparams, numeric_features, categorical_features, embedding_features, ): layers = [] for feature_name in input_layers: if feature_name in embedding_features: vocab_size = tft_output.vocabulary_size_by_name(feature_name) embedding_size = int(sqrt(vocab_size)) embedding_output = Embedding( input_dim=vocab_size + 1, output_dim=embedding_size, name=f"{feature_name}_embedding", )(input_layers[feature_name]) layers.append(embedding_output) elif feature_name in categorical_features: vocab_size = tft_output.vocabulary_size_by_name(feature_name) onehot_layer = experimental.preprocessing.CategoryEncoding( num_tokens=vocab_size, output_mode="binary", name=f"{feature_name}_onehot", )(input_layers[feature_name]) layers.append(onehot_layer) elif feature_name in numeric_features: numeric_layer = tf.expand_dims(input_layers[feature_name], -1) layers.append(numeric_layer) else: pass joined = Concatenate(name="combines_inputs")(layers) feedforward_output = Sequential( [Dense(units, activation="relu") for units in metaparams["hidden_units"]], name="feedforward_network", )(joined) logits = Dense(units=1, name="logits")(feedforward_output) pred = Activation("sigmoid")(logits) model = Model(inputs=input_layers, outputs=[pred]) return model TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"] tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR) metaparams = {"hidden_units": [128, 64]} aip.log_params(metaparams) model = create_binary_classifier( input_layers, tft_output, metaparams, numeric_features=metadata["numeric_features"], categorical_features=metadata["categorical_features"], embedding_features=metadata["embedding_features"], ) model.summary() Explanation: Create the binary classifier custom model Next, you create your binary classifier custom tabular model. 
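Before moving on, it can be worth sanity-checking the vocabulary sizes that determine the one-hot widths and embedding dimensions used above. This short check is illustrative only and reuses the tft_output and metadata objects already loaded.

```python
from math import sqrt

# Illustrative check of the vocabulary sizes driving the layer dimensions above.
for name in metadata["embedding_features"]:
    vocab_size = tft_output.vocabulary_size_by_name(name)
    print(f"{name}: vocab_size={vocab_size}, embedding_dim={int(sqrt(vocab_size))}")

for name in metadata["categorical_features"]:
    vocab_size = tft_output.vocabulary_size_by_name(name)
    print(f"{name}: vocab_size={vocab_size} (one-hot width)")
```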
End of explanation tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True) Explanation: Visualize the model architecture Next, visualize the architecture of the custom model. End of explanation MODEL_DIR = f"{BUCKET_NAME}/base_model" model.save(MODEL_DIR) Explanation: Save model artifacts Next, save the model artifacts to your Cloud Storage bucket End of explanation vertex_custom_model = aip.Model.upload( display_name="chicago_" + TIMESTAMP, artifact_uri=MODEL_DIR, serving_container_image_uri=DEPLOY_IMAGE, labels={"base_model": "1"}, sync=True, ) Explanation: Upload the local model to a Vertex AI Model resource Next, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource. End of explanation # Make folder for Python training script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n 'tensorflow==2.5',\n\n 'tensorflow_data_validation==1.2',\n\n ],\n\n packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! touch custom/trainer/__init__.py Explanation: Construct the training package Package layout Before you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout. PKG-INFO README.md setup.cfg setup.py trainer __init__.py task.py other Python scripts The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image. The file trainer/task.py is the Python script for executing the custom training job. End of explanation transform_feature_spec = tft_output.transformed_feature_spec() print(transform_feature_spec) Explanation: Get feature specification for the preprocessed data Next, create the feature specification for the preprocessed data. End of explanation %%writefile custom/trainer/data.py import tensorflow as tf def _gzip_reader_fn(filenames): Small utility returning a record reader that can read gzip'ed files. return tf.data.TFRecordDataset(filenames, compression_type="GZIP") def get_dataset(file_pattern, feature_spec, label_column, batch_size=200): Generates features and label for tuning/training. Args: file_pattern: input tfrecord file pattern. feature_spec: a dictionary of feature specifications. batch_size: representing the number of consecutive elements of returned dataset to combine in a single batch Returns: A dataset that contains (features, indices) tuple where features is a dictionary of Tensors, and indices is a single Tensor of label indices. 
dataset = tf.data.experimental.make_batched_features_dataset( file_pattern=file_pattern, batch_size=batch_size, features=feature_spec, label_key=label_column, reader=_gzip_reader_fn, num_epochs=1, drop_final_batch=True, ) return dataset from custom.trainer import data TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"] LABEL_COLUMN = metadata["label_column"] train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz" val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz" test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz" for input_features, target in data.get_dataset( train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3 ).take(1): for key in input_features: print( f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}" ) print(f"target: {target.numpy().tolist()}") Explanation: Load the transformed data into a tf.data.Dataset Next, you load the gzip TFRecords on Cloud Storage storage into a tf.data.Dataset generator. These functions are re-used when training the custom model using Vertex Training, so you save them to the python training package. End of explanation model(input_features) Explanation: Test the model architecture with transformed input Next, test the model architecture with a sample of the transformed training input. Note: Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted results ~0.5. End of explanation %%writefile custom/trainer/train.py from trainer import data import tensorflow as tf import logging from hypertune import HyperTune def compile(model, hyperparams): ''' Compile the model ''' optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"]) loss = tf.keras.losses.BinaryCrossentropy(from_logits=False) metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")] model.compile(optimizer=optimizer,loss=loss, metrics=metrics) return model def warmup( model, hyperparams, train_data_dir, label_column, transformed_feature_spec ): ''' Warmup the initialized model weights ''' train_dataset = data.get_dataset( train_data_dir, transformed_feature_spec, label_column, batch_size=hyperparams["batch_size"], ) lr_inc = (hyperparams['end_learning_rate'] - hyperparams['start_learning_rate']) / hyperparams['num_epochs'] def scheduler(epoch, lr): if epoch == 0: return hyperparams['start_learning_rate'] return lr + lr_inc callbacks = [tf.keras.callbacks.LearningRateScheduler(scheduler)] logging.info("Model warmup started...") history = model.fit( train_dataset, epochs=hyperparams["num_epochs"], steps_per_epoch=hyperparams["steps"], callbacks=callbacks ) logging.info("Model warmup completed.") return history def train( model, hyperparams, train_data_dir, val_data_dir, label_column, transformed_feature_spec, log_dir, tuning=False ): ''' Train the model ''' train_dataset = data.get_dataset( train_data_dir, transformed_feature_spec, label_column, batch_size=hyperparams["batch_size"], ) val_dataset = data.get_dataset( val_data_dir, transformed_feature_spec, label_column, batch_size=hyperparams["batch_size"], ) early_stop = tf.keras.callbacks.EarlyStopping( monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True ) callbacks = [early_stop] if log_dir: tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir) callbacks = callbacks.append(tensorboard) if tuning: # Instantiate the HyperTune reporting object hpt = HyperTune() # Reporting callback 
class HPTCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='val_loss', metric_value=logs['val_loss'], global_step=epoch ) if not callbacks: callbacks = [] callbacks.append(HPTCallback()) logging.info("Model training started...") history = model.fit( train_dataset, epochs=hyperparams["num_epochs"], validation_data=val_dataset, callbacks=callbacks ) logging.info("Model training completed.") return history def evaluate( model, hyperparams, test_data_dir, label_column, transformed_feature_spec ): logging.info("Model evaluation started...") test_dataset = data.get_dataset( test_data_dir, transformed_feature_spec, label_column, hyperparams["batch_size"], ) evaluation_metrics = model.evaluate(test_dataset) logging.info("Model evaluation completed.") return evaluation_metrics Explanation: Develop and test the training scripts When experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training script Next, you write the Python script for compiling and training the model. End of explanation os.chdir("custom") import logging from trainer import train TENSORBOARD_LOG_DIR = "./logs" logging.getLogger().setLevel(logging.INFO) hyperparams = {} hyperparams["learning_rate"] = 0.01 aip.log_params(hyperparams) train.compile(model, hyperparams) warmupparams = {} warmupparams["start_learning_rate"] = 0.0001 warmupparams["end_learning_rate"] = 0.01 warmupparams["num_epochs"] = 4 warmupparams["batch_size"] = 64 warmupparams["steps"] = 50 aip.log_params(warmupparams) train.warmup( model, warmupparams, train_data_file_pattern, LABEL_COLUMN, transform_feature_spec ) trainparams = {} trainparams["num_epochs"] = 5 trainparams["batch_size"] = 64 trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5} aip.log_params(trainparams) train.train( model, trainparams, train_data_file_pattern, val_data_file_pattern, LABEL_COLUMN, transform_feature_spec, TENSORBOARD_LOG_DIR, ) os.chdir("..") Explanation: Train the model locally Next, test the training package locally, by training with just a few epochs: num_epochs: The number of epochs to pass to the training package. compile(): Compile the model for training. warmup(): Warmup the initialized model weights. train(): Train the model. End of explanation os.chdir("custom") from trainer import train evalparams = {} evalparams["batch_size"] = 64 metrics = {} metrics["loss"], metrics["acc"] = train.evaluate( model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec ) print("ACC", metrics["acc"], "LOSS", metrics["loss"]) aip.log_metrics(metrics) os.chdir("..") Explanation: Evaluate the model locally Next, test the evaluation portion of the training package: evaluate(): Evaluate the model. End of explanation %%writefile custom/trainer/model.py import google.cloud.aiplatform as aip def get(model_id): model = aip.Model(model_id) return model Explanation: Retrieve model from Vertex AI Next, create the Python script to retrieve your experimental model from Vertex AI. 
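As a quick local check (illustrative, not required by the tutorial), the module can be exercised directly to confirm that the uploaded Model resource resolves back to loadable Keras artifacts, mirroring what task.py does later. This assumes the notebook's working directory is the project root, as in the earlier cells.

```python
# Illustrative local check of trainer/model.py: resolve the Vertex AI Model
# resource and reload the saved Keras architecture from its artifact URI.
from custom.trainer import model as model_

vertex_model = model_.get(vertex_custom_model.resource_name)
artifact_uri = vertex_model.gca_resource.artifact_uri
print("Model artifacts at:", artifact_uri)

reloaded = tf.keras.models.load_model(artifact_uri)
reloaded.summary()
```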
End of explanation %%writefile custom/trainer/task.py import os import argparse import logging import json import tensorflow as tf import tensorflow_transform as tft from tensorflow.python.client import device_lib import google.cloud.aiplatform as aip from trainer import data from trainer import model as model_ from trainer import train try: from trainer import serving except: pass parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.') parser.add_argument('--model-id', dest='model_id', default=None, type=str, help='Vertex Model ID.') parser.add_argument('--dataset-id', dest='dataset_id', default=None, type=str, help='Vertex Dataset ID.') parser.add_argument('--lr', dest='lr', default=0.001, type=float, help='Learning rate.') parser.add_argument('--start_lr', dest='start_lr', default=0.0001, type=float, help='Starting learning rate.') parser.add_argument('--epochs', dest='epochs', default=20, type=int, help='Number of epochs.') parser.add_argument('--steps', dest='steps', default=200, type=int, help='Number of steps per epoch.') parser.add_argument('--batch_size', dest='batch_size', default=16, type=int, help='Batch size.') parser.add_argument('--distribute', dest='distribute', type=str, default='single', help='distributed training strategy') parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir', default=os.getenv('AIP_TENSORBOARD_LOG_DIR'), type=str, help='Output file for tensorboard logs') parser.add_argument('--experiment', dest='experiment', default=None, type=str, help='Name of experiment') parser.add_argument('--project', dest='project', default=None, type=str, help='Name of project') parser.add_argument('--run', dest='run', default=None, type=str, help='Name of run in experiment') parser.add_argument('--evaluate', dest='evaluate', default=False, type=bool, help='Whether to perform evaluation') parser.add_argument('--serving', dest='serving', default=False, type=bool, help='Whether to attach the serving function') parser.add_argument('--tuning', dest='tuning', default=False, type=bool, help='Whether to perform hyperparameter tuning') parser.add_argument('--warmup', dest='warmup', default=False, type=bool, help='Whether to perform warmup weight initialization') args = parser.parse_args() logging.getLogger().setLevel(logging.INFO) logging.info('DEVICES' + str(device_lib.list_local_devices())) # Single Machine, single compute device if args.distribute == 'single': if tf.test.is_gpu_available(): strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") else: strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0") logging.info("Single device training") # Single Machine, multiple compute device elif args.distribute == 'mirrored': strategy = tf.distribute.MirroredStrategy() logging.info("Mirrored Strategy distributed training") # Multi Machine, multiple compute device elif args.distribute == 'multiworker': strategy = tf.distribute.MultiWorkerMirroredStrategy() logging.info("Multi-worker Strategy distributed training") logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found'))) logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync)) # Initialize the run for this experiment if args.experiment: logging.info("Initialize experiment: {}".format(args.experiment)) aip.init(experiment=args.experiment, project=args.project) aip.start_run(args.run) metadata = {} def get_data(): ''' Get the preprocessed training data ''' global 
train_data_file_pattern, val_data_file_pattern, test_data_file_pattern global label_column, transform_feature_spec, metadata dataset = aip.TabularDataset(args.dataset_id) METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl" with tf.io.gfile.GFile(METADATA, "r") as f: metadata = json.load(f) TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix'] label_column = metadata['label_column'] train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz' val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz' test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz' TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir'] tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR) transform_feature_spec = tft_output.transformed_feature_spec() def get_model(): ''' Get the untrained model architecture ''' global model_artifacts vertex_model = model_.get(args.model_id) model_artifacts = vertex_model.gca_resource.artifact_uri model = tf.keras.models.load_model(model_artifacts) # Compile the model hyperparams = {} hyperparams["learning_rate"] = args.lr if args.experiment: aip.log_params(hyperparams) metadata.update(hyperparams) with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f: f.write(json.dumps(metadata)) train.compile(model, hyperparams) return model def warmup_model(model): ''' Warmup the initialized model weights ''' warmupparams = {} warmupparams["num_epochs"] = args.epochs warmupparams["batch_size"] = args.batch_size warmupparams["steps"] = args.steps warmupparams["start_learning_rate"] = args.start_lr warmupparams["end_learning_rate"] = args.lr train.warmup(model, warmupparams, train_data_file_pattern, label_column, transform_feature_spec) return model def train_model(model): ''' Train the model ''' trainparams = {} trainparams["num_epochs"] = args.epochs trainparams["batch_size"] = args.batch_size trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5} if args.experiment: aip.log_params(trainparams) metadata.update(trainparams) with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f: f.write(json.dumps(metadata)) train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning) return model def evaluate_model(model): ''' Evaluate the model ''' evalparams = {} evalparams["batch_size"] = args.batch_size metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec) metadata.update({'metrics': metrics}) with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f: f.write(json.dumps(metadata)) get_data() with strategy.scope(): model = get_model() if args.warmup: model = warmup_model(model) else: model = train_model(model) if args.evaluate: evaluate_model(model) if args.serving: logging.info('Save serving model to: ' + args.model_dir) serving.construct_serving_model( model=model, serving_model_dir=args.model_dir, metadata=metadata ) elif args.warmup: logging.info('Save warmed up model to: ' + model_artifacts) model.save(model_artifacts) else: logging.info('Save trained model to: ' + args.model_dir) model.save(args.model_dir) Explanation: Create the task script for the Python training package Next, you create the task.py script for driving the training package. Some noteable steps include: Command-line arguments: model-id: The resource ID of the Model resource you built during experimenting. 
This is the untrained model architecture. dataset-id: The resource ID of the Dataset resource to use for training. experiment: The name of the experiment. run: The name of the run within this experiment. tensorboard-logdir: The logging directory for Vertex AI Tensorboard. get_data(): Loads the Dataset resource into memory. Obtains the user metadata from the Dataset resource. From the metadata, obtain location of transformed data, transformation function and name of label column get_model(): Loads the Model resource into memory. Obtains location of model artifacts of the model architecture. Loads the model architecture. Compiles the model. warmup_model(): Warms up the initialized model weights train_model(): Train the model. evaluate_model(): Evaluates the model. Saves evaluation metrics to Cloud Storage bucket. End of explanation DATASET_ID = dataset.resource_name MODEL_ID = vertex_custom_model.resource_name !cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True Explanation: Test training package locally Next, test your completed training package locally with just a few epochs. End of explanation MODEL_DIR = f"{BUCKET_NAME}/base_model" !cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --project={PROJECT_ID} --epochs=5 --steps=300 --batch_size=16 --lr=0.01 --start_lr=0.0001 --model-dir={MODEL_DIR} --warmup=True Explanation: Warmup training Now that you have tested the training scripts, you perform warmup training on the base model. Warmup training is used to stabilize the weight initialization. By doing so, each subsequent training and tuning of the model architecture will start with the same stabilized weight initialization. End of explanation DISPLAY_NAME = "chicago_" + TIMESTAMP job = aip.CustomPythonPackageTrainingJob( display_name=DISPLAY_NAME, python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz", python_module_name="trainer.task", container_uri=TRAIN_IMAGE, model_serving_container_image_uri=DEPLOY_IMAGE, project=PROJECT_ID, ) ! rm -rf custom/logs ! rm -rf custom/trainer/__pycache__ Explanation: Mirrored Strategy When training on a single VM, one can either train was a single compute device or with multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and type of compute devices: CPU, GPU. Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy' for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script: Set the tf.distribute.MirrorStrategy Compile the model within the scope of tf.distribute.MirrorStrategy. Note: Tells MirroredStrategy which variables to mirror across your compute devices. Increase the batch size for each compute device to num_devices * batch size. During transitions, the distribution of batches will be synchronized as well as the updates to the model parameters. Create and run custom training job To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training job A custom training job is created with the CustomTrainingJob class, with the following parameters: display_name: The human readable name for the custom training job. container_uri: The training container image. 
python_package_gcs_uri: The location of the Python training package as a tarball. python_module_name: The relative path to the training script in the Python package. model_serving_container_uri: The container image for deploying the model. Note: There is no requirements parameter. You specify any requirements in the setup.py script in your Python package. End of explanation ! rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz Explanation: Store training script on your Cloud Storage bucket Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket. End of explanation MODEL_DIR = BUCKET_NAME + "/testing" CMDARGS = [ "--epochs=5", "--batch_size=16", "--distribute=mirrored", "--experiment=chicago", "--run=test", "--project=" + PROJECT_ID, "--model-id=" + MODEL_ID, "--dataset-id=" + DATASET_ID, ] model = job.run( model_display_name="chicago_" + TIMESTAMP, args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, accelerator_type=TRAIN_GPU.name, accelerator_count=TRAIN_NGPU, base_output_dir=MODEL_DIR, service_account=SERVICE_ACCOUNT, tensorboard=tensorboard_resource_name, sync=True, ) Explanation: Run the custom Python package training job Next, you run the custom job to start the training job by invoking the method run(). The parameters are the same as when running a CustomTrainingJob. Note: The parameter service_account is set so that the initializing experiment step aip.init(experiment="...") has necessarily permission to access the Vertex AI Metadata Store. End of explanation job.delete() Explanation: Delete a custom training job After a training job is completed, you can delete the training job with the method delete(). Prior to completion, a training job can be canceled with the method cancel(). End of explanation model.delete() Explanation: Delete the model The method 'delete()' will delete the model. End of explanation if TRAIN_GPU: machine_spec = { "machine_type": TRAIN_COMPUTE, "accelerator_type": TRAIN_GPU, "accelerator_count": TRAIN_NGPU, } else: machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0} Explanation: Hyperparameter tuning Next, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for both hyperparameter tuning, as well as local testing and full cloud training: Command-Line: tuning: indicates to use the HyperTune service as a callback during training. train(): If tuning is set, creates and adds a callback to HyperTune service. Prepare your machine specification Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - machine_type: The type of GCP instance to provision -- e.g., n1-standard-8. - accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU. - accelerator_count: The number of accelerators. End of explanation DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard] DISK_SIZE = 200 # GB disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE} Explanation: Prepare your disk specification (optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. boot_disk_type: Either SSD or Standard. 
SSD is faster, and Standard is less expensive. Defaults to SSD. boot_disk_size_gb: Size of disk in GB. End of explanation CMDARGS = [ "--epochs=5", "--distribute=mirrored", # "--experiment=chicago", # "--run=tune", # "--project=" + PROJECT_ID, "--model-id=" + MODEL_ID, "--dataset-id=" + DATASET_ID, "--tuning=True", ] worker_pool_spec = [ { "replica_count": 1, "machine_spec": machine_spec, "disk_spec": disk_spec, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"], "python_module": "trainer.task", "args": CMDARGS, }, } ] Explanation: Define worker pool specification for hyperparameter tuning job Next, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments. End of explanation job = aip.CustomJob( display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec ) Explanation: Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters: display_name: A human readable name for the custom job. worker_pool_specs: The specification for the corresponding VM instances. End of explanation from google.cloud.aiplatform import hyperparameter_tuning as hpt hpt_job = aip.HyperparameterTuningJob( display_name="chicago_" + TIMESTAMP, custom_job=job, metric_spec={ "val_loss": "minimize", }, parameter_spec={ "lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"), "batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"), }, search_algorithm=None, max_trial_count=8, parallel_trial_count=1, ) Explanation: Create a hyperparameter tuning job Use the class HyperparameterTuningJob to create a hyperparameter tuning job, with the following parameters: display_name: A human readable name for the custom job. custom_job: The worker pool spec from this custom job applies to the CustomJobs created in all the trials. metrics_spec: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric('minimize' or 'maximize'). parameter_spec: The parameters to optimize. The dictionary key is the metric_id, which is passed into your training job as a command line key word argument, and the dictionary value is the parameter specification of the metric. search_algorithm: The search algorithm to use: grid, random and None. If None is specified, the Vizier service (Bayesian) is used. max_trial_count: The maximum number of trials to perform. End of explanation hpt_job.run() Explanation: Run the hyperparameter tuning job Use the run() method to execute the hyperparameter tuning job. 
End of explanation best = (None, None, None, 0.0) for trial in hpt_job.trials: # Keep track of the best outcome if float(trial.final_measurement.metrics[0].value) > best[3]: try: best = ( trial.id, float(trial.parameters[0].value), float(trial.parameters[1].value), float(trial.final_measurement.metrics[0].value), ) except: best = ( trial.id, float(trial.parameters[0].value), None, float(trial.final_measurement.metrics[0].value), ) print(best) Explanation: Best trial Now look at which trial was the best: End of explanation hpt_job.delete() Explanation: Delete the hyperparameter tuning job The method 'delete()' will delete the hyperparameter tuning job. End of explanation LR = best[2] BATCH_SIZE = int(best[1]) Explanation: Save the best hyperparameter values End of explanation DISPLAY_NAME = "chicago_" + TIMESTAMP job = aip.CustomPythonPackageTrainingJob( display_name=DISPLAY_NAME, python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz", python_module_name="trainer.task", container_uri=TRAIN_IMAGE, model_serving_container_image_uri=DEPLOY_IMAGE, project=PROJECT_ID, ) Explanation: Create and run custom training job To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training job A custom training job is created with the CustomTrainingJob class, with the following parameters: display_name: The human readable name for the custom training job. container_uri: The training container image. python_package_gcs_uri: The location of the Python training package as a tarball. python_module_name: The relative path to the training script in the Python package. model_serving_container_uri: The container image for deploying the model. Note: There is no requirements parameter. You specify any requirements in the setup.py script in your Python package. End of explanation MODEL_DIR = BUCKET_NAME + "/trained" FULL_EPOCHS = 100 CMDARGS = [ f"--epochs={FULL_EPOCHS}", f"--lr={LR}", f"--batch_size={BATCH_SIZE}", "--distribute=mirrored", "--experiment=chicago", "--run=full", "--project=" + PROJECT_ID, "--model-id=" + MODEL_ID, "--dataset-id=" + DATASET_ID, "--evaluate=True", ] model = job.run( model_display_name="chicago_" + TIMESTAMP, args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, accelerator_type=TRAIN_GPU.name, accelerator_count=TRAIN_NGPU, base_output_dir=MODEL_DIR, service_account=SERVICE_ACCOUNT, tensorboard=tensorboard_resource_name, sync=True, ) Explanation: Run the custom Python package training job Next, you run the custom job to start the training job by invoking the method run(). The parameters are the same as when running a CustomTrainingJob. Note: The parameter service_account is set so that the initializing experiment step aip.init(experiment="...") has necessarily permission to access the Vertex AI Metadata Store. End of explanation job.delete() Explanation: Delete a custom training job After a training job is completed, you can delete the training job with the method delete(). Prior to completion, a training job can be canceled with the method cancel(). End of explanation EXPERIMENT_NAME = "chicago" experiment_df = aip.get_experiment_df() experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME] experiment_df.T Explanation: Get the experiment results Next, you use the experiment name as a parameter to the method get_experiment_df() to get the results of the experiment as a pandas dataframe. End of explanation METRICS = MODEL_DIR + "/model/metrics.txt" ! 
gsutil cat $METRICS Explanation: Review the custom model evaluation results Next, you review the evaluation metrics builtin into the training package. End of explanation tensorboard.delete() vertex_custom_model = model model = tf.keras.models.load_model(MODEL_DIR + "/model") Explanation: Delete the TensorBoard instance Next, delete the TensorBoard instance. End of explanation %%writefile custom/trainer/serving.py import tensorflow as tf import tensorflow_data_validation as tfdv import tensorflow_transform as tft import logging def _get_serve_features_fn(model, tft_output): Returns a function that accept a dictionary of features and applies TFT. model.tft_layer = tft_output.transform_features_layer() @tf.function def serve_features_fn(raw_features): Returns the output to be used in the serving signature. transformed_features = model.tft_layer(raw_features) probabilities = model(transformed_features) return {"scores": probabilities} return serve_features_fn def _get_serve_tf_examples_fn(model, tft_output, feature_spec): Returns a function that parses a serialized tf.Example and applies TFT. model.tft_layer = tft_output.transform_features_layer() @tf.function def serve_tf_examples_fn(serialized_tf_examples): Returns the output to be used in the serving signature. for key in list(feature_spec.keys()): if key not in features: feature_spec.pop(key) parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec) transformed_features = model.tft_layer(parsed_features) probabilities = model(transformed_features) return {"scores": probabilities} return serve_tf_examples_fn def construct_serving_model( model, serving_model_dir, metadata ): global features schema_location = metadata['schema'] features = metadata['numeric_features'] + metadata['categorical_features'] + metadata['embedding_features'] print("FEATURES", features) tft_output_dir = metadata["transform_artifacts_dir"] schema = tfdv.load_schema_text(schema_location) feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec tft_output = tft.TFTransformOutput(tft_output_dir) # Drop features that were not used in training features_input_signature = { feature_name: tf.TensorSpec( shape=(None, 1), dtype=spec.dtype, name=feature_name ) for feature_name, spec in feature_spec.items() if feature_name in features } signatures = { "serving_default": _get_serve_features_fn( model, tft_output ).get_concrete_function(features_input_signature), "serving_tf_example": _get_serve_tf_examples_fn( model, tft_output, feature_spec ).get_concrete_function( tf.TensorSpec(shape=[None], dtype=tf.string, name="examples") ), } logging.info("Model saving started...") model.save(serving_model_dir, signatures=signatures) logging.info("Model saving completed.") Explanation: Add a serving function Next, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model. End of explanation os.chdir("custom") from trainer import serving SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model" serving.construct_serving_model( model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata ) serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR) os.chdir("..") Explanation: Construct the serving model Now construct the serving model and store the serving model to your Cloud Storage bucket. 
End of explanation EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"] file_names = tf.data.TFRecordDataset.list_files( EXPORTED_TFREC_PREFIX + "/data-*.tfrecord" ) for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1): predictions = serving_model.signatures["serving_tf_example"](batch) for key in predictions: print(f"{key}: {predictions[key]}") Explanation: Test the serving model locally with tf.Example data Next, test the layer interface in the serving model for tf.Example data. End of explanation schema = tfdv.load_schema_text(metadata["schema"]) feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec instance = { "dropoff_grid": "POINT(-87.6 41.9)", "euclidean": 2064.2696, "loc_cross": "", "payment_type": "Credit Card", "pickup_grid": "POINT(-87.6 41.9)", "trip_miles": 1.37, "trip_day": 12, "trip_hour": 6, "trip_month": 2, "trip_day_of_week": 4, "trip_seconds": 555, } for feature_name in instance: dtype = feature_spec[feature_name].dtype instance[feature_name] = tf.constant([[instance[feature_name]]], dtype) predictions = serving_model.signatures["serving_default"](**instance) for key in predictions: print(f"{key}: {predictions[key].numpy()}") Explanation: Test the serving model locally with JSONL data Next, test the layer interface in the serving model for JSONL data. End of explanation vertex_serving_model = aip.Model.upload( display_name="chicago_" + TIMESTAMP, artifact_uri=SERVING_MODEL_DIR, serving_container_image_uri=DEPLOY_IMAGE, labels={"user_metadata": BUCKET_NAME[5:]}, sync=True, ) Explanation: Upload the serving model to a Vertex AI Model resource Next, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource. End of explanation SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval" EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"] MIN_NODES = 1 MAX_NODES = 1 job = vertex_serving_model.batch_predict( instances_format="jsonl", predictions_format="jsonl", job_display_name="chicago_" + TIMESTAMP, gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl", gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR, model_parameters=None, machine_type=DEPLOY_COMPUTE, accelerator_type=DEPLOY_GPU, accelerator_count=DEPLOY_NGPU, starting_replica_count=MIN_NODES, max_replica_count=MAX_NODES, sync=True, ) Explanation: Evaluate the serving model Next, evaluate the serving model with the evaluation (test) slices. For apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics maybe custom, we recommend: Send each evaluation slice as a Vertex AI Batch Prediction Job. Use a custom evaluation script to evaluate the results from the batch prediction job. End of explanation batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR batch_dir = batch_dir[0] outputs = ! gsutil ls $batch_dir errors = outputs[0] results = outputs[1] print("errors") ! gsutil cat $errors print("results") ! gsutil cat $results | head -n10 model = async_model Explanation: Perform custom evaluation metrics After the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, we just display the results of the batch prediction. End of explanation model.wait() Explanation: Wait for completion of AutoML training job Next, wait for the AutoML training job to complete. 
Alternatively, one can set the parameter sync to True in the run() method to block until the AutoML training job is completed. End of explanation model_evaluations = model.list_model_evaluations() for model_evaluation in model_evaluations: print(model_evaluation.to_dict()) Explanation: Review model evaluation scores After your model training has finished, you can review the evaluation scores for it using the list_model_evaluations() method. This method will return an iterator for each evaluation slice. End of explanation import json metadata = {} metadata["train_eval_metrics"] = METRICS metadata["custom_eval_metrics"] = "[you-fill-this-in]" with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f: json.dump(metadata, f) !gsutil cat $BUCKET_NAME/metadata.jsonl Explanation: Compare metric results with AutoML baseline Finally, you make a decision if the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the result and make a determination if the custom model is better. Store evaluation results for custom model Next, you use the labels field to store user metadata containing the custom metrics information. End of explanation delete_all = False if delete_all: # Delete the dataset using the Vertex dataset object try: if "dataset" in globals(): dataset.delete() except Exception as e: print(e) # Delete the model using the Vertex model object try: if "model" in globals(): model.delete() except Exception as e: print(e) if "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint AutoML Training Job Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation
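The hyperparameter tuning path described above depends on trainer.train reporting val_loss back to Vertex AI when --tuning=True, but that module is not reproduced in this excerpt. Below is a minimal sketch of what such a reporting callback could look like, assuming the cloudml-hypertune package is available in the training container; the class name HyperTuneCallback and the choice of val_loss as the reported metric are illustrative assumptions, not the notebook's or the package's actual code.

# Hypothetical sketch of a HyperTune reporting callback that a train() function
# could add when the --tuning flag is set; requires `pip install cloudml-hypertune`.
import hypertune
import tensorflow as tf

class HyperTuneCallback(tf.keras.callbacks.Callback):
    # Reports the validation metric after every epoch so the Vertex AI
    # HyperparameterTuningJob (metric_spec {"val_loss": "minimize"}) can rank trials.
    def __init__(self, metric="val_loss"):
        super().__init__()
        self.metric = metric
        self.hpt = hypertune.HyperTune()

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if self.metric in logs:
            self.hpt.report_hyperparameter_tuning_metric(
                hyperparameter_metric_tag=self.metric,
                metric_value=float(logs[self.metric]),
                global_step=epoch,
            )

# Possible use inside train.train() when tuning is enabled (illustrative):
# callbacks = [HyperTuneCallback("val_loss")] if tuning else []
# model.fit(train_ds, validation_data=val_ds, epochs=num_epochs, callbacks=callbacks)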
2,450
Given the following text description, write Python code to implement the functionality described below step by step Description: Reproducing the COHERENT results - and New Physics constraints Code for reproducing the CEvNS signal observed by COHERENT - see arXiv Step1: Import the CEvNS module (for calculating the signal spectrum and loading the neutrino fluxes) Step2: Neutrino Flux @ SNS Let's load the neutrino flux. Note that here we're only plotting the continuum. There is also a population of monochromatic (29.65 MeV) muon neutrinos which we add in separately in the code (because the flux is a delta-function, it's hard to model here). Step3: COHERENT efficiency function Load in the efficiency (as a function of photoelectrons, PE). Set to zero below 5 PE. Step4: COHERENT event rate Calculate number of CEvNS signal events at COHERENT (in bins of 2 PE) Step5: Comparing with the COHERENT results First, let's load in the observed data and calculated spectrum (digitized from arXiv Step6: Now plot the results Step7: Fit to signal strength Very simple fit to the number of CEvNS signal events, using only a 1-bin likelihood. We start by defining the $\chi^2$, as given in arXiv Step8: To speed things up later (so we don't have to do the minimization every time), we'll tabulate and interpolate the chi-squared as a function of the number of signal events. This works because we're using a simple chi-squared which depends only on the number of signal events Step9: NSI constraints Calculate constraints on NSI parameters. Here, we're just assuming that the flavor-conserving e-e NSI couplings are non-zero, so we have to calculate the contribution to the rate from only the electron neutrinos and then see how that changes Step10: Flavour-conserving NSI Now, let's calculate the correction to the CEvNS rate from flavor-conserving NSI Step11: Calculate simplified (single bin) chi-squared (see chi-squared expression around p.32 in COHERENT paper) Step12: Calculate the (minimum) chi-squared on a grid and save to file Step13: Plot the 90% allowed regions Step14: Flavour-changing NSI ($e\mu$) Now the correction to the CEvNS rate from flavor-changing NSI ($e\mu$-type) Step15: Calculate delta-chisquared over a grid and save to file Step16: Flavour-changing NSI ($e\tau$) Finally, allowed regions for Flavour-changing NSI ($e\tau$-type) Step17: Limits on the neutrino magnetic moment Now let's calculate a limit on the neutrino magnetic moment (again, from a crude single-bin $\chi^2$). Step18: Scan over a grid Step19: Do some plotting Step20: Limits on new vector mediators First, let's calculate the total number of signal events at a given mediator mass and coupling... It takes a while to recalculate the number of signal events for each mediator mass and coupling, so we'll do some rescaling and interpolation trickery Step21: Now we scan over a grid in $g^2$ and $m_V$ to calculate the $\chi^2$ at each point Step22: Limits on a new scalar mediator Finally, let's look at limits on the couplings of a new scalar mediator $\phi$. We start by calculating the contribution to the number of signal events for a given mediator mass (this can be rescaled by the coupling $g_\phi^4$ later) Step23: Now grid-scan to get the $\Delta \chi^2$
Python Code: from __future__ import print_function %matplotlib inline import numpy as np import matplotlib #matplotlib.use('Agg') import matplotlib.pyplot as pl from scipy.integrate import quad from scipy.interpolate import interp1d, UnivariateSpline,InterpolatedUnivariateSpline from scipy.optimize import minimize from tqdm import tqdm #Change default font size so you don't need a magnifying glass matplotlib.rc('font', **{'size' : 16}) Explanation: Reproducing the COHERENT results - and New Physics constraints Code for reproducing the CEvNS signal observed by COHERENT - see arXiv:1708.01294. Note that the COHERENT-2017 data are now publicly available (arXiv:1804.09459) - this notebook uses digitized results from the original 2017 paper. Note that we neglect the axial charge of the nucleus, and thus the contribution from strange quarks. We also use a slightly different parametrisation of the Form Factor, compared to the COHERENT collaboration. End of explanation import CEvNS #help(CEvNS.xsec_CEvNS) Explanation: Import the CEvNS module (for calculating the signal spectrum and loading the neutrino fluxes) End of explanation #Initialise neutrino_flux interpolation function CEvNS.loadNeutrinoFlux("SNS") #Plot neutrino flux E_nu = np.logspace(0, np.log10(300),1000) pl.figure() pl.semilogy(E_nu, CEvNS.neutrino_flux_tot(E_nu)) pl.title(r"Neutrino flux at SNS", fontsize=12) pl.xlabel(r"Neutrino energy, $E_\nu$ [MeV]") pl.ylabel(r"$\Phi_\nu$ [cm$^{-2}$ s$^{-1}$ MeV$^{-1}$]") pl.show() Explanation: Neutrino Flux @ SNS Let's load the neutrino flux. Note that here we're only plotting the continuum. There is also a population of monochromatic (29.65 MeV) muon neutrinos which we add in separately in the code (because the flux is a delta-function, it's hard to model here). End of explanation COHERENT_PE, COHERENT_eff = np.loadtxt("DataFiles/COHERENT_eff.txt", unpack=True) effinterp = interp1d(COHERENT_PE, COHERENT_eff, bounds_error=False, fill_value=0.0) def efficiency_single(x): if (x > 4.9): return effinterp(x) else: return 1e-10 efficiency = np.vectorize(efficiency_single) PEvals = np.linspace(0, 50, 100) pl.figure() pl.plot(PEvals, efficiency(PEvals)) pl.xlabel("PE") pl.ylabel("Efficiency") pl.show() Explanation: COHERENT efficiency function Load in the efficiency (as a function of photoelectrons, PE). Set to zero below 5 PE. 
End of explanation #Nuclear properties for Cs and I A_Cs = 133.0 Z_Cs = 55.0 A_I = 127.0 Z_I = 53.0 #Mass fractions f_Cs = A_Cs/(A_Cs + A_I) f_I = A_I/(A_Cs + A_I) mass = 14.6 #target mass in kg time = 308.1 #exposure time in days PEperkeV = 1.17 #Number of PE per keV #Get the differential rate function from the CEvNS module #Note that this function allows for an extra vector mediator, #but the default coupling is zero, so we'll forget about it diffRate_CEvNS = CEvNS.differentialRate_CEvNS #Differential rates (times efficiency) for the two target nuclei, per PE dRdPE_Cs = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_Cs*diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs) dRdPE_I = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_I*diffRate_CEvNS(x/PEperkeV, A_I, Z_I) #Calculate number of signal events in each bin in the Standard Model (SM) PE_bins = np.linspace(0, 50, 26) N_SM_Cs = np.zeros(25) N_SM_I = np.zeros(25) N_SM_tot = np.zeros(25) for i in tqdm(range(25)): N_SM_Cs[i] = quad(dRdPE_Cs, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0] N_SM_I[i] = quad(dRdPE_I, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0] N_SM_tot[i] = N_SM_Cs[i] + N_SM_I[i] print("Total CEvNS events expected: ", np.sum(N_SM_tot)) Explanation: COHERENT event rate Calculate number of CEvNS signal events at COHERENT (in bins of 2 PE) End of explanation COHERENT_data = np.loadtxt("DataFiles/COHERENT_data.txt", usecols=(1,)) COHERENT_upper = np.loadtxt("DataFiles/COHERENT_upper.txt", usecols=(1,)) - COHERENT_data COHERENT_lower = COHERENT_data - np.loadtxt("DataFiles/COHERENT_lower.txt", usecols=(1,)) COHERENT_spect = np.loadtxt("DataFiles/COHERENT_spectrum.txt", usecols=(1,)) COHERENT_bins = np.arange(1,50,2) Explanation: Comparing with the COHERENT results First, let's load in the observed data and calculated spectrum (digitized from arXiv:1708.01294). End of explanation pl.figure(figsize=(10,6)) pl.step(PE_bins, np.append(N_SM_tot,0), 'g', linestyle="-", where = "post", label="CEvNS signal (this work)",linewidth=1.5) pl.step(PE_bins, np.append(COHERENT_spect,0), 'g', linestyle="--", where = "post", label="CEvNS signal (1708.01294)",linewidth=1.5) pl.axhline(0, linestyle='--', color = 'gray') pl.errorbar(COHERENT_bins, COHERENT_data, fmt='ko', \ yerr = [COHERENT_lower, COHERENT_upper], label="COHERENT data",\ capsize=0.0) pl.xlabel("Number of photoelectrons (PE)") pl.ylabel("Res. 
counts / 2 PE") pl.legend( fontsize=14) pl.xlim(0, 50) pl.ylim(-15, 35) pl.savefig("plots/COHERENT_data.pdf", bbox_inches="tight") pl.show() Explanation: Now plot the results: End of explanation def chisq_generic(N_sig, alpha, beta): #Beam-on backgrounds N_BG = 6.0 #Number of measured events N_meas = 142.0 #Statistical uncertainty sig_stat = np.sqrt(N_meas + 2*405 + N_BG) #Uncertainties unc = (alpha/0.28)**2 + (beta/0.25)**2 return ((N_meas - N_sig*(1.0+alpha) - N_BG*(1.0+beta))**2)/sig_stat**2 + unc #Calculate minimum chi-squared as a function of (alpha, beta) nuisance parameters def minchisq_Nsig(Nsig): minres = minimize(lambda x: chisq_generic(Nsig, x[0], x[1]), (0.0,0.0)) return minres.fun Nsiglist= np.linspace(0, 1000,1001) chi2list = [minchisq_Nsig(Ns) for Ns in Nsiglist] delta_chi2 = (chi2list - np.min(chi2list)) pl.figure(figsize=(6,6)) pl.plot(Nsiglist, delta_chi2, linewidth=2.0) pl.ylim(0, 25) pl.axvline(np.sum(N_SM_tot), linestyle='--', color='k') pl.text(172, 20, "SM prediction") pl.ylabel(r"$\Delta \chi^2$") pl.xlabel(r"CE$\nu$NS counts") pl.savefig("plots/COHERENT_likelihood.pdf", bbox_inches="tight") pl.show() Explanation: Fit to signal strength Very simple fit to the number of CEvNS signal events, using only a 1-bin likelihood. We start by defining the $\chi^2$, as given in arXiv:1708.01294. We use a generic form, so that we don't have to recalculate the number of signal events all the time... End of explanation deltachi2_Nsig = interp1d(Nsiglist, delta_chi2, bounds_error=False, fill_value=delta_chi2[-1]) Explanation: To speed things up later (so we don't have to do the minimization every time), we'll tabulate and interpolate the chi-squared as a function of the number of signal events. This works because we're using a simple chi-squared which depends only on the number of signal events: End of explanation #Differential rates (times efficiency) for the two target nuclei, per PE # For electron neutrinos ONLY dRdPE_Cs_e = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_Cs*diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs, nu_flavor="e") dRdPE_I_e = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_I*diffRate_CEvNS(x/PEperkeV, A_I, Z_I, nu_flavor="e") # For muon neutrinos ONLY dRdPE_Cs_mu = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_Cs*(diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs, nu_flavor="mu")+ diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs, nu_flavor="mub")) dRdPE_I_mu = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_I*(diffRate_CEvNS(x/PEperkeV, A_I, Z_I, nu_flavor="mu") + diffRate_CEvNS(x/PEperkeV, A_I, Z_I, nu_flavor="mub")) #Now calculate bin-by-bin signal from electron neutrinos bins_Cs_e = np.zeros(25) bins_I_e = np.zeros(25) for i in tqdm(range(25)): bins_Cs_e[i] = quad(dRdPE_Cs_e, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0] bins_I_e[i] = quad(dRdPE_I_e, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0] print("Number of CEvNS events due to nu_e: ", np.sum(bins_Cs_e + bins_I_e)) #Now calculate bin-by-bin signal from muon neutrinos bins_Cs_mu = np.zeros(25) bins_I_mu = np.zeros(25) for i in tqdm(range(25)): bins_Cs_mu[i] = quad(dRdPE_Cs_mu, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0] bins_I_mu[i] = quad(dRdPE_I_mu, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0] print("Number of CEvNS events due to nu_mu: ", np.sum(bins_Cs_mu + bins_I_mu)) Explanation: NSI constraints Calculate constraints on NSI parameters. 
Here, we're just assuming that the flavor-conserving e-e NSI couplings are non-zero, so we have to calculate the contribution to the rate from only the electron neutrinos and then see how that changes: End of explanation def NSI_corr(eps_uV, eps_dV, A, Z): SIN2THETAW = 0.2387 #Calculate standard weak nuclear charge (squared) Qsq = 4.0*((A - Z)*(-0.5) + Z*(0.5 - 2*SIN2THETAW))**2 #Calculate the modified nuclear charge from NSI Qsq_NSI = 4.0*((A - Z)*(-0.5 + eps_uV + 2.0*eps_dV) + Z*(0.5 - 2*SIN2THETAW + 2*eps_uV + eps_dV))**2 return Qsq_NSI/Qsq Explanation: Flavour-conserving NSI Now, let's calculate the correction to the CEvNS rate from flavor-conserving NSI: End of explanation def deltachisq_NSI_ee(eps_uV, eps_dV): #NB: bins_I and bins_Cs are calculated further up in the script (they are the SM signal prediction) #Signal events from Iodine (with NSI correction only applying to electron neutrino events) N_sig_I = (N_SM_I + (NSI_corr(eps_uV, eps_dV, A_I, Z_I) - 1.0)*bins_I_e) #Now signal events from Caesium N_sig_Cs = (N_SM_Cs + (NSI_corr(eps_uV, eps_dV, A_Cs, Z_Cs) - 1.0)*bins_Cs_e) #Number of signal events N_NSI = np.sum(N_sig_I + N_sig_Cs) return deltachi2_Nsig(N_NSI) Explanation: Calculate simplified (single bin) chi-squared (see chi-squared expression around p.32 in COHERENT paper): End of explanation Ngrid = 101 ulist = np.linspace(-1.0, 1.0, Ngrid) dlist = np.linspace(-1.0, 1.0, Ngrid) UL, DL = np.meshgrid(ulist, dlist) delta_chi2_grid_ee = 0.0*UL #Not very elegant loop for i in tqdm(range(Ngrid)): for j in range(Ngrid): delta_chi2_grid_ee[i,j] = deltachisq_NSI_ee(UL[i,j], DL[i,j]) #Find best-fit point ind_BF = np.argmin(delta_chi2_grid_ee) BF = [UL.flatten()[ind_BF], DL.flatten()[ind_BF]] print("Best fit point: ", BF) np.savetxt("results/COHERENT_NSI_deltachi2_ee.txt", delta_chi2_grid_ee, header="101x101 grid, corresponding to (uV, dV) values between -1 and 1. 
Flavor-conserving ee NSI.") Explanation: Calculate the (minimum) chi-squared on a grid and save to file: End of explanation pl.figure(figsize=(6,6)) #pl.contourf(DL, UL, delta_chi2_grid, levels=[0,1,2,3,4,5,6,7,8,9,10],cmap="Blues") pl.contourf(DL, UL, delta_chi2_grid_ee, levels=[0,4.6],cmap="Blues") #levels=[0,4.60] #pl.colorbar() pl.plot(0.0, 0.0,'k+', markersize=12.0, label="Standard Model") pl.plot(BF[1], BF[0], 'ro', label="Best fit") #pl.plot(-0.25, 0.5, 'ro') pl.ylabel(r"$\epsilon_{ee}^{uV}$", fontsize=22.0) pl.xlabel(r"$\epsilon_{ee}^{dV}$" ,fontsize=22.0) pl.title(r"$90\%$ CL allowed regions", fontsize=16.0) pl.legend(frameon=False, fontsize=12, numpoints=1) pl.savefig("plots/COHERENT_NSI_ee.pdf", bbox_inches="tight") pl.show() Explanation: Plot the 90% allowed regions: End of explanation def NSI_corr_changing(eps_uV, eps_dV, A, Z): SIN2THETAW = 0.2387 #Calculate standard weak nuclear charge (squared) Qsq = 4.0*((A - Z)*(-0.5) + Z*(0.5 - 2*SIN2THETAW))**2 #Calculate the modified nuclear charge from NSI Qsq_NSI = Qsq + 4.0*((A-Z)*(eps_uV + 2.0*eps_dV) + Z*(2.0*eps_uV + eps_dV))**2 return Qsq_NSI/Qsq def deltachisq_NSI_emu(eps_uV, eps_dV): #NB: bins_I and bins_Cs are calculated further up in the script (they are the SM signal prediction) N_sig_I = (N_SM_I)*NSI_corr_changing(eps_uV, eps_dV, A_I, Z_I) #Now signal events from Caesium N_sig_Cs = (N_SM_Cs)*NSI_corr_changing(eps_uV, eps_dV, A_Cs, Z_Cs) #Number of signal events N_NSI = np.sum(N_sig_I + N_sig_Cs) return deltachi2_Nsig(N_NSI) Explanation: Flavour-changing NSI ($e\mu$) Now the correction to the CEvNS rate from flavor-changing NSI ($e\mu$-type): End of explanation Ngrid = 101 ulist = np.linspace(-1.0, 1.0, Ngrid) dlist = np.linspace(-1.0, 1.0, Ngrid) UL, DL = np.meshgrid(ulist, dlist) delta_chi2_grid_emu = 0.0*UL #Not very elegant loop for i in tqdm(range(Ngrid)): for j in range(Ngrid): delta_chi2_grid_emu[i,j] = deltachisq_NSI_emu(UL[i,j], DL[i,j]) #Find best-fit point ind_BF = np.argmin(delta_chi2_grid_emu) BF = [UL.flatten()[ind_BF], DL.flatten()[ind_BF]] print("Best fit point: ", BF) np.savetxt("results/COHERENT_NSI_deltachi2_emu.txt", delta_chi2_grid_emu, header="101x101 grid, corresponding to (uV, dV) values between -1 and 1.") pl.figure(figsize=(6,6)) #pl.contourf(DL, UL, delta_chi2_grid, levels=[0,1,2,3,4,5,6,7,8,9,10],cmap="Blues") pl.contourf(DL, UL, delta_chi2_grid_emu, levels=[0,4.6],cmap="Blues") #levels=[0,4.60] #pl.colorbar() pl.plot(0.0, 0.0,'k+', markersize=12.0, label="Standard Model") pl.plot(BF[1], BF[0], 'ro', label="Best fit") #pl.plot(-0.25, 0.5, 'ro') pl.ylabel(r"$\epsilon_{e\mu}^{uV}$", fontsize=22.0) pl.xlabel(r"$\epsilon_{e\mu}^{dV}$" ,fontsize=22.0) pl.title(r"$90\%$ CL allowed regions", fontsize=16.0) pl.legend(frameon=False, fontsize=12, numpoints=1) pl.savefig("plots/COHERENT_NSI_emu.pdf", bbox_inches="tight") pl.show() Explanation: Calculate delta-chisquared over a grid and save to file End of explanation def deltachisq_NSI_etau(eps_uV, eps_dV): #NB: bins_I and bins_Cs are calculated further up in the script (they are the SM signal prediction) #Signal events from Iodine (with NSI correction only applying to electron neutrino events) N_sig_I = (N_SM_I + (NSI_corr_changing(eps_uV, eps_dV, A_I, Z_I) - 1.0)*bins_I_e) #Now signal events from Caesium N_sig_Cs = (N_SM_Cs + (NSI_corr_changing(eps_uV, eps_dV, A_Cs, Z_Cs) - 1.0)*bins_Cs_e) #Number of signal events N_NSI = np.sum(N_sig_I + N_sig_Cs) return deltachi2_Nsig(N_NSI) Ngrid = 101 ulist = np.linspace(-1.0, 1.0, Ngrid) dlist = np.linspace(-1.0, 
1.0, Ngrid) UL, DL = np.meshgrid(ulist, dlist) delta_chi2_grid_etau = 0.0*UL #Not very elegant loop for i in tqdm(range(Ngrid)): for j in range(Ngrid): delta_chi2_grid_etau[i,j] = deltachisq_NSI_etau(UL[i,j], DL[i,j]) #Find best-fit point ind_BF = np.argmin(delta_chi2_grid_etau) BF = [UL.flatten()[ind_BF], DL.flatten()[ind_BF]] print("Best fit point: ", BF) np.savetxt("results/COHERENT_NSI_deltachi2_etau.txt", delta_chi2_grid_etau, header="101x101 grid, corresponding to (uV, dV) values between -1 and 1.") pl.figure(figsize=(6,6)) #pl.contourf(DL, UL, delta_chi2_grid, levels=[0,1,2,3,4,5,6,7,8,9,10],cmap="Blues") pl.contourf(DL, UL, delta_chi2_grid_etau, levels=[0,4.6],cmap="Blues") #levels=[0,4.60] #pl.colorbar() pl.plot(0.0, 0.0,'k+', markersize=12.0, label="Standard Model") pl.plot(BF[1], BF[0], 'ro', label="Best fit") #pl.plot(-0.25, 0.5, 'ro') pl.ylabel(r"$\epsilon_{e\tau}^{uV}$", fontsize=22.0) pl.xlabel(r"$\epsilon_{e\tau}^{dV}$" ,fontsize=22.0) pl.title(r"$90\%$ CL allowed regions", fontsize=16.0) pl.legend(frameon=False, fontsize=12, numpoints=1) pl.savefig("plots/COHERENT_NSI_etau.pdf", bbox_inches="tight") pl.show() Explanation: Flavour-changing NSI ($e\tau$) Finally, allowed regions for Flavour-changing NSI ($e\tau$-type) End of explanation #Calculate the number of neutrino magnetic moment scattering events #assuming a universal magnetic moment (in units of 1e-12 mu_B) diffRate_mag = np.vectorize(CEvNS.differentialRate_magnetic) dRdPE_mag = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*diffRate_mag(x/PEperkeV, A_Cs, Z_Cs, 1e-12)\ + f_I*diffRate_mag(x/PEperkeV, A_I, Z_I, 1e-12)) N_mag = quad(dRdPE_mag, 0, 50)[0] print("Number of magnetic moment signal events (for mu_nu = 1e-12 mu_B):", N_mag) def deltachisq_mag(mu_nu): #Signal events is sum of standard CEvNS + magnetic moment events N_sig = np.sum(N_SM_tot) + N_mag*(mu_nu/1e-12)**2 return deltachi2_Nsig(N_sig) Explanation: Limits on the neutrino magnetic moment Now let's calculate a limit on the neutrino magnetic moment (again, from a crude single-bin $\chi^2$). 
End of explanation Ngrid = 501 maglist = np.logspace(-12, -6, Ngrid) deltachi2_list_mag = 0.0*maglist #Not very elegant loop for i in tqdm(range(Ngrid)): deltachi2_list_mag[i] = deltachisq_mag(maglist[i]) upper_limit = maglist[deltachi2_list_mag > 2.706][0] print("90% upper limit: ", upper_limit) Explanation: Scan over a grid: End of explanation pl.figure(figsize=(6,6)) pl.semilogx(maglist, deltachi2_list_mag, linewidth=2.0) #pl.ylim(0, 25) pl.axhline(2.706, linestyle='--', color='k') pl.axvline(upper_limit, linestyle=':', color='k') pl.text(1e-11, 3, "90% CL") pl.ylabel(r"$\Delta \chi^2$") pl.xlabel(r"Neutrino magnetic moment, $\mu_{\nu} / \mu_B$") pl.savefig("plots/COHERENT_magnetic.pdf", bbox_inches="tight") pl.show() Explanation: Do some plotting: End of explanation def tabulate_rate( m_med): vector_rate = lambda x, gsq: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs,gsq,m_med)\ + f_I*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_I, Z_I, gsq,m_med)) alpha = 1.0 PE_min = 4.0 PE_max = 50.0 Nvals = 500 PEvals = np.logspace(np.log10(PE_min), np.log10(PE_max),Nvals) Rvals_A = [np.sqrt(vector_rate(PEvals[i], 0)) for i in range(Nvals)] Rvals_B = [(1.0/(4.0*alpha*Rvals_A[i]))*(vector_rate(PEvals[i], alpha) - vector_rate(PEvals[i], -alpha)) for i in range(Nvals)] tabrate_A = InterpolatedUnivariateSpline(PEvals, Rvals_A, k = 1) tabrate_B = InterpolatedUnivariateSpline(PEvals, Rvals_B, k = 1) return tabrate_A, tabrate_B def N_sig_vector(gsq, m_med): integrand = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs,gsq,m_med)\ + f_I*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_I, Z_I, gsq,m_med)) xlist = np.linspace(4,50,100) integ_vals = np.vectorize(integrand)(xlist) return np.trapz(integ_vals, xlist) def N_sig_vector_tab(gsq, tabrate_A, tabrate_B): integrand = lambda x: (tabrate_A(x) + tabrate_B(x)*gsq)**2.0 xlist = np.linspace(4,50,100) integ_vals = np.vectorize(integrand)(xlist) return np.trapz(integ_vals, xlist) #return quad(integrand, 4.0, 50, epsabs=0.01)[0] def tabulate_Nsig(tabrate_A, tabrate_B): N_A = N_sig_vector_tab(0, tabrate_A, tabrate_B) N_C = 0.5*(N_sig_vector_tab(1.0, tabrate_A, tabrate_B) + N_sig_vector_tab(-1.0, tabrate_A, tabrate_B))- N_A N_B = N_sig_vector_tab(1.0, tabrate_A, tabrate_B) - N_A - N_C return N_A, N_B, N_C def N_sig_fulltab(gsq, Nsig_A, Nsig_B, Nsig_C): return Nsig_A + gsq*Nsig_B + gsq**2*Nsig_C #Calculate the number of signal events for a 1000 MeV Z', with coupling 1e-4 by doing: rate_A, rate_B = tabulate_rate(1000) N_A, N_B,N_C = tabulate_Nsig(rate_A, rate_B) #N_sig_vector_tab(1e-4, rate_A, rate_B) N_sig_fulltab(1e-4, N_A, N_B, N_C) Explanation: Limits on new vector mediators First, let's calculate the total number of signal events at a given mediator mass and coupling... 
It takes a while to recalculate the number of signal events for each mediator mass and coupling, so we'll do some rescaling and interpolation trickery: End of explanation gsq_list = np.append(np.logspace(0, 2, 100),1e20) m_list = np.sort(np.append(np.logspace(-2, 4,49), [1e-6,1e8])) #Need to search for the limit in a narrow band of coupling values g_upper = 1e-11*(50**2+m_list**2) g_lower = 1e-13*(50**2+m_list**2) deltachi2_vec_grid = np.zeros((51, 101)) for i in tqdm(range(len(m_list))): rate_A, rate_B = tabulate_rate(m_list[i]) N_A, N_B,N_C = tabulate_Nsig(rate_A, rate_B) for j, gsq in enumerate(gsq_list): N_sig = N_sig_fulltab(gsq*g_lower[i], N_A, N_B, N_C) deltachi2_vec_grid[i, j] = deltachi2_Nsig(N_sig) mgrid, ggrid = np.meshgrid(m_list, gsq_list, indexing='ij') ggrid *= 1e-13*(50**2 + mgrid**2) np.savetxt("results/COHERENT_Zprime.txt", np.c_[mgrid.flatten(), ggrid.flatten(), deltachi2_vec_grid.flatten()]) pl.figure(figsize=(6,6)) pl.loglog(m_list, g_upper, 'k--') pl.loglog(m_list, g_lower, 'k--') pl.contourf(mgrid, ggrid, deltachi2_vec_grid, levels=[2.7,1e10],cmap="Blues") pl.ylim(1e-10, 1e5) #pl.colorbar() pl.xlabel(r"$m_{Z'}$ [MeV]") pl.ylabel(r"$g_{Z'}^2$") pl.title("Blue region (and above) is excluded...", fontsize=12) pl.savefig("plots/COHERENT_Zprime.pdf") pl.show() Explanation: Now we scan over a grid in $g^2$ and $m_V$ to calculate the $\chi^2$ at each point: End of explanation def calc_Nsig_scalar(m_med): scalar_rate = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*CEvNS.differentialRate_scalar(x/PEperkeV, A_Cs, Z_Cs,1,m_med)\ + f_I*CEvNS.differentialRate_scalar(x/PEperkeV, A_I, Z_I, 1,m_med)) xlist = np.linspace(4,50,100) integ_vals = np.vectorize(scalar_rate)(xlist) return np.trapz(integ_vals, xlist) #return quad(scalar_rate, PE_min, PE_max)[0] Explanation: Limits on a new scalar mediator Finally, let's look at limits on the couplings of a new scalar mediator $\phi$. We start by calculating the contribution to the number of signal events for a given mediator mass (this can be rescaled by the coupling $g_\phi^4$ later): End of explanation m_list = np.logspace(-3, 7,50) gsq_list = np.logspace(0, 4, 50) #Again, need to search in a specific range of coupling values to find the limit... g_upper = 1e-10*(50**2+m_list**2) g_lower = 1e-14*(50**2+m_list**2) deltachi2_scal_grid = np.zeros((len(m_list), len(gsq_list))) for i in tqdm(range(len(m_list))): Nsig_scalar = calc_Nsig_scalar(m_list[i]) for j in range(len(gsq_list)): deltachi2_scal_grid[i,j] = deltachi2_Nsig(np.sum(N_SM_tot) + Nsig_scalar*(gsq_list[j]*g_lower[i])**2) mgrid, ggrid = np.meshgrid(m_list, gsq_list, indexing='ij') ggrid *= 1e-14*(50**2+mgrid**2) np.savetxt("results/COHERENT_scalar.txt", np.c_[mgrid.flatten(), ggrid.flatten(), deltachi2_scal_grid.flatten()]) pl.figure(figsize=(6,6)) pl.loglog(m_list, g_upper, 'k--') pl.loglog(m_list, g_lower, 'k--') pl.contourf(mgrid, ggrid, deltachi2_scal_grid, levels=[2.7,1e10],cmap="Blues") #pl.colorbar() pl.xlabel(r"$m_{\phi}$ [MeV]") pl.ylabel(r"$g_{\phi}^2$") pl.title("Blue region (and above) is excluded...", fontsize=12) pl.savefig("plots/COHERENT_scalar.pdf") pl.show() Explanation: Now grid-scan to get the $\Delta \chi^2$: End of explanation
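Written out, the rescaling implemented in NSI_corr and NSI_corr_changing above is the ratio of the modified to the Standard Model coherent weak charge, with sin^2(theta_W) = 0.2387 as in the code. For a nucleus with Z protons and A - Z neutrons:

Q_{\rm SM}^2 = 4\left[(A-Z)\left(-\tfrac{1}{2}\right) + Z\left(\tfrac{1}{2} - 2\sin^2\theta_W\right)\right]^2

Q_{ee}^2 = 4\left[(A-Z)\left(-\tfrac{1}{2} + \epsilon_{ee}^{uV} + 2\epsilon_{ee}^{dV}\right) + Z\left(\tfrac{1}{2} - 2\sin^2\theta_W + 2\epsilon_{ee}^{uV} + \epsilon_{ee}^{dV}\right)\right]^2

Q_{e\alpha}^2 = Q_{\rm SM}^2 + 4\left[(A-Z)\left(\epsilon_{e\alpha}^{uV} + 2\epsilon_{e\alpha}^{dV}\right) + Z\left(2\epsilon_{e\alpha}^{uV} + \epsilon_{e\alpha}^{dV}\right)\right]^2, \qquad \alpha = \mu, \tau

The predicted counts are the Standard Model bins scaled by Q^2 / Q_{\rm SM}^2, applied to the electron-neutrino contribution only in the ee and e-tau cases and to all flavours in the e-mu case, which is exactly what deltachisq_NSI_ee, deltachisq_NSI_emu and deltachisq_NSI_etau feed into deltachi2_Nsig.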
2,451
Given the following text description, write Python code to implement the functionality described below step by step Description: Counting Colonies with scikit-image Step1: Load in the plate image Step2: Construct a mask to remove the plate itself Step3: Creates a mask that is False if pixel is inside the plate and True if something is outside the plate. Apply mask (Note cool different way to apply the mask). Step5: Now check out scikit-image Lay some ground work Step6: Caveat LOG and DOG methods assume you are looking for bright spots. We could improve by inverting image. (but it was slow for demo) Get rid of big blobs Write a function that gets rid of any blob with a radius bigger than 15. Each blob is a tuple Step7: Now get the color of each blob Step8: Write a function that only takes a blob with an R channel less than 100
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from PIL import Image from skimage.feature import blob_dog, blob_log, blob_doh from skimage.color import rgb2gray from skimage.draw import circle Explanation: Counting Colonies with scikit-image End of explanation image = np.array(Image.open("img/colonies.jpg")) plt.imshow(image) Explanation: Load in the plate image End of explanation center = np.array(image.shape[:2])/2 cutoff = 550 sq_cutoff = cutoff**2 mask = np.zeros(image.shape[:2],dtype=np.bool) for i in range(image.shape[0]): d_i = (i - center[0])**2 for j in range(image.shape[1]): d_j = (j - center[1])**2 # If this pixel is too far away from center mask it if d_i + d_j > sq_cutoff: mask[i,j] = True Explanation: Construct a mask to remove the plate itself End of explanation fig, ax = plt.subplots(1,2) ax[0].imshow(image) image[mask,:] = 255 ax[1].imshow(image) Explanation: Creates a mask that is False if pixel is inside the plate and True if something is outside the plate. Apply mask (Note cool different way to apply the mask). End of explanation # Convert image to grayscale image_gray = rgb2gray(image) plt.imshow(image_gray,cmap="gray") def plot_blobs(img,blobs): Plot a set of blobs on an image. fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.imshow(img, interpolation='nearest') for blob in blobs: y, x, r = blob c = plt.Circle((x, y), r, color="red", linewidth=2, fill=False) ax.add_patch(c) # blob_log blobs_log = blob_log(image_gray, max_sigma=30, num_sigma=10, threshold=.1) blobs_log[:, 2] = blobs_log[:, 2] * np.sqrt(2) plot_blobs(image,blobs_log) # blob_dog blobs_dog = blob_dog(image_gray, max_sigma=30, threshold=.1) blobs_dog[:, 2] = blobs_dog[:, 2] * np.sqrt(2) plot_blobs(image,blobs_dog) # blob_doh blobs_doh = blob_doh(image_gray, max_sigma=30, threshold=.01) plot_blobs(image,blobs_doh) Explanation: Now check out scikit-image Lay some ground work End of explanation ## KEY def filter_blobs(blobs,r_cutoff=15): new_blobs = [] for b in blobs: if b[2] < r_cutoff: new_blobs.append(b) return new_blobs new_blobs = filter_blobs(blobs_doh) plot_blobs(image,new_blobs) Explanation: Caveat LOG and DOG methods assume you are looking for bright spots. We could improve by inverting image. (but it was slow for demo) Get rid of big blobs Write a function that gets rid of any blob with a radius bigger than 15. 
Each blob is a tuple: blob1 = (y,x,r) End of explanation def get_blob_color(image,blob): # Grab circle center (a,b) and radius (r) a, b, r = blob # Draw circle for blob circle_mask = circle(a,b,r) # Create mask of False over whole image mask = np.zeros(image.shape[0:2],dtype=np.bool) mask[circle_mask] = True num_pixels = np.sum(mask) red = np.sum(image[mask,0])/num_pixels green = np.sum(image[mask,1])/num_pixels blue = np.sum(image[mask,2])/num_pixels return red, green, blue blob_colors = [] for b in new_blobs: blob_colors.append(get_blob_color(image,b)) Explanation: Now get the color of each blob End of explanation ## KEY num_not_red = 0 num_red = 0 for b in blob_colors: if b[0] < 100: num_not_red += 1 else: num_red += 1 print("Num not red",num_not_red) print("Num red",num_red) def plot_colored_results(img,blobs,colors): fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.imshow(img, interpolation='nearest') for i, blob in enumerate(blobs): y, x, r = blob if colors[i][0] < 100: color = "green" else: color = "gray" c = plt.Circle((x, y), r, color=color, linewidth=2, fill=False) ax.add_patch(c) plt.imshow(image) plot_colored_results(image,new_blobs,blob_colors) Explanation: Write a function that only takes a blob with an R channel less than 100 End of explanation
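The caveat above notes that blob_log and blob_dog look for bright spots and that inverting the image could help, at the cost of speed. A minimal sketch of that variant, using only names already defined in this analysis and assuming image_gray from rgb2gray (float values in [0, 1]) so the dark colonies become bright after inversion:

# Invert the grayscale plate so colonies appear as bright blobs to blob_log / blob_dog.
# Parameters mirror the earlier blob_log call and would likely need re-tuning.
image_gray_inv = 1.0 - image_gray
blobs_log_inv = blob_log(image_gray_inv, max_sigma=30, num_sigma=10, threshold=.1)
blobs_log_inv[:, 2] = blobs_log_inv[:, 2] * np.sqrt(2)  # convert sigma to an approximate radius
plot_blobs(image, blobs_log_inv)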
2,452
Given the following text description, write Python code to implement the functionality described below step by step Description: Higgs Boson Analysis with ATLAS Open Data This is an example analysis of the Higgs boson detection via the decay channel H &rarr; ZZ* &rarr; 4l From the decay products measured at the ATLAS experiment and provided as open data, you will be able to produce a histogram, and from there you can infer the invariant mass of the Higgs boson. Code Step1: Apply basic cuts More details on the cuts (filters applied to the event data) in the reference ATLAS paper on the discovery of the Higgs boson (mostly Section 4 and 4.1) Step2: Compute the invariant mass This computes the 4-vectors sum for the 4-lepton system using formulas from special relativity. See also http Step4: Note on sparkhistogram Use this to define the computeHistogram function if you cannot pip install sparkhistogram
Python Code: # Run this if you need to install Apache Spark (PySpark) # !pip install pyspark # Install sparkhistogram # Note: if you cannot install the package, create the computeHistogram # function as detailed at the end of this notebook. !pip install sparkhistogram # Run this to download the dataset # It is a small file (200 KB), this exercise is meant mostly to show the Spark API # See further details at https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics !wget https://sparkdltrigger.web.cern.ch/sparkdltrigger/ATLAS_Higgs_opendata/Data_4lep.parquet # Start the Spark Session # This uses local mode for simplicity # the use of findspark is optional # import findspark # findspark.init("/home/luca/Spark/spark-3.3.0-bin-hadoop3") from pyspark.sql import SparkSession spark = (SparkSession.builder .appName("H_ZZ_4Lep") .master("local[*]") .getOrCreate() ) # Read data with the candidate events df_events = spark.read.parquet("Data_4lep.parquet") df_events.printSchema() # Count the number of events before cuts (filter) print(f"Number of events: {df_events.count()}") Explanation: Higgs Boson Analysis with ATLAS Open Data This is an example analysis of the Higgs boson detection via the decay channel H &rarr; ZZ* &rarr; 4l From the decay products measured at the ATLAS experiment and provided as open data, you will be able to produce a histogram, and from there you can infer the invariant mass of the Higgs boson. Code: it is based on the original work at ATLAS outreach notebooks Data: from the 13TeV ATLAS opendata Physics: See ATLAS paper on the discovery of the Higgs boson (mostly Section 4 and 4.1) See also: https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics Author and contact: [email protected] March, 2022 H &rarr; ZZ* &rarr; 4l analsys End of explanation # Apply filters to the input data # only events with 4 leptons in the input data # cut on lepton charge # paper: "selecting two pairs of isolated leptons, each of which is comprised of two leptons with the same flavour and opposite charge" df_events = df_events.filter("lep_charge[0] + lep_charge[1] + lep_charge[2] + lep_charge[3] == 0") # cut on lepton type # paper: "selecting two pairs of isolated leptons, each of which is comprised of two leptons with the same flavour and opposite charge" df_events = df_events.filter("lep_type[0] + lep_type[1] + lep_type[2] + lep_type[3] in (44, 48, 52)") print(f"Number of events after applying cuts: {df_events.count()}") Explanation: Apply basic cuts More details on the cuts (filters applied to the event data) in the reference ATLAS paper on the discovery of the Higgs boson (mostly Section 4 and 4.1) End of explanation # This computes the 4-vectors sum for the 4-lepton system df_4lep = df_events.selectExpr( "lep_pt[0] * cos(lep_phi[0]) + lep_pt[1] * cos(lep_phi[1]) + lep_pt[2] * cos(lep_phi[2]) + lep_pt[3] * cos(lep_phi[3]) as Px", "lep_pt[0] * sin(lep_phi[0]) + lep_pt[1] * sin(lep_phi[1]) + lep_pt[2] * sin(lep_phi[2]) + lep_pt[3] * sin(lep_phi[3]) as Py", "lep_pt[0] * sinh(lep_eta[0]) + lep_pt[1] * sinh(lep_eta[1]) + lep_pt[2] * sinh(lep_eta[2]) + lep_pt[3] * sinh(lep_eta[3]) as Pz", "lep_E[0] + lep_E[1] + lep_E[2] + lep_E[3] as E" ) df_4lep.show(5) df_4lep_invmass = df_4lep.selectExpr("sqrt(E * E - ( Px * Px + Py * Py + Pz * Pz))/1e3 as invmass_GeV") df_4lep_invmass.show(5) # This defines the DataFrame transformation to compute the histogram of invariant mass # The result is a histogram with (energy) bin values and event counts foreach bin # Requires sparkhistogram # See 
https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md from sparkhistogram import computeHistogram # histogram parameters min_val = 80 max_val = 250 num_bins = (max_val - min_val) / 5.0 # use the helper function computeHistogram in the package sparkhistogram histogram_data = computeHistogram(df_4lep_invmass, "invmass_GeV", min_val, max_val, num_bins) # The action toPandas() here triggers the computation. # Histogram data is fetched into the driver as a Pandas Dataframe. %time histogram_data_pandas=histogram_data.toPandas() import numpy as np # Computes statistical error on the data (histogram) histogram_data_stat_errors = np.sqrt(histogram_data_pandas) # This plots the data histogram with error bars import matplotlib.pyplot as plt plt.style.use('seaborn-darkgrid') plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]}) f, ax = plt.subplots() x = histogram_data_pandas["value"] y = histogram_data_pandas["count"] err = histogram_data_stat_errors["count"] # scatter plot ax.plot(x, y, marker='o', color='red', linewidth=0) #ax.errorbar(x, y, err, fmt = 'ro') # histogram with error bars ax.bar(x, y, width = 5.0, yerr = err, capsize = 5, linewidth = 0.2, ecolor='blue', fill=False) ax.set_xlim(min_val-2, max_val) ax.set_xlabel('$m_{4lep}$ (GeV)') ax.set_ylabel('Number of Events / bucket_size = 5 GeV') ax.set_title("Distribution of the 4-Lepton Invariant Mass") # Label for the Z ang Higgs spectrum peaks txt_opts = {'horizontalalignment': 'left', 'verticalalignment': 'center', 'transform': ax.transAxes} plt.text(0.10, 0.86, "Z boson, mass = 91 GeV", **txt_opts) plt.text(0.27, 0.55, "Higgs boson, mass = 125 GeV", **txt_opts) # Add energy and luminosity plt.text(0.60, 0.92, "ATLAS open data, for education", **txt_opts) plt.text(0.60, 0.87, '$\sqrt{s}$=13 TeV,$\int$L dt = 10 fb$^{-1}$', **txt_opts) plt.show() spark.stop() Explanation: Compute the invariant mass This computes the 4-vectors sum for the 4-lepton system using formulas from special relativity. See also http://edu.itp.phys.ethz.ch/hs10/ppp1/2010_11_02.pdf and https://en.wikipedia.org/wiki/Invariant_mass End of explanation def computeHistogram(df: "DataFrame", value_col: str, min: float, max: float, bins: int) -> "DataFrame": This is a dataframe function to compute the count/frequecy histogram of a column Parameters ---------- df: the dataframe with the data to compute value_col: column name on which to compute the histogram min: minimum value in the histogram max: maximum value in the histogram bins: number of histogram buckets to compute Output DataFrame ---------------- bucket: the bucket number, range from 1 to bins (included) value: midpoint value of the given bucket count: number of values in the bucket step = (max - min) / bins # this will be used to fill in for missing buckets, i.e. buckets with no corresponding values df_buckets = spark.sql(f"select id+1 as bucket from range({bins})") histdf = (df .selectExpr(f"width_bucket({value_col}, {min}, {max}, {bins}) as bucket") .groupBy("bucket") .count() .join(df_buckets, "bucket", "right_outer") # add missing buckets and remove buckets out of range .selectExpr("bucket", f"{min} + (bucket - 1/2) * {step} as value", # use center value of the buckets "nvl(count, 0) as count") # buckets with no values will have a count of 0 .orderBy("bucket") ) return histdf Explanation: Note on sparkhistogram Use this to define the computeHistogram function if you cannot pip install sparkhistogram End of explanation
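The selectExpr steps above are a direct transcription of the special-relativity formulas the notebook points to: each lepton's momentum is rebuilt from its transverse momentum p_T, azimuthal angle phi and pseudorapidity eta, the components are summed over the four leptons, and the invariant mass follows (natural units, c = 1); the /1e3 in the code then converts MeV to GeV:

P_x = \sum_{i=1}^{4} p_{T,i}\cos\phi_i, \qquad
P_y = \sum_{i=1}^{4} p_{T,i}\sin\phi_i, \qquad
P_z = \sum_{i=1}^{4} p_{T,i}\sinh\eta_i, \qquad
E = \sum_{i=1}^{4} E_i

m_{4\ell} = \sqrt{E^2 - \left(P_x^2 + P_y^2 + P_z^2\right)}

In the resulting histogram, a Higgs candidate shows up as the enhancement near m_{4l} of about 125 GeV, next to the Z peak at about 91 GeV, matching the labels drawn on the plot.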
2,453
Given the following text description, write Python code to implement the functionality described below step by step Description: make the train_pivot, duplicate exist when index = ['Cliente','Producto'] for each cliente & producto, first find its most common Agencia_ID, Canal_ID, Ruta_SAK Step1: make pivot table of test Step2: groupby use Agencia_ID, Ruta_SAK, Cliente_ID, Producto_ID Step3: if predict week 8, use data from 3,4,5,6,7 if predict week 9, use data from 3,4,5,6,7 Step4: data for predict week [34567----9], time plus 2 week Step5: data for predict week 8&9, time plus 1 week train_45678 for 8+1 =9 Step6: train_34567 7+1 = 8 Step7: concat train_pivot_45678_to_9 & train_pivot_34567_to_8 to perform t_plus_1, train_data is over Step8: prepare for test data, for week 10, we use 5,6,7,8,9 Step9: begin predict for week 11 train_3456 for 6+2 = 8 Step10: train_4567 for 7 + 2 = 9 Step11: concat Step12: for test data week 11, we use 6,7,8,9 Step13: over Step14: create time feature Step15: fit mean feature on target Step16: add dummy feature Step17: add product feature Step18: add town feature Step19: begin xgboost training Step20: for 1 week later cv rmse 0.451181 with dummy canal, time regr, cv rmse 0.450972 without dummy canal, time regr, cv rmse 0.4485676 without dummy canal, time regr, producto info cv rmse 0.4487434 without dummy canal, time regr, producto info, cliente_per_town for 2 week later cv rmse 0.4513236 without dummy canal, time regr, producto info
Python Code: agencia_for_cliente_producto = train_dataset[['Cliente_ID','Producto_ID' ,'Agencia_ID']].groupby(['Cliente_ID', 'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index() canal_for_cliente_producto = train_dataset[['Cliente_ID', 'Producto_ID','Canal_ID']].groupby(['Cliente_ID', 'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index() ruta_for_cliente_producto = train_dataset[['Cliente_ID', 'Producto_ID','Ruta_SAK']].groupby(['Cliente_ID', 'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index() gc.collect() agencia_for_cliente_producto.to_pickle('agencia_for_cliente_producto.csv') canal_for_cliente_producto.to_pickle('canal_for_cliente_producto.csv') ruta_for_cliente_producto.to_pickle('ruta_for_cliente_producto.csv') agencia_for_cliente_producto = pd.read_pickle('agencia_for_cliente_producto.csv') canal_for_cliente_producto = pd.read_pickle('canal_for_cliente_producto.csv') ruta_for_cliente_producto = pd.read_pickle('ruta_for_cliente_producto.csv') # train_dataset['log_demand'] = train_dataset['Demanda_uni_equil'].apply(np.log1p) pivot_train = pd.pivot_table(data= train_dataset[['Cliente_ID','Producto_ID','log_demand','Semana']], values='log_demand', index=['Cliente_ID','Producto_ID'], columns=['Semana'], aggfunc=np.mean,fill_value = 0).reset_index() pivot_train.head() pivot_train = pd.merge(left = pivot_train, right = agencia_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID']) pivot_train = pd.merge(left = pivot_train, right = canal_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID']) pivot_train = pd.merge(left = pivot_train, right = ruta_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID']) pivot_train.to_pickle('pivot_train_with_zero.pickle') pivot_train = pd.read_pickle('pivot_train_with_zero.pickle') pivot_train.to_pickle('pivot_train_with_nan.pickle') pivot_train = pd.read_pickle('pivot_train_with_nan.pickle') pivot_train = pivot_train.rename(columns={3: 'Sem3', 4: 'Sem4',5: 'Sem5', 6: 'Sem6',7: 'Sem7', 8: 'Sem8',9: 'Sem9'}) pivot_train.head() pivot_train.columns.values Explanation: make the train_pivot, duplicate exist when index = ['Cliente','Producto'] for each cliente & producto, first find its most common Agencia_ID, Canal_ID, Ruta_SAK End of explanation test_dataset = pd.read_csv('origin/test.csv') test_dataset.head() test_dataset[test_dataset['Semana'] == 10].shape test_dataset[test_dataset['Semana'] == 11].shape pivot_test = pd.merge(left=pivot_train, right = test_dataset[['id','Cliente_ID','Producto_ID','Semana']], on =['Cliente_ID','Producto_ID'],how = 'inner' ) pivot_test.head() pivot_test_new = pd.merge(pivot_train[['Cliente_ID', 'Producto_ID', 'Sem3', 'Sem4', 'Sem5', 'Sem6', 'Sem7', 'Sem8', 'Sem9']],right = test_dataset, on = ['Cliente_ID','Producto_ID'],how = 'right') pivot_test_new.head() pivot_test_new.to_pickle('pivot_test.pickle') pivot_test.to_pickle('pivot_test.pickle') pivot_test = pd.read_pickle('pivot_test.pickle') pivot_test.head() Explanation: make pivot table of test End of explanation train_dataset.head() import itertools col_list = ['Agencia_ID', 'Ruta_SAK', 'Cliente_ID', 'Producto_ID'] all_combine = itertools.combinations(col_list,2) list_2element_combine = [list(tuple) for tuple in all_combine] col_1elm_2elm = col_list + list_2element_combine col_1elm_2elm train_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy() Explanation: groupby use Agencia_ID, Ruta_SAK, Cliente_ID, Producto_ID End of explanation def 
categorical_useful(train_dataset,pivot_train): # if is_train: # train_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy() # elif is_train == False: train_dataset_test = train_dataset.copy() log_demand_by_agen = train_dataset_test[['Agencia_ID','log_demand']].groupby('Agencia_ID').mean().reset_index() log_demand_by_ruta = train_dataset_test[['Ruta_SAK','log_demand']].groupby('Ruta_SAK').mean().reset_index() log_demand_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').mean().reset_index() log_demand_by_producto = train_dataset_test[['Producto_ID','log_demand']].groupby('Producto_ID').mean().reset_index() log_demand_by_agen_ruta = train_dataset_test[['Agencia_ID', 'Ruta_SAK', 'log_demand']].groupby(['Agencia_ID', 'Ruta_SAK']).mean().reset_index() log_demand_by_agen_cliente = train_dataset_test[['Agencia_ID', 'Cliente_ID', 'log_demand']].groupby(['Agencia_ID', 'Cliente_ID']).mean().reset_index() log_demand_by_agen_producto = train_dataset_test[['Agencia_ID', 'Producto_ID', 'log_demand']].groupby(['Agencia_ID', 'Producto_ID']).mean().reset_index() log_demand_by_ruta_cliente = train_dataset_test[['Ruta_SAK', 'Cliente_ID', 'log_demand']].groupby(['Ruta_SAK', 'Cliente_ID']).mean().reset_index() log_demand_by_ruta_producto = train_dataset_test[['Ruta_SAK', 'Producto_ID', 'log_demand']].groupby(['Ruta_SAK', 'Producto_ID']).mean().reset_index() log_demand_by_cliente_producto = train_dataset_test[['Cliente_ID', 'Producto_ID', 'log_demand']].groupby(['Cliente_ID', 'Producto_ID']).mean().reset_index() log_demand_by_cliente_producto_agen = train_dataset_test[[ 'Cliente_ID','Producto_ID','Agencia_ID','log_demand']].groupby(['Cliente_ID', 'Agencia_ID','Producto_ID']).mean().reset_index() log_sum_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').sum().reset_index() ruta_freq_semana = train_dataset[['Semana','Ruta_SAK']].groupby(['Ruta_SAK']).count().reset_index() clien_freq_semana = train_dataset[['Semana','Cliente_ID']].groupby(['Cliente_ID']).count().reset_index() agen_freq_semana = train_dataset[['Semana','Agencia_ID']].groupby(['Agencia_ID']).count().reset_index() prod_freq_semana = train_dataset[['Semana','Producto_ID']].groupby(['Producto_ID']).count().reset_index() pivot_train = pd.merge(left = pivot_train,right = ruta_freq_semana, how = 'left', on = ['Ruta_SAK']).rename(columns={'Semana': 'ruta_freq'}) pivot_train = pd.merge(left = pivot_train,right = clien_freq_semana, how = 'left', on = ['Cliente_ID']).rename(columns={'Semana': 'clien_freq'}) pivot_train = pd.merge(left = pivot_train,right = agen_freq_semana, how = 'left', on = ['Agencia_ID']).rename(columns={'Semana': 'agen_freq'}) pivot_train = pd.merge(left = pivot_train,right = prod_freq_semana, how = 'left', on = ['Producto_ID']).rename(columns={'Semana': 'prod_freq'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_agen, how = 'left', on = ['Agencia_ID']).rename(columns={'log_demand': 'agen_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_ruta, how = 'left', on = ['Ruta_SAK']).rename(columns={'log_demand': 'ruta_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_cliente, how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_producto, how = 'left', on = ['Producto_ID']).rename(columns={'log_demand': 'producto_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = 
log_demand_by_agen_ruta, how = 'left', on = ['Agencia_ID', 'Ruta_SAK']).rename(columns={'log_demand': 'agen_ruta_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_agen_cliente, how = 'left', on = ['Agencia_ID', 'Cliente_ID']).rename(columns={'log_demand': 'agen_cliente_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_agen_producto, how = 'left', on = ['Agencia_ID', 'Producto_ID']).rename(columns={'log_demand': 'agen_producto_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_ruta_cliente, how = 'left', on = ['Ruta_SAK', 'Cliente_ID']).rename(columns={'log_demand': 'ruta_cliente_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_ruta_producto, how = 'left', on = ['Ruta_SAK', 'Producto_ID']).rename(columns={'log_demand': 'ruta_producto_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_cliente_producto, how = 'left', on = ['Cliente_ID', 'Producto_ID']).rename(columns={'log_demand': 'cliente_producto_for_log_de'}) pivot_train = pd.merge(left = pivot_train, right = log_sum_by_cliente, how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_sum'}) pivot_train = pd.merge(left = pivot_train, right = log_demand_by_cliente_producto_agen, how = 'left', on = ['Cliente_ID', 'Producto_ID', 'Agencia_ID']).rename(columns={'log_demand': 'cliente_producto_agen_for_log_sum'}) pivot_train['corr'] = pivot_train['producto_for_log_de'] * pivot_train['cliente_for_log_de'] / train_dataset_test['log_demand'].median() return pivot_train def define_time_features(df, to_predict = 't_plus_1' , t_0 = 8): if(to_predict == 't_plus_1' ): df['t_min_1'] = df['Sem'+str(t_0-1)] if(to_predict == 't_plus_2' ): df['t_min_6'] = df['Sem'+str(t_0-6)] df['t_min_2'] = df['Sem'+str(t_0-2)] df['t_min_3'] = df['Sem'+str(t_0-3)] df['t_min_4'] = df['Sem'+str(t_0-4)] df['t_min_5'] = df['Sem'+str(t_0-5)] if(to_predict == 't_plus_1' ): df['t1_min_t2'] = df['t_min_1'] - df['t_min_2'] df['t1_min_t3'] = df['t_min_1'] - df['t_min_3'] df['t1_min_t4'] = df['t_min_1'] - df['t_min_4'] df['t1_min_t5'] = df['t_min_1'] - df['t_min_5'] if(to_predict == 't_plus_2' ): df['t2_min_t6'] = df['t_min_2'] - df['t_min_6'] df['t3_min_t6'] = df['t_min_3'] - df['t_min_6'] df['t4_min_t6'] = df['t_min_4'] - df['t_min_6'] df['t5_min_t6'] = df['t_min_5'] - df['t_min_6'] df['t2_min_t3'] = df['t_min_2'] - df['t_min_3'] df['t2_min_t4'] = df['t_min_2'] - df['t_min_4'] df['t2_min_t5'] = df['t_min_2'] - df['t_min_5'] df['t3_min_t4'] = df['t_min_3'] - df['t_min_4'] df['t3_min_t5'] = df['t_min_3'] - df['t_min_5'] df['t4_min_t5'] = df['t_min_4'] - df['t_min_5'] return df def lin_regr(row, to_predict, t_0, semanas_numbers): row = row.copy() row.index = semanas_numbers row = row.dropna() if(len(row>2)): X = np.ones(shape=(len(row), 2)) X[:,1] = row.index y = row.values regr = linear_model.LinearRegression() regr.fit(X, y) if(to_predict == 't_plus_1'): return regr.predict([[1,t_0+1]])[0] elif(to_predict == 't_plus_2'): return regr.predict([[1,t_0+2]])[0] else: return None def lin_regr_features(pivot_df,to_predict, semanas_numbers,t_0): pivot_df = pivot_df.copy() semanas_names = ['Sem%i' %i for i in semanas_numbers] columns = ['Sem%i' %i for i in semanas_numbers] columns.append('Producto_ID') pivot_grouped = pivot_df[columns].groupby('Producto_ID').aggregate('mean') pivot_grouped['LR_prod'] = np.zeros(len(pivot_grouped)) pivot_grouped['LR_prod'] = pivot_grouped[semanas_names].apply(lin_regr, axis = 1, to_predict = 
to_predict, t_0 = t_0, semanas_numbers = semanas_numbers ) pivot_df = pd.merge(pivot_df, pivot_grouped[['LR_prod']], how='left', left_on = 'Producto_ID', right_index=True) pivot_df['LR_prod_corr'] = pivot_df['LR_prod'] * pivot_df['cliente_for_log_sum'] / 100 return pivot_df cliente_tabla = pd.read_csv('origin/cliente_tabla.csv') town_state = pd.read_csv('origin/town_state.csv') town_state['town_id'] = town_state['Town'].str.split() town_state['town_id'] = town_state['Town'].str.split(expand = True) def add_pro_info(dataset): train_basic_feature = dataset[['Cliente_ID','Producto_ID','Agencia_ID']].copy() train_basic_feature.drop_duplicates(inplace = True) cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' ) # print cliente_per_town.shape cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' ) # print cliente_per_town.shape cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index() # print cliente_per_town_count.head() cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','town_id','Agencia_ID']], cliente_per_town_count,on = 'town_id',how = 'inner') # print cliente_per_town_count_final.head() cliente_per_town_count_final.drop_duplicates(inplace = True) dataset_final = pd.merge(dataset,cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente','Agencia_ID']], on = ['Cliente_ID','Producto_ID','Agencia_ID'],how = 'left') return dataset_final pre_product = pd.read_csv('preprocessed_products.csv',index_col = 0) pre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce') pre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce') pre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce') def add_product(dataset): dataset = pd.merge(dataset,pre_product[['ID','weight','weight_per_piece','pieces']], left_on = 'Producto_ID',right_on = 'ID',how = 'left') return dataset Explanation: if predict week 8, use data from 3,4,5,6,7 if predict week 9, use data from 3,4,5,6,7 End of explanation train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy() train_pivot_34567_to_9 = pivot_train.loc[(pivot_train['Sem9'].notnull()),:].copy() train_pivot_34567_to_9 = categorical_useful(train_34567,train_pivot_34567_to_9) del train_34567 gc.collect() train_pivot_34567_to_9 = define_time_features(train_pivot_34567_to_9, to_predict = 't_plus_2' , t_0 = 9) train_pivot_34567_to_9 = lin_regr_features(train_pivot_34567_to_9,to_predict ='t_plus_2', semanas_numbers = [3,4,5,6,7],t_0 = 9) train_pivot_34567_to_9['target'] = train_pivot_34567_to_9['Sem9'] train_pivot_34567_to_9.drop(['Sem8','Sem9'],axis =1,inplace = True) #add cum_sum train_pivot_cum_sum = train_pivot_34567_to_9[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1) train_pivot_34567_to_9.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True) train_pivot_34567_to_9 = pd.concat([train_pivot_34567_to_9,train_pivot_cum_sum],axis =1) train_pivot_34567_to_9 = train_pivot_34567_to_9.rename(columns={'Sem3': 't_m_6_cum', 'Sem4': 't_m_5_cum','Sem5': 't_m_4_cum', 'Sem6': 't_m_3_cum','Sem7': 't_m_2_cum'}) # add geo_info train_pivot_34567_to_9 = add_pro_info(train_pivot_34567_to_9) #add product info train_pivot_34567_to_9 = add_product(train_pivot_34567_to_9) train_pivot_34567_to_9.drop(['ID'],axis = 1,inplace = True) gc.collect() train_pivot_34567_to_9.head() Explanation: 
data for predict week [34567----9], time plus 2 week End of explanation train_45678 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7,8]), :].copy() train_pivot_45678_to_9 = pivot_train.loc[(pivot_train['Sem9'].notnull()),:].copy() train_pivot_45678_to_9 = categorical_useful(train_45678,train_pivot_45678_to_9) del train_45678 gc.collect() train_pivot_45678_to_9 = define_time_features(train_pivot_45678_to_9, to_predict = 't_plus_1' , t_0 = 9) train_pivot_45678_to_9 = lin_regr_features(train_pivot_45678_to_9,to_predict ='t_plus_1', semanas_numbers = [4,5,6,7,8],t_0 = 8) train_pivot_45678_to_9['target'] = train_pivot_45678_to_9['Sem9'] train_pivot_45678_to_9.drop(['Sem3','Sem9'],axis =1,inplace = True) #add cum_sum train_pivot_cum_sum = train_pivot_45678_to_9[['Sem4','Sem5','Sem6','Sem7','Sem8']].cumsum(axis = 1) train_pivot_45678_to_9.drop(['Sem4','Sem5','Sem6','Sem7','Sem8'],axis =1,inplace = True) train_pivot_45678_to_9 = pd.concat([train_pivot_45678_to_9,train_pivot_cum_sum],axis =1) train_pivot_45678_to_9 = train_pivot_45678_to_9.rename(columns={'Sem4': 't_m_5_cum', 'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum','Sem8': 't_m_1_cum'}) # add geo_info train_pivot_45678_to_9 = add_pro_info(train_pivot_45678_to_9) #add product info train_pivot_45678_to_9 = add_product(train_pivot_45678_to_9) train_pivot_45678_to_9.drop(['ID'],axis = 1,inplace = True) gc.collect() train_pivot_45678_to_9.head() train_pivot_45678_to_9.columns.values train_pivot_45678_to_9.to_csv('train_pivot_45678_to_9.csv') train_pivot_45678_to_9 = pd.read_csv('train_pivot_45678_to_9.csv',index_col = 0) train_pivot_45678_to_9.to_csv('train_pivot_45678_to_9_new.csv') train_pivot_45678_to_9 = pd.read_csv('train_pivot_45678_to_9_new.csv',index_col = 0) Explanation: data for predict week 8&9, time plus 1 week train_45678 for 8+1 =9 End of explanation train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy() train_pivot_34567_to_8 = pivot_train.loc[(pivot_train['Sem8'].notnull()),:].copy() train_pivot_34567_to_8 = categorical_useful(train_34567,train_pivot_34567_to_8) del train_34567 gc.collect() train_pivot_34567_to_8 = define_time_features(train_pivot_34567_to_8, to_predict = 't_plus_1' , t_0 = 8) train_pivot_34567_to_8 = lin_regr_features(train_pivot_34567_to_8,to_predict = 't_plus_1', semanas_numbers = [3,4,5,6,7],t_0 = 7) train_pivot_34567_to_8['target'] = train_pivot_34567_to_8['Sem8'] train_pivot_34567_to_8.drop(['Sem8','Sem9'],axis =1,inplace = True) #add cum_sum train_pivot_cum_sum = train_pivot_34567_to_8[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1) train_pivot_34567_to_8.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True) train_pivot_34567_to_8 = pd.concat([train_pivot_34567_to_8,train_pivot_cum_sum],axis =1) train_pivot_34567_to_8 = train_pivot_34567_to_8.rename(columns={'Sem3': 't_m_5_cum','Sem4': 't_m_4_cum', 'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum', 'Sem7': 't_m_1_cum'}) # add product_info train_pivot_34567_to_8 = add_pro_info(train_pivot_34567_to_8) #add product train_pivot_34567_to_8 = add_product(train_pivot_34567_to_8) train_pivot_34567_to_8.drop(['ID'],axis = 1,inplace = True) gc.collect() train_pivot_34567_to_8.head() train_pivot_34567_to_8.columns.values train_pivot_34567_to_8.to_csv('train_pivot_34567_to_8.csv') train_pivot_34567_to_8 = pd.read_csv('train_pivot_34567_to_8.csv',index_col = 0) gc.collect() Explanation: train_34567 7+1 = 8 End of explanation train_pivot_xgb_time1 = pd.concat([train_pivot_45678_to_9, 
train_pivot_34567_to_8],axis = 0,copy = False) train_pivot_xgb_time1.columns.values train_pivot_xgb_time1.shape train_pivot_xgb_time1.to_csv('train_pivot_xgb_time1_44fea.csv') train_pivot_xgb_time1.to_csv('train_pivot_xgb_time1.csv') del train_pivot_xgb_time1 del train_pivot_45678_to_9 del train_pivot_34567_to_8 gc.collect() Explanation: concat train_pivot_45678_to_9 & train_pivot_34567_to_8 to perform t_plus_1, train_data is over End of explanation pivot_test.head() pivot_test_week10 = pivot_test.loc[pivot_test['sem10_sem11'] == 10] pivot_test_week10.reset_index(drop=True,inplace = True) pivot_test_week10.head() pivot_test_week10.shape train_56789 = train_dataset.loc[train_dataset['Semana'].isin([5,6,7,8,9]), :].copy() train_pivot_56789_to_10 = pivot_test_week10.copy() train_pivot_56789_to_10 = categorical_useful(train_56789,train_pivot_56789_to_10) del train_56789 gc.collect() train_pivot_56789_to_10 = define_time_features(train_pivot_56789_to_10, to_predict = 't_plus_1' , t_0 = 10) train_pivot_56789_to_10 = lin_regr_features(train_pivot_56789_to_10,to_predict ='t_plus_1' , semanas_numbers = [5,6,7,8,9],t_0 = 9) train_pivot_56789_to_10.drop(['Sem3','Sem4'],axis =1,inplace = True) #add cum_sum train_pivot_cum_sum = train_pivot_56789_to_10[['Sem5','Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1) train_pivot_56789_to_10.drop(['Sem5','Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True) train_pivot_56789_to_10 = pd.concat([train_pivot_56789_to_10,train_pivot_cum_sum],axis =1) train_pivot_56789_to_10 = train_pivot_56789_to_10.rename(columns={'Sem5': 't_m_5_cum', 'Sem6': 't_m_4_cum','Sem7': 't_m_3_cum', 'Sem8': 't_m_2_cum','Sem9': 't_m_1_cum'}) # add product_info train_pivot_56789_to_10 = add_pro_info(train_pivot_56789_to_10) # train_pivot_56789_to_10 = add_product(train_pivot_56789_to_10) train_pivot_56789_to_10.drop(['ID'],axis =1,inplace = True) train_pivot_56789_to_10.head() train_pivot_56789_to_10.columns.values train_pivot_56789_to_10.to_pickle('train_pivot_56789_to_10_44fea.pickle') Explanation: prepare for test data, for week 10, we use 5,6,7,8,9 End of explanation train_3456 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6]), :].copy() train_pivot_3456_to_8 = pivot_train.loc[(pivot_train['Sem8'].notnull()),:].copy() train_pivot_3456_to_8 = categorical_useful(train_3456,train_pivot_3456_to_8) del train_3456 gc.collect() train_pivot_3456_to_8 = define_time_features(train_pivot_3456_to_8, to_predict = 't_plus_2' , t_0 = 8) #notice that the t_0 means different train_pivot_3456_to_8 = lin_regr_features(train_pivot_3456_to_8,to_predict = 't_plus_2', semanas_numbers = [3,4,5,6],t_0 = 6) train_pivot_3456_to_8['target'] = train_pivot_3456_to_8['Sem8'] train_pivot_3456_to_8.drop(['Sem7','Sem8','Sem9'],axis =1,inplace = True) #add cum_sum train_pivot_cum_sum = train_pivot_3456_to_8[['Sem3','Sem4','Sem5','Sem6']].cumsum(axis = 1) train_pivot_3456_to_8.drop(['Sem3','Sem4','Sem5','Sem6'],axis =1,inplace = True) train_pivot_3456_to_8 = pd.concat([train_pivot_3456_to_8,train_pivot_cum_sum],axis =1) train_pivot_3456_to_8 = train_pivot_3456_to_8.rename(columns={'Sem4': 't_m_4_cum', 'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum', 'Sem3': 't_m_5_cum'}) # add product_info train_pivot_3456_to_8 = add_pro_info(train_pivot_3456_to_8) train_pivot_3456_to_8 = add_product(train_pivot_3456_to_8) train_pivot_3456_to_8.drop(['ID'],axis =1,inplace = True) train_pivot_3456_to_8.head() train_pivot_3456_to_8.columns.values train_pivot_3456_to_8.to_csv('train_pivot_3456_to_8.csv') Explanation: begin predict for week 11 
train_3456 for 6+2 = 8 End of explanation train_4567 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7]), :].copy() train_pivot_4567_to_9 = pivot_train.loc[(pivot_train['Sem9'].notnull()),:].copy() train_pivot_4567_to_9 = categorical_useful(train_4567,train_pivot_4567_to_9) del train_4567 gc.collect() train_pivot_4567_to_9 = define_time_features(train_pivot_4567_to_9, to_predict = 't_plus_2' , t_0 = 9) #notice that the t_0 means different train_pivot_4567_to_9 = lin_regr_features(train_pivot_4567_to_9,to_predict = 't_plus_2', semanas_numbers = [4,5,6,7],t_0 = 7) train_pivot_4567_to_9['target'] = train_pivot_4567_to_9['Sem9'] train_pivot_4567_to_9.drop(['Sem3','Sem8','Sem9'],axis =1,inplace = True) #add cum_sum train_pivot_cum_sum = train_pivot_4567_to_9[['Sem7','Sem4','Sem5','Sem6']].cumsum(axis = 1) train_pivot_4567_to_9.drop(['Sem7','Sem4','Sem5','Sem6'],axis =1,inplace = True) train_pivot_4567_to_9 = pd.concat([train_pivot_4567_to_9,train_pivot_cum_sum],axis =1) train_pivot_4567_to_9 = train_pivot_4567_to_9.rename(columns={'Sem4': 't_m_5_cum', 'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum'}) # add product_info train_pivot_4567_to_9 = add_pro_info(train_pivot_4567_to_9) train_pivot_4567_to_9 = add_product(train_pivot_4567_to_9) train_pivot_4567_to_9.drop(['ID'],axis =1,inplace = True) train_pivot_4567_to_9.head() train_pivot_4567_to_9.columns.values train_pivot_4567_to_9.to_csv('train_pivot_4567_to_9.csv') Explanation: train_4567 for 7 + 2 = 9 End of explanation train_pivot_xgb_time2 = pd.concat([train_pivot_3456_to_8, train_pivot_4567_to_9],axis = 0,copy = False) train_pivot_xgb_time2.columns.values train_pivot_xgb_time2.shape train_pivot_xgb_time2.to_csv('train_pivot_xgb_time2_38fea.csv') train_pivot_xgb_time2 = pd.read_csv('train_pivot_xgb_time2.csv',index_col = 0) train_pivot_xgb_time2.head() del train_pivot_3456_to_8 del train_pivot_4567_to_9 del train_pivot_xgb_time2 del train_pivot_34567_to_8 del train_pivot_45678_to_9 del train_pivot_xgb_time1 gc.collect() Explanation: concat End of explanation pivot_test_week11 = pivot_test_new.loc[pivot_test_new['Semana'] == 11] pivot_test_week11.reset_index(drop=True,inplace = True) pivot_test_week11.head() pivot_test_week11.shape train_6789 = train_dataset.loc[train_dataset['Semana'].isin([6,7,8,9]), :].copy() train_pivot_6789_to_11 = pivot_test_week11.copy() train_pivot_6789_to_11 = categorical_useful(train_6789,train_pivot_6789_to_11) del train_6789 gc.collect() train_pivot_6789_to_11 = define_time_features(train_pivot_6789_to_11, to_predict = 't_plus_2' , t_0 = 11) train_pivot_6789_to_11 = lin_regr_features(train_pivot_6789_to_11,to_predict ='t_plus_2' , semanas_numbers = [6,7,8,9],t_0 = 9) train_pivot_6789_to_11.drop(['Sem3','Sem4','Sem5'],axis =1,inplace = True) #add cum_sum train_pivot_cum_sum = train_pivot_6789_to_11[['Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1) train_pivot_6789_to_11.drop(['Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True) train_pivot_6789_to_11 = pd.concat([train_pivot_6789_to_11,train_pivot_cum_sum],axis =1) train_pivot_6789_to_11 = train_pivot_6789_to_11.rename(columns={'Sem6': 't_m_5_cum', 'Sem7': 't_m_4_cum', 'Sem8': 't_m_3_cum','Sem9': 't_m_2_cum'}) # add product_info train_pivot_6789_to_11 = add_pro_info(train_pivot_6789_to_11) train_pivot_6789_to_11 = add_product(train_pivot_6789_to_11) train_pivot_6789_to_11.drop(['ID'],axis = 1,inplace = True) train_pivot_6789_to_11.head() train_pivot_6789_to_11.shape train_pivot_6789_to_11.to_pickle('train_pivot_6789_to_11_new.pickle') Explanation: 
for test data week 11, we use 6,7,8,9 End of explanation % time pivot_train_categorical_useful = categorical_useful(train_dataset,pivot_train,is_train = True) % time pivot_train_categorical_useful = categorical_useful(train_dataset,pivot_train,is_train = True) pivot_train_categorical_useful_train.to_csv('pivot_train_categorical_useful_with_nan.csv') pivot_train_categorical_useful_train = pd.read_csv('pivot_train_categorical_useful_with_nan.csv',index_col = 0) pivot_train_categorical_useful_train.head() Explanation: over End of explanation pivot_train_categorical_useful.head() pivot_train_categorical_useful_time = define_time_features(pivot_train_categorical_useful, to_predict = 't_plus_1' , t_0 = 8) pivot_train_categorical_useful_time.head() pivot_train_categorical_useful_time.columns Explanation: create time feature End of explanation # Linear regression features pivot_train_categorical_useful_time_LR = lin_regr_features(pivot_train_categorical_useful_time, semanas_numbers = [3,4,5,6,7]) pivot_train_categorical_useful_time_LR.head() pivot_train_categorical_useful_time_LR.columns pivot_train_categorical_useful_time_LR.to_csv('pivot_train_categorical_useful_time_LR.csv') pivot_train_categorical_useful_time_LR = pd.read_csv('pivot_train_categorical_useful_time_LR.csv',index_col = 0) pivot_train_categorical_useful_time_LR.head() Explanation: fit mean feature on target End of explanation # pivot_train_canal = pd.get_dummies(pivot_train_categorical_useful_train['Canal_ID']) # pivot_train_categorical_useful_train = pivot_train_categorical_useful_train.join(pivot_train_canal) # pivot_train_categorical_useful_train.head() Explanation: add dummy feature End of explanation %ls pre_product = pd.read_csv('preprocessed_products.csv',index_col = 0) pre_product.head() pre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce') pre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce') pre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce') pivot_train_categorical_useful_time_LR_weight = pd.merge(pivot_train_categorical_useful_time_LR, pre_product[['ID','weight','weight_per_piece']], left_on = 'Producto_ID',right_on = 'ID',how = 'left') pivot_train_categorical_useful_time_LR_weight.head() pivot_train_categorical_useful_time_LR_weight = pd.merge(pivot_train_categorical_useful_time_LR, pre_product[['ID','weight','weight_per_piece']], left_on = 'Producto_ID',right_on = 'ID',how = 'left') pivot_train_categorical_useful_time_LR_weight.head() pivot_train_categorical_useful_time_LR_weight.to_csv('pivot_train_categorical_useful_time_LR_weight.csv') pivot_train_categorical_useful_time_LR_weight = pd.read_csv('pivot_train_categorical_useful_time_LR_weight.csv',index_col = 0) pivot_train_categorical_useful_time_LR_weight.head() Explanation: add product feature End of explanation %cd '/media/siyuan/0009E198000CD19B/bimbo/origin' %ls cliente_tabla = pd.read_csv('cliente_tabla.csv') town_state = pd.read_csv('town_state.csv') town_state['town_id'] = town_state['Town'].str.split() town_state['town_id'] = town_state['Town'].str.split(expand = True) train_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']] cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' ) cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' ) cliente_per_town_count = 
cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index() cliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000) cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']], cliente_per_town_count,on = 'town_id',how = 'left') pivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight, cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']], on = ['Cliente_ID','Producto_ID'],how = 'left') cliente_tabla.head() town_state.head() town_state['town_id'] = town_state['Town'].str.split() town_state['town_id'] = town_state['Town'].str.split(expand = True) town_state.head() pivot_train_categorical_useful_time_LR_weight.columns.values train_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']] cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' ) cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' ) cliente_per_town.head() cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index() cliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000) cliente_per_town_count.head() cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']], cliente_per_town_count,on = 'town_id',how = 'left') cliente_per_town_count_final.head() pivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight, cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']], on = ['Cliente_ID','Producto_ID'],how = 'left') pivot_train_categorical_useful_time_LR_weight_town.head() pivot_train_categorical_useful_time_LR_weight_town.columns.values Explanation: add town feature End of explanation train_pivot_xgb_time1.columns.values train_pivot_xgb_time1 = train_pivot_xgb_time1.drop(['Cliente_ID','Producto_ID','Agencia_ID', 'Ruta_SAK','Canal_ID'],axis = 1) pivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem8'].notnull()] # pivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem9'].notnull()] pivot_train_categorical_useful_train_time_no_nan_sample = pivot_train_categorical_useful_train_time_no_nan.sample(1000000) train_feature = pivot_train_categorical_useful_train_time_no_nan_sample.drop(['Sem8','Sem9'],axis = 1) train_label = pivot_train_categorical_useful_train_time_no_nan_sample[['Sem8','Sem9']] #seperate train and test data # datasource: sparse_week_Agencia_Canal_Ruta_normalized_csr label:train_label %time train_set, valid_set, train_labels, valid_labels = train_test_split(train_feature,\ train_label, test_size=0.10) # dtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=NaN) dtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=NaN) param = {'booster':'gbtree', 'nthread': 7, 'max_depth':6, 'eta':0.2, 'silent':0, 'subsample':0.7, 'objective':'reg:linear', 'eval_metric':'rmse', 'colsample_bytree':0.7} # param = {'eta':0.1, 'eval_metric':'rmse','nthread': 8} # evallist = [(dvalid,'eval'), (dtrain,'train')] num_round = 1000 # plst = param.items() # bst = xgb.train( plst, dtrain, num_round, evallist ) cvresult = xgb.cv(param, 
dtrain, num_round, nfold=5,show_progress=True,show_stdv=False, seed = 0, early_stopping_rounds=10) print(cvresult.tail()) Explanation: begin xgboost training End of explanation # xgb.plot_importance(cvresult) Explanation: for 1 week later cv rmse 0.451181 with dummy canal, time regr, cv rmse 0.450972 without dummy canal, time regr, cv rmse 0.4485676 without dummy canal, time regr, producto info cv rmse 0.4487434 without dummy canal, time regr, producto info, cliente_per_town for 2 week later cv rmse 0.4513236 without dummy canal, time regr, producto info End of explanation
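A possible next step, sketched here under assumptions rather than taken from the original workflow: train a final booster with the round count selected by xgb.cv above, then map predictions back from the log1p scale. It assumes param, dtrain, train_feature and cvresult from the preceding cells; the DMatrix built from train_feature is only a stand-in for the real week-10/11 test features.
# Sketch (assumes param, dtrain, train_feature, cvresult from the cells above)
best_num_round = cvresult.shape[0]  # rounds kept by early stopping in xgb.cv
bst = xgb.train(param, dtrain, best_num_round)
dtest = xgb.DMatrix(train_feature)  # stand-in: replace with the prepared test features
pred_log = bst.predict(dtest)
pred_demand = np.expm1(pred_log)  # invert the log1p applied to Demanda_uni_equil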
2,454
Given the following text description, write Python code to implement the functionality described below step by step Description: Regression diagnostics This example file shows how to use a few of the statsmodels regression diagnostic tests in a real-life context. You can learn about more tests and find out more information about the tests here on the Regression Diagnostics page. Note that most of the tests described here only return a tuple of numbers, without any annotation. A full description of outputs is always included in the docstring and in the online statsmodels documentation. For presentation purposes, we use the zip(name,test) construct to pretty-print short descriptions in the examples below. Estimate a regression model
Python Code: %matplotlib inline from __future__ import print_function from statsmodels.compat import lzip import statsmodels import numpy as np import pandas as pd import statsmodels.formula.api as smf import statsmodels.stats.api as sms import matplotlib.pyplot as plt # Load data url = 'http://vincentarelbundock.github.io/Rdatasets/csv/HistData/Guerry.csv' dat = pd.read_csv(url) # Fit regression model (using the natural log of one of the regressaors) results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit() # Inspect the results print(results.summary()) Explanation: Regression diagnostics This example file shows how to use a few of the statsmodels regression diagnostic tests in a real-life context. You can learn about more tests and find out more information abou the tests here on the Regression Diagnostics page. Note that most of the tests described here only return a tuple of numbers, without any annotation. A full description of outputs is always included in the docstring and in the online statsmodels documentation. For presentation purposes, we use the zip(name,test) construct to pretty-print short descriptions in the examples below. Estimate a regression model End of explanation name = ['Jarque-Bera', 'Chi^2 two-tail prob.', 'Skew', 'Kurtosis'] test = sms.jarque_bera(results.resid) lzip(name, test) Explanation: Normality of the residuals Jarque-Bera test: End of explanation name = ['Chi^2', 'Two-tail probability'] test = sms.omni_normtest(results.resid) lzip(name, test) Explanation: Omni test: End of explanation from statsmodels.stats.outliers_influence import OLSInfluence test_class = OLSInfluence(results) test_class.dfbetas[:5,:] Explanation: Influence tests Once created, an object of class OLSInfluence holds attributes and methods that allow users to assess the influence of each observation. For example, we can compute and extract the first few rows of DFbetas by: End of explanation from statsmodels.graphics.regressionplots import plot_leverage_resid2 fig, ax = plt.subplots(figsize=(8,6)) fig = plot_leverage_resid2(results, ax = ax) Explanation: Explore other options by typing dir(influence_test) Useful information on leverage can also be plotted: End of explanation np.linalg.cond(results.model.exog) Explanation: Other plotting options can be found on the Graphics page. Multicollinearity Condition number: End of explanation name = ['Lagrange multiplier statistic', 'p-value', 'f-value', 'f p-value'] test = sms.het_breushpagan(results.resid, results.model.exog) lzip(name, test) Explanation: Heteroskedasticity tests Breush-Pagan test: End of explanation name = ['F statistic', 'p-value'] test = sms.het_goldfeldquandt(results.resid, results.model.exog) lzip(name, test) Explanation: Goldfeld-Quandt test End of explanation name = ['t value', 'p value'] test = sms.linear_harvey_collier(results) lzip(name, test) Explanation: Linearity Harvey-Collier multiplier test for Null hypothesis that the linear specification is correct: End of explanation
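As a companion to the Harvey-Collier test, a sketch of one more linearity check, the Rainbow test; this assumes linear_rainbow is exposed by statsmodels.stats.diagnostic in the installed statsmodels version and reuses the fitted results object from above.
# Sketch: Rainbow test for linearity on the same fitted model
from statsmodels.stats.diagnostic import linear_rainbow
name = ['F statistic', 'p-value']
test = linear_rainbow(results)
lzip(name, test)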
2,455
Given the following text description, write Python code to implement the functionality described below step by step Description: Least squares fitting using linfit.py This notebook demonstrates the function linfit, which I propose adding to the SciPy library. linfit is designed to be a fast, lightweight function, written entirely in Python, that only calculates only as much as the user desires, and no more. It can handle arbitrarily large data sets. What is linfit and what does it do? linfit is a function that performs least squares fitting of a straight line to an $(x,y)$ data set. It has the following features Step1: Very simple demonstration. Perform a simple linear fit without any weighting. Step2: Plot the above data and fit. Step3: Perform same fit without weighting but get estimates for uncertainties in slope and y-intercept from covariance matrix. Step4: Demonstration of a fit to data with error estimates for each data point Step5: Plot the data with error bars together with the fit. Plot the residuals in a separate graph above the data with fit. Step6: Fit the same $(x,y)$ data set but with a single value of $\sigma$ for the entire data set. Step7: Fit a huge data set. Create data set with 100000 data points. Step8: Fit straight line to the data. Step9: Plot the data together with the fit. Step10: Compare execution times of linfit and polyfit to fit a randomly generated data set of 10000 $(x,y)$ data points. On my computers, linfit is about 6 times faster than polyfit for unweighted data and about 3 times faster for weighted data. Step11: Using linear fitting routine for non-linear fitting Linear fitting with weighting can be used to fit functions that are nonlinear in the fitting parameters, provided the fitting function can be transformed into one that is linear in the fitting paramters. This can be done for exponential functions and power-law functions. This approach is illusutrated in the next two examples. Using an exponential fitting function with linfit Nuclear decay provides a convenient example of an exponential fitting function Step12: Linear and semi-log plots of the data with error bars Step13: The semi-log plot suggests we can use linfit to fit the data by taking the logarithm of the $y$ data. Taking the logarithm of the exponential fitting function gives $$\ln N = -\frac{t}{\tau} + \ln N_0\;.$$ Defining $y=\ln N$, $a=-1/\tau$, and $b=\ln N_0$, the equation takes the form $y = at+b$ and can be fit using linfit. The uncertainties $\Delta y$ are related to $\Delta N$ by taking the differential of the tranformation $y=\ln N$ Step14: Next we perform the fit on the tranformed data Step15: Extract $\tau$ and $N_0$ from fit of transformed data Step16: Extract the uncertainties in the fitting parameters $a$ and $b$. Step17: Get the uncertainties in $\tau$ and $N_0$ from the transformation equations Step18: Plot the data and fit on linear and semi-log plots
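For reference, the weighted straight-line fit described above has a standard closed-form solution; the formulas below are the textbook result for minimizing the $\chi^2$ defined in the feature list, not an excerpt from the linfit source.
$$a = \frac{S\,S_{xy} - S_x S_y}{S\,S_{xx} - S_x^2}, \qquad b = \frac{S_{xx} S_y - S_x S_{xy}}{S\,S_{xx} - S_x^2}$$
where $S=\sum_i 1/\sigma_i^2$, $S_x=\sum_i x_i/\sigma_i^2$, $S_y=\sum_i y_i/\sigma_i^2$, $S_{xx}=\sum_i x_i^2/\sigma_i^2$ and $S_{xy}=\sum_i x_i y_i/\sigma_i^2$; the corresponding parameter variances are $(\Delta a)^2 = S/(S\,S_{xx}-S_x^2)$ and $(\Delta b)^2 = S_{xx}/(S\,S_{xx}-S_x^2)$.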
Python Code: import numpy as np import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec # for unequal plot boxes from linfit import linfit Explanation: Least squares fitting using linfit.py This notebook demonstrates the function linfit, which I propose adding to the SciPy library. linfit is designed to be a fast, lightweight function, written entirely in Python, that only calculates only as much as the user desires, and no more. It can handle arbitrarily large data sets. What is linfit and what does it do? linfit is a function that performs least squares fitting of a straight line to an $(x,y)$ data set. It has the following features: linfit can fit a straight line $ax+b$ to unweighted $(x,y)$ data where the output is the slope $a$ and y-intercept $b$ that minimizes the square of the residuals $$E = \sum_{i=1}^n \left[ y_i-(ax_i+b) \right]^2$$ where $n$ is the number of data points. Optional outputs: $\Delta a$ and $\Delta b$, the respective uncertainties in the fitted values of $a$ and $b$. By default, these estimates of $\Delta a$ and $\Delta b$ use the residuals $|y_i-(ax_i+b)|$ as estimates of the uncertainties $\sigma_i$ of the data ("relative weighting"). $y_i-(ax_i+b)$, the residuals. Alternatively, linfit can fit a straight line $ax+b$ to $(x,y)$ data that is weighted using estimates of the uncertainties of the data provided as an optional keyword argument. The uncertanties can be expressed as either as (1) a single number $\sigma$ or (2) as an array of uncertainties $\sigma_i$. In the first case, the output is the slope $a$ and y-intercept $b$ that minimizes $\chi^2$, which is defined as $$\chi^2 = \frac{1}{\sigma^2}\sum_{i=0}^n \left[ y_i-(a x_i + b) \right]^2$$In the second case, the output is the slope $a$ and y-intercept $b$ that minimizes $\chi^2$, which is defined as $$\chi^2 = \sum_{i=0}^n \left[ \frac{y_i-(a x_i + b)}{\sigma_i} \right]^2$$ Optional outputs: $\chi^2/(n-2)$, the reduced value of $\chi^2$, which should be approximately equal to 1 if a straight line is a good model of the data and good error estimates $\sigma_i$ are provided. $\Delta a$ and $\Delta b$, the uncertainties $\Delta a$ and $\Delta b$ in the fitted values of $a$ and $b$. By default, these estimates of $\Delta a$ and $\Delta b$ use the residuals $|y_i-(ax_i+b)|$ as estimates of the uncertainties $\sigma_i$ of the data. $y_i-(ax_i+b)$, the residuals. linfit can also be used for nonlinear fitting for functions that can be transformed to be linear in the fitting parameters. In this case, using weighting is essential to get the fit right. Demonstrations of linfit.py Import libraries we will need. End of explanation x = np.array([0, 1, 2, 3]) y = np.array([-1, 0.2, 0.9, 2.1]) fit, cvm = linfit(x, y) print("slope = {0:0.2f}, y-intercept = {1:0.2f}".format(fit[0], fit[1])) Explanation: Very simple demonstration. Perform a simple linear fit without any weighting. End of explanation xfit = np.array([-0.2, 3.2]) yfit = fit[0]*xfit + fit[1] plt.plot(x, y, 'oC3', label="data") plt.plot(xfit, yfit, zorder=-1, label="ax+b") plt.text(-0.3, 1.1, "a={0:0.2f}\nb={1:0.2f}" .format(fit[0], fit[1]), fontsize=12) plt.legend(loc="upper left") plt.xlabel('x') plt.show() Explanation: Plot the above data and fit. 
End of explanation fit, cvm = linfit(x, y, relsigma=True) dfit = [np.sqrt(cvm[i,i]) for i in range(2)] print(u"slope = {0:0.2f} \xb1 {1:0.2f}".format(fit[0], dfit[0])) print(u"y-intercept = {0:0.2f} \xb1 {1:0.2f}".format(fit[1], dfit[1])) Explanation: Perform same fit without weighting but get estimates for uncertainties in slope and y-intercept from covariance matrix. End of explanation # data set for linear fitting x = np.array([2.3, 4.7, 7.1, 9.6, 11.7, 14.1, 16.4, 18.8, 21.1, 23.0]) y = np.array([-25., 3., 110., 110., 230., 300., 270., 320., 450., 400.]) sigmay = np.array([15., 30., 30., 40., 40., 50., 40., 30., 50., 30.]) # Fit linear data set with weighting fit, cvm, info = linfit(x, y, sigmay=sigmay, relsigma=False, return_all=True) dfit = [np.sqrt(cvm[i,i]) for i in range(2)] print(u"slope = {0:0.1f} \xb1 {1:0.1f}".format(fit[0], dfit[0])) print(u"y-intercept = {0:0.0f} \xb1 {1:0.0f}".format(fit[1], dfit[1])) Explanation: Demonstration of a fit to data with error estimates for each data point End of explanation # Open figure window for plotting data with linear fit fig = plt.figure(1, figsize=(8, 8)) gs = gridspec.GridSpec(2, 1, height_ratios=[2.5, 6]) # Bottom plot: data and fit ax1 = fig.add_subplot(gs[1]) # Plot data with error bars ax1.errorbar(x, y, yerr=sigmay, ecolor='k', mec='k', fmt='oC3', ms=6) # Plot fit (behind data) endx = 0.05 * (x.max() - x.min()) xFit = np.array([x.min() - endx, x.max() + endx]) yFit = fit[0] * xFit + fit[1] ax1.plot(xFit, yFit, '-', zorder=-1) # Print out results of fit on plot ax1.text(0.05, 0.9, # slope of fit u'slope = {0:0.1f} \xb1 {1:0.1f}'.format(fit[0], dfit[0]), ha='left', va='center', transform=ax1.transAxes) ax1.text(0.05, 0.83, # y-intercept of fit u'y-intercept = {0:0.1f} \xb1 {1:0.1f}'.format(fit[1], dfit[1]), ha='left', va='center', transform=ax1.transAxes) ax1.text(0.05, 0.76, # reduced chi-squared of fit 'redchisq = {0:0.2f}'.format(info.rchisq), ha='left', va='center', transform=ax1.transAxes) ax1.text(0.05, 0.69, # correlation coefficient of fitted slope & y-intercept 'rcov = {0:0.2f}'.format(cvm[0, 1] / (dfit[0] * dfit[1])), ha='left', va='center', transform=ax1.transAxes) # Label axes ax1.set_xlabel('time') ax1.set_ylabel('velocity') # Top plot: residuals ax2 = fig.add_subplot(gs[0]) ax2.axhline(color='gray', lw=0.5, zorder=-1) ax2.errorbar(x, info.resids, yerr=sigmay, ecolor='k', mec='k', fmt='oC3', ms=6) ax2.set_ylabel('residuals') ax2.set_ylim(-100, 150) ax2.set_yticks((-100, 0, 100)) plt.show() Explanation: Plot the data with error bars together with the fit. Plot the residuals in a separate graph above the data with fit. End of explanation sigmay0 = 34.9 fit, cvm, info = linfit(x, y, sigmay=sigmay0, relsigma=False, return_all=True) dfit = [np.sqrt(cvm[i,i]) for i in range(2)] print(u"slope = {0:0.1f} \xb1 {1:0.1f}".format(fit[0], dfit[0])) print(u"y-intercept = {0:0.0f} \xb1 {1:0.0f}".format(fit[1], dfit[1])) print(u"redchisq = {0:0.2f}".format(info.rchisq)) Explanation: Fit the same $(x,y)$ data set but with a single value of $\sigma$ for the entire data set. End of explanation def randomData(xmax, npts): x = np.random.uniform(-xmax, xmax, npts) scale = np.sqrt(xmax) a, b = scale * (np.random.rand(2)-0.5) y = a*x + b + a * scale * np.random.randn(npts) dy = a * scale * (1.0 + np.random.rand(npts)) return x, y, dy npts = 100000 x, y, dy = randomData(100., npts) Explanation: Fit a huge data set. Create data set with 100000 data points. 
End of explanation fit, cvm = linfit(x, y) slope, yint = fit Explanation: Fit straight line to the data. End of explanation xm = 0.05*(x.max()-x.min()) xfit = np.array([x.min()-xm, x.max()+xm]) yfit = slope*xfit + yint plt.plot(xfit, yfit, '-k') plt.plot(x, y, ".C3", ms=1, zorder=-1) Explanation: Plot the data together with the fit. End of explanation import timeit setup = ''' from linfit import linfit import numpy as np def randomData(xmax, npts): x = np.random.uniform(-xmax, xmax, npts) scale = np.sqrt(xmax) a, b = scale * (np.random.rand(2)-0.5) y = a*x + b + a * scale * np.random.randn(npts) dy = a * scale * (1.0 + np.random.rand(npts)) return x, y, dy npts = 100000 x, y, dy = randomData(100., npts) ''' nreps = 7 nruns = 100 linfitNOwt = min(timeit.Timer('fit, cvm = linfit(x, y)', setup=setup).repeat(nreps, nruns)) polyfitNOwt = min(timeit.Timer('slope, yint = np.polyfit(x, y, 1)', setup=setup).repeat(nreps, nruns)) print("TIME COMPARISON WITH NO WEIGHTING OF DATA") print(" linfit time = {}\n polyfit time = {}\n ratio = {}" .format(linfitNOwt, polyfitNOwt, polyfitNOwt/linfitNOwt)) linfitWT = min(timeit.Timer('slope, yint = linfit(x, y, sigmay=dy)', setup=setup).repeat(nreps, nruns)) polyfitWT = min(timeit.Timer('slope, yint = np.polyfit(x, y, 1, w=dy)', setup=setup).repeat(nreps, nruns)) print("TIME COMPARISON WITH WEIGHTING OF DATA") print(" linfit time = {}\n polyfit time = {}\n ratio = {}" .format(linfitWT, polyfitWT, polyfitWT/linfitWT)) Explanation: Compare execution times of linfit and polyfit to fit a randomly generated data set of 10000 $(x,y)$ data points. On my computers, linfit is about 6 times faster than polyfit for unweighted data and about 3 times faster for weighted data. End of explanation t = np.array([0., 32.8, 65.6, 98.4, 131.2, 164., 196.8, 229.6, 262.4, 295.2, 328., 360.8, 393.6, 426.4, 459.2, 492.]) N = np.array([5.08, 3.29, 2.23, 1.48, 1.11, 0.644, 0.476, 0.273, 0.188, 0.141, 0.0942, 0.0768, 0.0322, 0.0322, 0.0198, 0.0198]) dN = np.array([0.11, 0.09, 0.07, 0.06, 0.05, 0.04, 0.03, 0.03, 0.02, 0.02, 0.015, 0.014, 0.009, 0.009, 0.007, 0.007]) Explanation: Using linear fitting routine for non-linear fitting Linear fitting with weighting can be used to fit functions that are nonlinear in the fitting parameters, provided the fitting function can be transformed into one that is linear in the fitting paramters. This can be done for exponential functions and power-law functions. This approach is illusutrated in the next two examples. Using an exponential fitting function with linfit Nuclear decay provides a convenient example of an exponential fitting function: $N(t) = N_0 e^{-t/\tau}$. Here are the $N$ vs $t$ data together with the uncertainties $\Delta N$. End of explanation fig = plt.figure(1, figsize=(10, 3.5)) ax1 = fig.add_subplot(1,2,1) ax2 = fig.add_subplot(1,2,2) ax2.set_yscale("log") for ax in [ax1, ax2]: ax.errorbar(t, N, yerr=dN, xerr=None, fmt='oC3', ecolor='k', ms=3) ax.set_xlim(-10, 500) ax.set_xlabel('t') ax.set_ylabel('N') Explanation: Linear and semi-log plots of the data with error bars: End of explanation y = np.log(N) dy = dN/N Explanation: The semi-log plot suggests we can use linfit to fit the data by taking the logarithm of the $y$ data. Taking the logarithm of the exponential fitting function gives $$\ln N = -\frac{t}{\tau} + \ln N_0\;.$$ Defining $y=\ln N$, $a=-1/\tau$, and $b=\ln N_0$, the equation takes the form $y = at+b$ and can be fit using linfit. 
The uncertainties $\Delta y$ are related to $\Delta N$ by taking the differential of the tranformation $y=\ln N$: $$ \begin{align} \Delta y &= \left(\frac{\partial y}{\partial N}\right)\Delta N \ &= \frac{\Delta N}{N} \end{align} $$ To fit the data, we tranform the $N$ and $\Delta N$ data: End of explanation fit, cvm, info = linfit(t, y, sigmay=dy, relsigma=False, return_all=True) Explanation: Next we perform the fit on the tranformed data: End of explanation a, b = fit[0], fit[1] tau = -1.0/a N0 = np.exp(b) Explanation: Extract $\tau$ and $N_0$ from fit of transformed data End of explanation dfit = [np.sqrt(cvm[i,i]) for i in range(2)] da, db = dfit[0], dfit[1] Explanation: Extract the uncertainties in the fitting parameters $a$ and $b$. End of explanation dtau = da/(a*a) dN0 = np.exp(b)*db Explanation: Get the uncertainties in $\tau$ and $N_0$ from the transformation equations: $$ \begin{align} \tau=−1/a \quad &\Rightarrow \Delta \tau = \left|\frac{\partial \tau}{\partial a}\right|\Delta a \quad \Rightarrow \Delta \tau = \frac{\Delta a}{a^2} \ N_0=e^b \quad &\Rightarrow \Delta N_0 = \left|\frac{\partial N_0}{\partial b}\right|\Delta b \quad \Rightarrow \Delta N_0 = e^b\Delta b \ \end{align} $$ End of explanation tm = 0.05*(t.max()-t.min()) tfit = np.linspace(t.min()-tm, t.max()+tm, 50) Nfit = N0*np.exp(-tfit/tau) fig = plt.figure(1, figsize=(10, 3.5)) ax1 = fig.add_subplot(1,2,1) ax2 = fig.add_subplot(1,2,2) ax2.set_yscale("log") ax2.set_ylim(0.01, 10.) for ax in [ax1, ax2]: ax.errorbar(t, N, yerr=dN, xerr=None, fmt='oC3', ecolor='k', ms=4) ax.plot(tfit, Nfit, '-', color="gray", zorder=-1) ax.set_xlim(-10, 500) ax.set_xlabel('t') ax.set_ylabel('N') ax.text(0.95, 0.95,"$\\tau = {0:0.1f}\pm{1:0.1f}$\n$N_0 = {2:0.2f}\pm{3:0.2f}$" .format(tau, dtau, N0, dN0), fontsize=12, ha='right', va='top', transform=ax.transAxes) Explanation: Plot the data and fit on linear and semi-log plots End of explanation
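The same transformation idea covers the power-law case mentioned in the description; here is a minimal sketch with made-up data, using the same linfit interface as the exponential example.
# Sketch: power-law fit y = A * x**p via a log-log transform (illustrative data only)
xp = np.array([1., 2., 4., 8., 16., 32.])
yp = 3.0 * xp**1.5 * (1.0 + 0.02 * np.random.randn(xp.size))
dyp = 0.05 * yp
# ln y = p ln x + ln A, with transformed uncertainties d(ln y) = dy/y
fitp, cvmp = linfit(np.log(xp), np.log(yp), sigmay=dyp/yp, relsigma=False)
p_exp, A = fitp[0], np.exp(fitp[1])
print("p = {0:0.2f}, A = {1:0.2f}".format(p_exp, A))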
2,456
Given the following text description, write Python code to implement the functionality described below step by step Description: Step2: KD-trees Question 1 <img src="images/Screen Shot 2016-07-03 at 12.14.16 AM.png"> Screenshot taken from Coursera <!--TEASER_END--> Question 2 <img src="images/Screen Shot 2016-07-02 at 11.56.04 PM.png"> Screenshot taken from Coursera <!--TEASER_END--> Answer Step3: Question 3 <img src="images/Screen Shot 2016-07-02 at 11.56.09 PM.png"> Screenshot taken from Coursera <!--TEASER_END--> Answer Step4: Question 4 <img src="images/Screen Shot 2016-07-03 at 12.16.20 AM.png"> Screenshot taken from Coursera <!--TEASER_END--> Answer Step5: After split x1_split1_x2_split2_x2_split1 by 1.654, then we will have 2 more leaves. Data point 4
Python Code: import numpy as np x1 = np.array([-1.58, 0.91, -0.73, -4.22, 4.19, -0.33]) x2 = np.array([-2.01, 3.98, 4.00, 1.16, -2.02, 2.15]) x = np.vstack((x1, x2)).T x # Mid range of x1 x1_midrange = (x1.max() + x1.min())/2 x1_midrange def get_mid_range(data, column=0): Get midrange of data by column - x1: column=0 - x2: column=1 midrange = (data[:, column].max() + data[:, column].min())/2 return midrange def split_by(x, value, column=0): Split x array by value and column - x1: column=0 - x2: column=1 split1 = x[x[:, column] <= value] split2 = x[x[:, column] > value] return split1, split2 x1_midrange = get_mid_range(x) x1_split1, x1_split2 = split_by(x, x1_midrange) # Split values of x1 x1_split1 # Split values of x1 x1_split2 Explanation: KD-trees Question 1 <img src="images/Screen Shot 2016-07-03 at 12.14.16 AM.png"> Screenshot taken from Coursera <!--TEASER_END--> Question 2 <img src="images/Screen Shot 2016-07-02 at 11.56.04 PM.png"> Screenshot taken from Coursera <!--TEASER_END--> Answer End of explanation # Mid range of x2 for the 1st split # x1_split1_x2_midrange = (x1_split1[:, 1].max() + x1_split1[:, 1].min())/2 x1_split1_x2_midrange = get_mid_range(x1_split1, column=1) print x1_split1_x2_midrange # # Mid range of x2 for 2nd split x1_split2_x2_midrange = get_mid_range(x1_split2, column=1) print x1_split2_x2_midrange Explanation: Question 3 <img src="images/Screen Shot 2016-07-02 at 11.56.09 PM.png"> Screenshot taken from Coursera <!--TEASER_END--> Answer End of explanation x1_split1_x2_split1, x1_split1_x2_split2 = split_by(x1_split1, x2_x1_split1_midrange, column=1) x1_split1_x2_split1 # node still has 3 data points # continue to split x1_split1_x2_split2 x1_split1_x2_split2_midrange = get_mid_range(x1_split1_x2_split2, column=1) x1_split1_x2_split2_midrange x1_split1_x2_split2_x2_split1, x1_split1_x2_split2_x2_split2 = split_by(x1_split1_x2_split2, x1_split1_x2_split2_midrange, column=1) x1_split1_x2_split2_x2_split1 # Continue to split x1_split1_x2_split2_x2_split2 x1_split1_x2_split2_x2_split1_midrange = get_mid_range(x1_split1_x2_split2_x2_split1, column=1) x1_split1_x2_split2_x2_split1_midrange Explanation: Question 4 <img src="images/Screen Shot 2016-07-03 at 12.16.20 AM.png"> Screenshot taken from Coursera <!--TEASER_END--> Answer End of explanation x1_split2_x2_split1, x1_split2_x2_split2 = split_by(x1_split2, x1_split2_x2_midrange, column=1) x1_split2_x2_split1 x1_split2_x2_split2 Explanation: After split x1_split1_x2_split2_x2_split1 by 1.654, then we will have 2 more leaves. Data point 4: [-4.22, 1.16] will be the leaves contain the query point (-3, 1.5) Question 5 <img src="images/Screen Shot 2016-07-03 at 1.34.41 PM.png"> Screenshot taken from Coursera <!--TEASER_END--> Answer End of explanation
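To connect the tree walk back to nearest-neighbour search, a small sketch of the distance from the query point used in the questions to the leaf point found above; this is the bound a KD-tree search would compare against box distances when deciding whether to backtrack.
# Sketch: distance from the query point (-3, 1.5) to data point 4 found in its leaf
query = np.array([-3.0, 1.5])
leaf_point = np.array([-4.22, 1.16])
dist_to_leaf = np.sqrt(np.sum((query - leaf_point)**2))
print(dist_to_leaf)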
2,457
Given the following text description, write Python code to implement the functionality described below step by step Description: NWB use-case pvc-7 --- Data courtesy of Aleena Garner, Allen Institute for Brain Sciences --- Here we demonstrate how data from the NWB pvc-7 use-case can be stored in NIX files. Context Step1: Open a file and inspect its content Step2: Explore Stimulus Step3: Explore video and imaging data
Python Code: from nixio import * import numpy as np import matplotlib.pylab as plt %matplotlib inline from utils.notebook import print_stats from utils.plotting import Plotter Explanation: NWB use-case pvc-7 --- Data courtesy of Aleena Garner, Allen Institute for Brain Sciences --- Here we demonstrate how data from the NWB pvc-7 use-case can be stored in NIX files. Context: In vivo calcium imaging of layer 4 cells in mouse primary visual cortex. Two-photon images sampled @ 30 Hz Visual stimuli of sinusoidal moving gratings were presented. In this example, we use a subset of the original data file. We only use image frames 5000 to frame 6000. Image data was 10 times down-sampled. End of explanation f = File.open("data/pvc-7.nix.h5", FileMode.ReadOnly) print_stats(f.blocks) block = f.blocks[0] print_stats(block.data_arrays) print_stats(block.tags) Explanation: Open a file and inspect its content End of explanation # get recording tag recording = block.tags[0] # stimulus combinations array stimulus = recording.features[0].data # display the stimulus conditions for label in stimulus.dimensions[0].labels: print label + ' :', print '\n' # actual stimulus condition values for cmb in stimulus.data[:]: for x in cmb: print "%.2f\t" % x, print '\n' # get particular stimulus combination index = 2 print "a stimulus combination %s" % str(stimulus.data[index]) # find out when stimulus was displayed start = recording.position[index] end = recording.extent[index] print "was displayed from frame %d to frame %d" % (start, end) Explanation: Explore Stimulus End of explanation # get movie arrays from file movies = filter(lambda x: x.type == 'movie', recording.references) print_stats(movies) # get mouse image at the beginning of the selected stimulus mouse = movies[1] image_index = int(np.where(np.array(mouse.dimensions[0].ticks) > start)[0][0]) plt.imshow(mouse.data[image_index]) # get eye image at the end of the selected stimulus eye = movies[0] image_index = int(np.where(np.array(eye.dimensions[0].ticks) > end)[0][0]) plt.imshow(eye.data[image_index]) # get 2-photon image at the beginning of the selected stimulus imaging = filter(lambda x: x.type == 'imaging', recording.references)[0] image_index = int(np.where(np.array(imaging.dimensions[0].ticks) > start)[0][0]) plt.imshow(imaging.data[image_index]) # plot mouse speed in the whole window (TODO: add stimulus events) speeds = filter(lambda x: x.type == 'runspeed', recording.references)[0] p = Plotter() p.add(speeds) p.plot() f.close() Explanation: Explore video and imaging data End of explanation
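The frame-lookup pattern above (np.where on the dimension ticks) is repeated for every array; a small helper along these lines may be clearer. This is only a sketch, written under the assumption that each array's first dimension exposes .ticks exactly as in the cells above.

import numpy as np

def frame_index_at(array, t):
    """Index of the first frame whose time tick is later than t.
    Assumes array.dimensions[0] is a range dimension exposing .ticks,
    as used for the movie and imaging arrays above."""
    ticks = np.asarray(array.dimensions[0].ticks)
    return int(np.searchsorted(ticks, t, side="right"))

# Hypothetical usage, mirroring the cells above:
# plt.imshow(mouse.data[frame_index_at(mouse, start)])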
2,458
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Seaice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required Step7: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required Step8: 3.2. Ocean Freezing Point Value Is Required Step9: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required Step10: 4.2. Canonical Horizontal Resolution Is Required Step11: 4.3. Number Of Horizontal Gridpoints Is Required Step12: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required Step13: 5.2. Target Is Required Step14: 5.3. Simulations Is Required Step15: 5.4. Metrics Used Is Required Step16: 5.5. Variables Is Required Step17: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required Step18: 6.2. Additional Parameters Is Required Step19: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required Step20: 7.2. On Diagnostic Variables Is Required Step21: 7.3. Missing Processes Is Required Step22: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required Step23: 8.2. Properties Is Required Step24: 8.3. Budget Is Required Step25: 8.4. Was Flux Correction Used Is Required Step26: 8.5. Corrected Conserved Prognostic Variables Is Required Step27: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required Step28: 9.2. Grid Type Is Required Step29: 9.3. Scheme Is Required Step30: 9.4. Thermodynamics Time Step Is Required Step31: 9.5. Dynamics Time Step Is Required Step32: 9.6. Additional Details Is Required Step33: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required Step34: 10.2. Number Of Layers Is Required Step35: 10.3. Additional Details Is Required Step36: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required Step37: 11.2. Number Of Categories Is Required Step38: 11.3. 
Category Limits Is Required Step39: 11.4. Ice Thickness Distribution Scheme Is Required Step40: 11.5. Other Is Required Step41: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required Step42: 12.2. Number Of Snow Levels Is Required Step43: 12.3. Snow Fraction Is Required Step44: 12.4. Additional Details Is Required Step45: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required Step46: 13.2. Transport In Thickness Space Is Required Step47: 13.3. Ice Strength Formulation Is Required Step48: 13.4. Redistribution Is Required Step49: 13.5. Rheology Is Required Step50: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required Step51: 14.2. Thermal Conductivity Is Required Step52: 14.3. Heat Diffusion Is Required Step53: 14.4. Basal Heat Flux Is Required Step54: 14.5. Fixed Salinity Value Is Required Step55: 14.6. Heat Content Of Precipitation Is Required Step56: 14.7. Precipitation Effects On Salinity Is Required Step57: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required Step58: 15.2. Ice Vertical Growth And Melt Is Required Step59: 15.3. Ice Lateral Melting Is Required Step60: 15.4. Ice Surface Sublimation Is Required Step61: 15.5. Frazil Ice Is Required Step62: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required Step63: 16.2. Sea Ice Salinity Thermal Impacts Is Required Step64: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required Step65: 17.2. Constant Salinity Value Is Required Step66: 17.3. Additional Details Is Required Step67: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required Step68: 18.2. Constant Salinity Value Is Required Step69: 18.3. Additional Details Is Required Step70: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required Step71: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required Step72: 20.2. Additional Details Is Required Step73: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required Step74: 21.2. Formulation Is Required Step75: 21.3. Impacts Is Required Step76: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required Step77: 22.2. Snow Aging Scheme Is Required Step78: 22.3. Has Snow Ice Formation Is Required Step79: 22.4. Snow Ice Formation Scheme Is Required Step80: 22.5. Redistribution Is Required Step81: 22.6. Heat Diffusion Is Required Step82: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required Step83: 23.2. Ice Radiation Transmission Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'seaice') Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: EC-EARTH-CONSORTIUM Source ID: SANDBOX-3 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:00 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. 
Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. 
On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. 
Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. 
Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. 
Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.7. 
Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.3. 
Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation
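As an aside, the fill-in pattern above can be completed like this. Every value below is a placeholder chosen purely for illustration and does not document the actual EC-Earth sea ice configuration; the snippet reuses the DOC object created in the setup cell.

# Illustrative only: placeholder values, not the actual EC-Earth sea ice configuration.
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
DOC.set_value("TEOS-10")    # one of the valid choices listed above
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
DOC.set_value(5)            # placeholder integer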
2,459
Given the following text description, write Python code to implement the functionality described below step by step Description: Algorithms Exercise 1 Imports Step3: Word counting Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic Step5: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts. Step7: Write a function sort_word_counts that returns a list of sorted word counts Step8: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt Step9: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
Python Code: %matplotlib inline from matplotlib import pyplot as plt import numpy as np Explanation: Algorithms Exercise 1 Imports End of explanation def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'): Split a string into a list of words, removing punctuation and stop words. s = s.replace("\n", " ") for i in range(len(punctuation)): s = s.replace(punctuation[i], " ") #clean = ''.join([c.lower() for c in s if c not in punctuation]) clean = s.split(" ") # Check if stop_words is a string if stop_words != None: if (isinstance(stop_words, str)): stop_lst = stop_words.split(" ") go_words = [w.lower() for w in clean if w not in stop_lst and len(w) > 0] else: go_words = [w.lower() for w in clean if w not in stop_words and len(w) > 0] else: go_words = [w.lower() for w in clean if len(w) > 0] return(go_words) #raise NotImplementedError() assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \ ['this', 'way', 'that', 'things', 'will', 'end'] wasteland = APRIL is the cruellest month, breeding Lilacs out of the dead land, mixing Memory and desire, stirring Dull roots with spring rain. assert tokenize(wasteland, stop_words='is the of and') == \ ['april','cruellest','month','breeding','lilacs','out','dead','land', 'mixing','memory','desire','stirring','dull','roots','with','spring', 'rain'] Explanation: Word counting Write a function tokenize that takes a string of English text returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic: Split the string into lines using splitlines. Split each line into a list of words and merge the lists for each line. Use Python's builtin filter function to remove all punctuation. If stop_words is a list, remove all occurences of the words in the list. If stop_words is a space delimeted string of words, split them and remove them. Remove any remaining empty words. Make all words lowercase. End of explanation def count_words(data): Return a word count dictionary from the list of words in data. data.sort() word_dict = {} for i in range(len(data)): if i == 0 or (data[i] != data[i-1]): word_dict[data[i]] = 1 else: word_dict[data[i]] += 1 return(word_dict) #raise NotImplementedError() assert count_words(tokenize('this and the this from and a a a')) == \ {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2} Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts. End of explanation def sort_word_counts(wc): Return a list of 2-tuples of (word, count), sorted by count descending. word_tups = [(key, wc[key]) for key in wc] res = sorted(word_tups, key = lambda word: word[1], reverse = True) return(res) #raise NotImplementedError() assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \ [('a', 4), ('this', 3), ('and', 2), ('the', 1)] Explanation: Write a function sort_word_counts that return a list of sorted word counts: Each element of the list should be a (word, count) tuple. The list should be sorted by the word counts, with the higest counts coming first. To perform this sort, look at using the sorted function with a custom key and reverse argument. 
End of explanation file = open('mobydick_chapter1.txt', 'r') data = file.read() file.close() swc = sort_word_counts(count_words(tokenize(data, 'the of and a to in is it that as'))) print(len(swc)) print(swc) #raise NotImplementedError() assert swc[0]==('i',43) assert len(swc)==848 Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt: Read the file into a string. Tokenize with stop words of 'the of and a to in is it that as'. Perform a word count, the sort and save the result in a variable named swc. End of explanation # YOUR CODE HERE raise NotImplementedError() assert True # use this for grading the dotplot Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research... End of explanation
2,460
Given the following text description, write Python code to implement the functionality described below step by step Description: <small><i>The PCA section of this notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small> Dimensionality Reduction Step1: Introducing Principal Component Analysis Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset Step2: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution Step3: To see what these numbers mean, let's view them as vectors plotted on top of the data Step4: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction. The explained variance quantifies this measure of "importance" in direction. Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance Step5: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression Step6: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%! This is the sense in which "dimensionality reduction" works Step7: We could also do the same plot, using Altair and Pandas Step8: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like $$ image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots $$ What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation. The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum Step9: Here we see that with only six PCA components, we recover a reasonable approximation of the input! Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well. Choosing the Number of Components But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components Step10: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations. Other Dimensionality Reducting Routines Note that scikit-learn contains many other unsupervised dimensionality reduction routines Step11: Discussion What do you get when you apply PCA to the cocktail party problem? How would you describe the difference between maximizing variance via orthogonal features (PCA) and finding independent signals (ICA)? 
Non-negative matrix factorization NMF is like ICA in that it is trying to learn the parts of the data that make up the whole, by looking at the reconstructability of the matrix. This was originally published by Lee and Seung, "Learning the parts of objects by non-negative matrix factorization", and applied to image data below. VQ here is vector quantization, yet another dimensionality reduction method ... it's kinda like K-means, but not quite. Back to biology! Enough images and signal processing ... where is the RNA!??!? Let's apply these algorithms to some biological datasets. We'll use the 300-cell dataset (6 clusters, 50 cells each) data from the Macosko2015 paper. Rather than plotting each cell in each component, we'll look at the mean (or median) contribution of each component to the cell types.
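As a rough illustration of what a parts-based NMF decomposition looks like (a sketch only, using scikit-learn's digits data as a stand-in for the image data mentioned above, since NMF needs non-negative inputs):
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF
digits = load_digits()                                   # pixel intensities are non-negative
nmf = NMF(n_components=6, init='nndsvd', max_iter=500, random_state=0)
W = nmf.fit_transform(digits.data)                       # per-sample weights over the parts
H = nmf.components_                                      # the learned "parts" (basis images)
print(W.shape, H.shape)                                  # (1797, 6) and (6, 64)
Each image is then approximately W[i] @ H, i.e. a non-negative combination of the learned parts.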
Python Code: from __future__ import print_function, division %matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy import stats # use seaborn plotting style defaults import seaborn as sns; sns.set() Explanation: <small><i>The PCA section of this notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small> Dimensionality Reduction: Principal Component Analysis in-depth Here we'll explore Principal Component Analysis, which is an extremely useful linear dimensionality reduction technique. We'll start with our standard set of initial imports: End of explanation np.random.seed(1) X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T plt.plot(X[:, 0], X[:, 1], 'o') plt.axis('equal'); Explanation: Introducing Principal Component Analysis Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset: End of explanation from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(X) print(pca.explained_variance_) print(pca.components_) Explanation: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution: End of explanation plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5) for length, vector in zip(pca.explained_variance_, pca.components_): v = vector * 3 * np.sqrt(length) plt.plot([0, v[0]], [0, v[1]], '-k', lw=3) plt.axis('equal'); Explanation: To see what these numbers mean, let's view them as vectors plotted on top of the data: End of explanation clf = PCA(0.95) # keep 95% of variance X_trans = clf.fit_transform(X) print(X.shape) print(X_trans.shape) Explanation: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction. The explained variance quantifies this measure of "importance" in direction. Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance: End of explanation X_new = clf.inverse_transform(X_trans) plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2) plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8) plt.axis('equal'); Explanation: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression: End of explanation from sklearn.datasets import load_digits digits = load_digits() X = digits.data y = digits.target pca = PCA(2) # project from 64 to 2 dimensions Xproj = pca.fit_transform(X) print(X.shape) print(Xproj.shape) plt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('tab10', 10)) plt.colorbar(); Explanation: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%! This is the sense in which "dimensionality reduction" works: if you can approximate a data set in a lower dimension, you can often have an easier time visualizing it or fitting complicated models to the data. 
Application of PCA to Digits The dimensionality reduction might seem a bit abstract in two dimensions, but the projection and dimensionality reduction can be extremely useful when visualizing high-dimensional data. Let's take a quick look at the application of PCA to the digits data we looked at before: End of explanation from decompositionplots import plot_image_components sns.set_style('white') plot_image_components(digits.data[0]) Explanation: We could also do the same plot, using Altair and Pandas: digits_smushed = pd.DataFrame(Xproj) digits_smushed['target'] = digits.target digits_smushed.head() This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels. What do the Components Mean? PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors. The input data is represented as a vector: in the case of the digits, our data is $$ x = [x_1, x_2, x_3 \cdots] $$ but what this really means is $$ image(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots $$ If we reduce the dimensionality in the pixel space to (say) 6, we recover only a partial image: End of explanation from decompositionplots import plot_pca_interactive plot_pca_interactive(digits.data) Explanation: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like $$ image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots $$ What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation. The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum: End of explanation sns.set() pca = PCA().fit(X) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); Explanation: Here we see that with only six PCA components, we recover a reasonable approximation of the input! Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well. Choosing the Number of Components But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components: End of explanation import fig_code fig_code.cocktail_party() Explanation: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations. 
Other Dimensionality Reducting Routines Note that scikit-learn contains many other unsupervised dimensionality reduction routines: some you might wish to try are Other dimensionality reduction techniques which are useful to know about: sklearn.decomposition.PCA: Principal Component Analysis sklearn.decomposition.RandomizedPCA: extremely fast approximate PCA implementation based on a randomized algorithm sklearn.decomposition.SparsePCA: PCA variant including L1 penalty for sparsity sklearn.decomposition.FastICA: Independent Component Analysis sklearn.decomposition.NMF: non-negative matrix factorization sklearn.manifold.LocallyLinearEmbedding: nonlinear manifold learning technique based on local neighborhood geometry sklearn.manifold.IsoMap: nonlinear manifold learning technique based on a sparse graph algorithm Each of these has its own strengths & weaknesses, and areas of application. You can read about them on the scikit-learn website. Independent component analysis Here we'll learn about indepednent component analysis (ICA), a matrix decomposition method that's an alternative to PCA. Independent Component Analysis (ICA) ICA was originally created for the "cocktail party problem" for audio processing. It's an incredible feat that our brains are able to filter out all these different sources of audio, automatically! (I really like how smug that guy looks - it's really over the top) Source Cocktail party problem Given multiple sources of sound (people talking, the band playing, glasses clinking), how do you distinguish independent sources of sound? Imagine at a cocktail party you have multiple microphones stationed throughout, and you get to hear all of these different sounds. Source What if you applied PCA to the cocktail party problem? Example adapted from the excellent scikit-learn documentation. End of explanation from decompositionplots import explore_smushers explore_smushers() Explanation: Discussion What do you get when you apply PCA to the cocktail party problem? How would you describe the difference between maximizing variance via orthogonal features (PCA) and finding independent signals (ICA)? Non-negative matrix factorization NMF is like ICA in that it is trying to learn the parts of the data that make up the whole, by looking at the reconstructability of them matrix. This was originally published by Lee and Seung, "Learning the parts of objects by non-negative matrix factorization", and applied to image data below. VQ here is vector quantization (VQ), yet another dimensionality reduction method ... it's kinda like K-means but not Back to biology! Enough images and signal processing ... where is the RNA!??!? Let's apply these algorithms to some biological datasets. We'll use the 300-cell dataset (6 clusters, 50 cells each) data from the Macosko2015 paper. Rather than plotting each cell in each component, we'll look at the mean (or median) contribution of each component to the cell types. End of explanation
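For readers who want to try the cocktail-party experiment directly, a minimal version of the scikit-learn example referred to above looks roughly like this (the signals and the mixing matrix are made up for illustration):
import numpy as np
from sklearn.decomposition import FastICA, PCA
rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                          # "speaker" 1
s2 = np.sign(np.sin(3 * t))                 # "speaker" 2
S = np.c_[s1, s2] + 0.1 * rng.normal(size=(2000, 2))
A = np.array([[1.0, 0.5], [0.5, 2.0]])      # mixing matrix: the two "microphones"
X = S.dot(A.T)                              # what the microphones record
S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)  # recovers the independent sources
S_pca = PCA(n_components=2).fit_transform(X)                      # only finds orthogonal directions of max variance
Plotting S_ica against S shows the unmixed signals, while S_pca illustrates why maximizing variance is not the same as finding independent sources.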
2,461
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents 1. Z-Stage 2. Pump 3. PMT 4. MAX11210 ADC Step1: Z-Stage Step2: Pump Step3: PMT Step4: MAX11210 ADC
Python Code: import logging; logging.basicConfig(level=logging.DEBUG) import time import mr_box_peripheral_board as mrbox import serial reload(mrbox) # Try to connect to MR-Box control board. retry_count = 2 for i in xrange(retry_count): try: proxy.close() except NameError: pass try: proxy = mrbox.SerialProxy(baudrate=57600, settling_time_s=2.5) break except serial.SerialException: time.sleep(1) else: raise IOError('Could not connect to MR-Box control board.') proxy._timeout_s = 20 Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Z-Stage" data-toc-modified-id="Z-Stage-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Z-Stage</a></div><div class="lev1 toc-item"><a href="#Pump" data-toc-modified-id="Pump-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Pump</a></div><div class="lev1 toc-item"><a href="#PMT" data-toc-modified-id="PMT-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>PMT</a></div><div class="lev1 toc-item"><a href="#MAX11210-ADC" data-toc-modified-id="MAX11210-ADC-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>MAX11210 ADC</a></div> End of explanation from mr_box_peripheral_board import zstage_ui reload(zstage_ui) zstage_ui_ = zstage_ui.ZStageUI(proxy) # Display user interface for z-stage. zstage_ui_.widget Explanation: Z-Stage End of explanation from mr_box_peripheral_board import pump_ui reload(pump_ui) pump_ui_ = pump_ui.PumpUI(proxy) pump_ui_.widget Explanation: Pump End of explanation import si_prefix as si from mr_box_peripheral_board import pmt_ui reload(pmt_ui) pmt_ui_ = pmt_ui.PmtUI(proxy) pmt_ui_.widget Explanation: PMT End of explanation from mr_box_peripheral_board import max11210_adc_ui reload(max11210_adc_ui) # max11210_adc_ui_ = max11210_adc_ui.Max11210AdcUI(proxy) # max11210_adc_ui_.widget import ipywidgets as ipw INPUT_RANGE_UNIPOLAR = 1 INPUT_RANGE_BIPOLAR = 2 CLOCK_SOURCE_EXTERNAL = 1 CLOCK_SOURCE_INTERNAL = 2 FORMAT_OFFSET = 1 FORMAT_TWOS_COMPLEMENT = 2 CONVERSION_MODE_SINGLE = 1 CONVERSION_MODE_CONTINUOUS = 2 def MAX11210_begin(proxy): proxy.MAX11210_setDefault(); proxy.MAX11210_setLineFreq(60); # 60 Hz proxy.MAX11210_setInputRange(INPUT_RANGE_UNIPOLAR); proxy.MAX11210_setClockSource(CLOCK_SOURCE_INTERNAL); proxy.MAX11210_setEnableRefBuf(True); proxy.MAX11210_setEnableSigBuf(True); proxy.MAX11210_setFormat(FORMAT_OFFSET); proxy.MAX11210_setConvMode(CONVERSION_MODE_SINGLE); proxy.MAX11210_selfCal(); proxy.MAX11210_sysOffsetCal(); proxy.MAX11210_sysGainCal(); from collections import OrderedDict import pandas as pd MAX11210_begin(proxy) calibration_settings = \ pd.Series(OrderedDict([('SelfCalGain', proxy.MAX11210_getSelfCalGain()), ('SelfCalOffset', proxy.MAX11210_getSelfCalOffset()), ('SysGainCal', proxy.MAX11210_getSysGainCal()), ('SysOffsetCal', proxy.MAX11210_getSysOffsetCal())])) print '# Calibration settings #\n' print calibration_settings print '# Register statuses #\n' print '----- STAT1 -----\n' + max11210_adc_ui.format_STAT1(proxy.MAX11210_getSTAT1()) + '\n' print '----- CTRL1 -----\n' + max11210_adc_ui.format_CTRL1(proxy.MAX11210_getCTRL1()) + '\n' # print '----- CTRL2 -----\n' + str(format(proxy.MAX11210_getCTRL2(),'b')) + '\n' print '----- CTRL3 -----\n' + max11210_adc_ui.format_CTRL3(proxy.MAX11210_getCTRL3()) %matplotlib inline import datetime as dt import matplotlib as mpl import matplotlib.ticker import si_prefix as si from IPython.display import display formatter = mpl.ticker.FuncFormatter(lambda x, *args: si.si_format(x, 3)) Vref = 3.0 #Reference Voltage 3.0 V def _pmt_read(*args): # 
proxy.MAX11210_setConvMode(CONVERSION_MODE_SINGLE) proxy.MAX11210_setGain(adc_gain.value) raw_values = max11210_adc_ui.MAX11210_read(proxy, pmt_rate.value, pmt_duration.value) print 'Sampling Rate: ' + str(pmt_rate.value) + ' Hz' print 'Digital Gain: ' + str(adc_gain.value) voltage = (raw_values / (2 ** 24 - 1))*(Vref/adc_gain.value) current = voltage / 30e3 # 30 kOhm current.to_clipboard() axis = current.plot() axis.yaxis.set_major_formatter(formatter) pmt_rate = ipw.Dropdown(description='Sample rate (Hz)', options=(1, 2, 5, 10, 15, 30, 60, 120), value=1) adc_gain = ipw.Dropdown(description='Digital Gain', options=(OrderedDict([('X1',1),('X2',2),('X4',4),('X8',8),('X16',16)])), value=1) pmt_duration = ipw.FloatSlider(description='PMT duration (s)', min=1, max=1000, value=10) pmt_read = ipw.Button(description='Read PMT') pmt_read.on_click(_pmt_read) ipw.VBox([pmt_rate,adc_gain, pmt_duration, pmt_read]) Explanation: MAX11210 ADC End of explanation
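To make the code-to-current conversion above concrete, here is the same arithmetic on a single hypothetical reading (the raw code is invented; Vref, the 24-bit full scale, the gain and the 30 kOhm resistor are the values used in the notebook):
Vref = 3.0                                   # reference voltage, volts
gain = 1                                     # digital gain selected in the dropdown
raw = 8388607                                # hypothetical 24-bit ADC code, about half of full scale
voltage = (raw / float(2 ** 24 - 1)) * (Vref / gain)   # code -> volts, ~1.5 V here
current = voltage / 30e3                               # 30 kOhm sense resistor -> amps, ~5e-05 A (50 uA)
print(voltage)
print(current)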
2,462
Given the following text description, write Python code to implement the functionality described below step by step Description: Eye Drops Analysis for NIRS and Pulse Ox Writing a new notebook to analyze eye drops only Initialize and Select ROP Subject Number Step1: Baseline Average Calculation Step2: First Eye Drop Avg Every 10 Sec For 5 Minutes Step3: Second Eye Drop Avg Every 10 Sec For 5 Minutes Step4: Third Eye Drop Avg Every 10 Sec For 5 Minutes Step5: Export to CSV
Python Code: from ROPini import * #Takes a little bit, wait a while. Hour1, Minute1, Hour2, Minute2, Hour3, Minute3 = [int(x) for x in raw_input("Enter times for eye drops here: ").split()] #Syntax should be "HH MM HH MM HH MM" First time, second time, third time all in one line. #No commas or colons. W1 = datetime(Year, Month, Day, Hour1, Minute1) W2 = datetime(Year, Month, Day, Hour2, Minute2) W3 = datetime(Year, Month, Day, Hour3, Minute3) print "First Eye Drop Time:\t" + str(Hour1) + ":" +str(Minute1) print "Second Eye Drop Time:\t" + str(Hour2) + ":" +str(Minute2) print "Third Eye Drop Time:\t" + str(Hour3) + ":" +str(Minute3) Explanation: Eye Drops Analysis for NIRS and Pulse Ox Writing a new notebook to analyze eye drops only Initialize and Select ROP Subject Number End of explanation avg0NIRS = df.StO2[Y:W1].mean() avg0PI = df.PI[Y:W1].mean() avg0O2 = df.SpO2[Y:W1].mean() avg0PR = df.PR[Y:W1].mean() print 'Baseline Averages\n', 'NIRS :\t', avg0NIRS, '\nPI :\t',avg0PI, '\nSpO2 :\t',avg0O2,'\nPR :\t',avg0PR, #df.std() for standard deviation Explanation: Baseline Average Calculation End of explanation def perdeltadrop1(start, end, delta): rdrop1 = [] curr = start while curr < end: rdrop1.append(curr) curr += delta return rdrop1 dfdrop1NI = df.StO2[W1:W1+timedelta(minutes=5)] dfdrop1PI = df.PI[W1:W1+timedelta(minutes=5)] dfdrop1O2 = df.SpO2[W1:W1+timedelta(minutes=5)] dfdrop1PR = df.PR[W1:W1+timedelta(minutes=5)] windrop1 = timedelta(seconds=10) rdrop1 = perdeltadrop1(W1, W1+timedelta(minutes=5), windrop1) avgdrop1NI = Series(index = rdrop1, name = 'StO2 1st ED') avgdrop1PI = Series(index = rdrop1, name = 'PI 1st ED') avgdrop1O2 = Series(index = rdrop1, name = 'SpO2 1st ED') avgdrop1PR = Series(index = rdrop1, name = 'PR 1st ED') for i in rdrop1: avgdrop1NI[i] = dfdrop1NI[i:(i+windrop1)].mean() avgdrop1PI[i] = dfdrop1PI[i:(i+windrop1)].mean() avgdrop1O2[i] = dfdrop1O2[i:(i+windrop1)].mean() avgdrop1PR[i] = dfdrop1PR[i:(i+windrop1)].mean() resultdrops1 = concat([avgdrop1NI, avgdrop1PI, avgdrop1O2, avgdrop1PR], axis=1, join='inner') print resultdrops1 Explanation: First Eye Drop Avg Every 10 Sec For 5 Minutes End of explanation def perdeltadrop2(start, end, delta): rdrop2 = [] curr = start while curr < end: rdrop2.append(curr) curr += delta return rdrop2 dfdrop2NI = df.StO2[W2:W2+timedelta(minutes=5)] dfdrop2PI = df.PI[W2:W2+timedelta(minutes=5)] dfdrop2O2 = df.SpO2[W2:W2+timedelta(minutes=5)] dfdrop2PR = df.PR[W2:W2+timedelta(minutes=5)] windrop2 = timedelta(seconds=10) rdrop2 = perdeltadrop2(W2, W2+timedelta(minutes=5), windrop2) avgdrop2NI = Series(index = rdrop2, name = 'StO2 2nd ED') avgdrop2PI = Series(index = rdrop2, name = 'PI 2nd ED') avgdrop2O2 = Series(index = rdrop2, name = 'SpO2 2nd ED') avgdrop2PR = Series(index = rdrop2, name = 'PR 2nd ED') for i in rdrop2: avgdrop2NI[i] = dfdrop2NI[i:(i+windrop2)].mean() avgdrop2PI[i] = dfdrop2PI[i:(i+windrop2)].mean() avgdrop2O2[i] = dfdrop2O2[i:(i+windrop2)].mean() avgdrop2PR[i] = dfdrop2PR[i:(i+windrop2)].mean() resultdrops2 = concat([avgdrop2NI, avgdrop2PI, avgdrop2O2, avgdrop2PR], axis=1, join='inner') print resultdrops2 Explanation: Second Eye Drop Avg Every 10 Sec For 5 Minutes End of explanation def perdeltadrop3(start, end, delta): rdrop3 = [] curr = start while curr < end: rdrop3.append(curr) curr += delta return rdrop3 dfdrop3NI = df.StO2[W3:W3+timedelta(minutes=5)] dfdrop3PI = df.PI[W3:W3+timedelta(minutes=5)] dfdrop3O2 = df.SpO2[W3:W3+timedelta(minutes=5)] dfdrop3PR = df.PR[W3:W3+timedelta(minutes=5)] windrop3 = 
timedelta(seconds=10) rdrop3 = perdeltadrop3(W3, W3+timedelta(minutes=5), windrop3) avgdrop3NI = Series(index = rdrop3, name = 'StO2 3rd ED') avgdrop3PI = Series(index = rdrop3, name = 'PI 3rd ED') avgdrop3O2 = Series(index = rdrop3, name = 'SpO2 3rd ED') avgdrop3PR = Series(index = rdrop3, name = 'PR 3rd ED') for i in rdrop3: avgdrop3NI[i] = dfdrop3NI[i:(i+windrop3)].mean() avgdrop3PI[i] = dfdrop3PI[i:(i+windrop3)].mean() avgdrop3O2[i] = dfdrop3O2[i:(i+windrop3)].mean() avgdrop3PR[i] = dfdrop3PR[i:(i+windrop3)].mean() resultdrops3 = concat([avgdrop3NI, avgdrop3PI, avgdrop3O2, avgdrop3PR], axis=1, join='inner') print resultdrops3 Explanation: Third Eye Drop Avg Every 10 Sec For 5 Minutes End of explanation import csv import os #we change the directory that python looks at to the new place. os.chdir("/Users/John/Dropbox/LLU/ROP/Python Output Files") #csv properties class excel_tab(csv.excel): delimiter = '\t' csv.register_dialect("excel_tab", excel_tab) #CSV file with open('ROP'+BabyNumber+'EyeDrops.csv', 'w') as f: writer = csv.writer(f, dialect=excel_tab) writer.writerow([avg0NIRS, ',NIRS Start']) #NIRS data for i in rdrop1: writer.writerow([avgdrop1NI[i]]) for i in rdrop2: writer.writerow([avgdrop2NI[i]]) for i in rdrop3: writer.writerow([avgdrop3NI[i]]) writer.writerow([avg0PI, ',PI Start']) #PI data for i in rdrop1: writer.writerow([avgdrop1PI[i]]) for i in rdrop2: writer.writerow([avgdrop2PI[i]]) for i in rdrop3: writer.writerow([avgdrop3PI[i]]) writer.writerow([avg0O2, ',SpO2 Start']) #SpO2 data for i in rdrop1: writer.writerow([avgdrop1O2[i]]) for i in rdrop2: writer.writerow([avgdrop2O2[i]]) for i in rdrop3: writer.writerow([avgdrop3O2[i]]) writer.writerow([avg0PR, ',PR Start']) #PR Data for i in rdrop1: writer.writerow([avgdrop1PR[i]]) for i in rdrop2: writer.writerow([avgdrop2PR[i]]) for i in rdrop3: writer.writerow([avgdrop3PR[i]]) Explanation: Export to CSV End of explanation
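Since the three per-drop blocks above differ only in their start time, a single helper could replace them; this is just a sketch, assuming the Series and timedelta names already in scope via the ROPini import:
def window_averages(series, start, minutes=5, seconds=10):
    # Average `series` over consecutive `seconds`-long windows for `minutes` after `start`.
    win = timedelta(seconds=seconds)
    curr, out = start, {}
    while curr < start + timedelta(minutes=minutes):
        out[curr] = series[curr:curr + win].mean()
        curr += win
    return Series(out, name=series.name)
# e.g. avgdrop1NI = window_averages(df.StO2, W1), and likewise for PI, SpO2 and PR at W2 and W3.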
2,463
Given the following text description, write Python code to implement the functionality described below step by step Description: GAMES OR ADVERSARIAL SEARCH This notebook serves as supporting material for topics covered in Chapter 5 - Adversarial Search in the book Artificial Intelligence Step1: GAME REPRESENTATION To represent games we make use of the Game class, which we can subclass and override its functions to represent our own games. A helper tool is the namedtuple GameState, which in some cases can come in handy, especially when our game needs us to remember a board (like chess). GameState namedtuple GameState is a namedtuple which represents the current state of a game. It is used to help represent games whose states can't be easily represented normally, or for games that require memory of a board, like Tic-Tac-Toe. Gamestate is defined as follows Step2: Now let's get into details of all the methods in our Game class. You have to implement these methods when you create new classes that would represent your game. actions(self, state) Step3: The class TicTacToe has been inherited from the class Game. As mentioned earlier, you really want to do this. Catching bugs and errors becomes a whole lot easier. Additional methods in TicTacToe Step4: In moves, we have a nested dictionary system. The outer's dictionary has keys as the states and values the possible moves from that state (as a dictionary). The inner dictionary of moves has keys the move names and values the next state after the move is complete. Below is an example that showcases moves. We want the next state after move 'a1' from 'A', which is 'B'. A quick glance at the above image confirms that this is indeed the case. Step5: We will now take a look at the functions we need to implement. First we need to create an object of the Fig52Game class. Step6: actions Step7: result Step8: utility Step9: terminal_test Step10: to_move Step11: As a whole the class Fig52 that inherits from the class Game and overrides its functions Step12: MIN-MAX Overview This algorithm (often called Minimax) computes the next move for a player (MIN or MAX) at their current state. It recursively computes the minimax value of successor states, until it reaches terminals (the leaves of the tree). Using the utility value of the terminal states, it computes the values of parent states until it reaches the initial node (the root of the tree). It is worth noting that the algorithm works in a depth-first manner. The pseudocode can be found below Step13: Implementation In the implementation we are using two functions, max_value and min_value to calculate the best move for MAX and MIN respectively. These functions interact in an alternating recursion; one calls the other until a terminal state is reached. When the recursion halts, we are left with scores for each move. We return the max. Despite returning the max, it will work for MIN too since for MIN the values are their negative (hence the order of values is reversed, so the higher the better for MIN too). Step14: Example We will now play the Fig52 game using this algorithm. Take a look at the Fig52Game from above to follow along. It is the turn of MAX to move, and he is at state A. He can move to B, C or D, using moves a1, a2 and a3 respectively. MAX's goal is to maximize the end value. So, to make a decision, MAX needs to know the values at the aforementioned nodes and pick the greatest one. After MAX, it is MIN's turn to play. So MAX wants to know what will the values of B, C and D be after MIN plays. 
The problem then becomes what move will MIN make at B, C and D. The successor states of all these nodes are terminal states, so MIN will pick the smallest value for each node. So, for B he will pick 3 (from move b1), for C he will pick 2 (from move c1) and for D he will again pick 2 (from move d3). Let's see this in code Step15: Now MAX knows that the values for B, C and D are 3, 2 and 2 (produced by the above moves of MIN). The greatest is 3, which he will get with move a1. This is then the move MAX will make. Let's see the algorithm in full action Step16: Visualization Below we have a simple game visualization using the algorithm. After you run the command, click on the cell to move the game along. You can input your own values via a list of 27 integers. Step17: ALPHA-BETA Overview While Minimax is great for computing a move, it can get tricky when the number of game states gets bigger. The algorithm needs to search all the leaves of the tree, which increase exponentially to its depth. For Tic-Tac-Toe, where the depth of the tree is 9 (after the 9th move, the game ends), we can have at most 9! terminal states (at most because not all terminal nodes are at the last level of the tree; some are higher up because the game ended before the 9th move). This isn't so bad, but for more complex problems like chess, we have over $10^{40}$ terminal nodes. Unfortunately we have not found a way to cut the exponent away, but we nevertheless have found ways to alleviate the workload. Here we examine pruning the game tree, which means removing parts of it that we do not need to examine. The particular type of pruning is called alpha-beta, and the search in whole is called alpha-beta search. To showcase what parts of the tree we don't need to search, we will take a look at the example Fig52Game. In the example game, we need to find the best move for player MAX at state A, which is the maximum value of MIN's possible moves at successor states. MAX(A) = MAX( MIN(B), MIN(C), MIN(D) ) MIN(B) is the minimum of 3, 12, 8 which is 3. So the above formula becomes Step18: Implementation Like minimax, we again make use of functions max_value and min_value, but this time we utilise the a and b values, updating them and stopping the recursive call if we end up on nodes with values worse than a and b (for MAX and MIN). The algorithm finds the maximum value and returns the move that results in it. The implementation Step19: Example We will play the Fig52 Game with the alpha-beta search algorithm. It is the turn of MAX to play at state A. Step20: The optimal move for MAX is a1, for the reasons given above. MIN will pick move b1 for B resulting in a value of 3, updating the a value of MAX to 3. Then, when we find under C a node of value 2, we will stop searching under that sub-tree since it is less than a. From D we have a value of 2. So, the best move for MAX is the one resulting in a value of 3, which is a1. Below we see the best moves for MIN starting from B, C and D respectively. Note that the algorithm in these cases works the same way as minimax, since all the nodes below the aforementioned states are terminal. Step21: Visualization Below you will find the visualization of the alpha-beta algorithm for a simple game. Click on the cell after you run the command to move the game along. You can input your own values via a list of 27 integers. Step22: PLAYERS So, we have finished the implementation of the TicTacToe and Fig52Game classes. What these classes do is defining the rules of the games. 
We need more to create an AI that can actually play games. This is where random_player and alphabeta_player come in. query_player The query_player function allows you, a human opponent, to play the game. This function requires a display method to be implemented in your game class, so that successive game states can be displayed on the terminal, making it easier for you to visualize the game and play accordingly. random_player The random_player is a function that plays random moves in the game. That's it. There isn't much more to this guy. alphabeta_player The alphabeta_player, on the other hand, calls the alphabeta_search function, which returns the best move in the current game state. Thus, the alphabeta_player always plays the best move given a game state, assuming that the game tree is small enough to search entirely. play_game The play_game function will be the one that will actually be used to play the game. You pass as arguments to it an instance of the game you want to play and the players you want in this game. Use it to play AI vs AI, AI vs human, or even human vs human matches! LET'S PLAY SOME GAMES! Game52 Let's start by experimenting with the Fig52Game first. For that we'll create an instance of the subclass Fig52Game inherited from the class Game Step23: First we try out our random_player(game, state). Given a game state it will give us a random move every time Step24: The alphabeta_player(game, state) will always give us the best move possible, for the relevant player (MAX or MIN) Step25: What the alphabeta_player does is, it simply calls the method alphabeta_full_search. They both are essentially the same. In the module, both alphabeta_full_search and minimax_decision have been implemented. They both do the same job and return the same thing, which is, the best move in the current state. It's just that alphabeta_full_search is more efficient with regards to time because it prunes the search tree and hence, explores lesser number of states. Step26: Demonstrating the play_game function on the game52 Step27: Note that if you are the first player then alphabeta_player plays as MIN, and if you are the second player then alphabeta_player plays as MAX. This happens because that's the way the game is defined in the class Fig52Game. Having a look at the code of this class should make it clear. TicTacToe Now let's play TicTacToe. First we initialize the game by creating an instance of the subclass TicTacToe inherited from the class Game Step28: We can print a state using the display method Step29: Hmm, so that's the initial state of the game; no X's and no O's. Let us create a new game state by ourselves to experiment Step30: So, how does this game state look like? Step31: The random_player will behave how he is supposed to i.e. pseudo-randomly Step32: But the alphabeta_player will always give the best move, as expected Step33: Now let's make two players play against each other. We use the play_game function for this. The play_game function makes players play the match against each other and returns the utility for the first player, of the terminal state reached when the game ends. Hence, for our TicTacToe game, if we get the output +1, the first player wins, -1 if the second player wins, and 0 if the match ends in a draw. Step34: The output is (usually) -1, because random_player loses to alphabeta_player. Sometimes, however, random_player manages to draw with alphabeta_player. 
Since an alphabeta_player plays perfectly, a match between two alphabeta_players should always end in a draw. Let's see if this happens Step35: A random_player should never win against an alphabeta_player. Let's test that. Step36: Canvas_TicTacToe(Canvas) This subclass is used to play the TicTacToe game interactively in Jupyter notebooks. The TicTacToe class is called while initializing this subclass. Let's have a match between random_player and alphabeta_player. Click on the board to call players to make a move. Step37: Now, let's play a game ourselves against a random_player Step38: Yay! We (usually) win. But we cannot win against an alphabeta_player, however hard we try.
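Before moving on to the games.py implementation below, here is a tiny standalone sketch of the minimax walkthrough above, using the Figure 5.2 move and utility tables (an illustration only, not the module's minimax_decision):
moves = dict(A=dict(a1='B', a2='C', a3='D'),
             B=dict(b1='B1', b2='B2', b3='B3'),
             C=dict(c1='C1', c2='C2', c3='C3'),
             D=dict(d1='D1', d2='D2', d3='D3'))
utils = dict(B1=3, B2=12, B3=8, C1=2, C2=4, C3=6, D1=14, D2=5, D3=2)
def value(state, maximizing):
    if state in utils:                        # terminal node
        return utils[state]
    children = (value(s, not maximizing) for s in moves[state].values())
    return max(children) if maximizing else min(children)
best = max(moves['A'], key=lambda m: value(moves['A'][m], False))
print(best)   # a1, matching the hand calculation: MIN gives B=3, C=2, D=2 and MAX picks 3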
Python Code: from games import * from notebook import psource, pseudocode Explanation: GAMES OR ADVERSARIAL SEARCH This notebook serves as supporting material for topics covered in Chapter 5 - Adversarial Search in the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from games.py module. Let's import required classes, methods, global variables etc., from games module. CONTENTS Game Representation Game Examples Tic-Tac-Toe Figure 5.2 Game Min-Max Alpha-Beta Players Let's Play Some Games! End of explanation %psource Game Explanation: GAME REPRESENTATION To represent games we make use of the Game class, which we can subclass and override its functions to represent our own games. A helper tool is the namedtuple GameState, which in some cases can come in handy, especially when our game needs us to remember a board (like chess). GameState namedtuple GameState is a namedtuple which represents the current state of a game. It is used to help represent games whose states can't be easily represented normally, or for games that require memory of a board, like Tic-Tac-Toe. Gamestate is defined as follows: GameState = namedtuple('GameState', 'to_move, utility, board, moves') to_move: It represents whose turn it is to move next. utility: It stores the utility of the game state. Storing this utility is a good idea, because, when you do a Minimax Search or an Alphabeta Search, you generate many recursive calls, which travel all the way down to the terminal states. When these recursive calls go back up to the original callee, we have calculated utilities for many game states. We store these utilities in their respective GameStates to avoid calculating them all over again. board: A dict that stores the board of the game. moves: It stores the list of legal moves possible from the current position. Game class Let's have a look at the class Game in our module. We see that it has functions, namely actions, result, utility, terminal_test, to_move and display. We see that these functions have not actually been implemented. This class is just a template class; we are supposed to create the class for our game, by inheriting this Game class and implementing all the methods mentioned in Game. End of explanation %psource TicTacToe Explanation: Now let's get into details of all the methods in our Game class. You have to implement these methods when you create new classes that would represent your game. actions(self, state): Given a game state, this method generates all the legal actions possible from this state, as a list or a generator. Returning a generator rather than a list has the advantage that it saves space and you can still operate on it as a list. result(self, state, move): Given a game state and a move, this method returns the game state that you get by making that move on this game state. utility(self, state, player): Given a terminal game state and a player, this method returns the utility for that player in the given terminal game state. While implementing this method assume that the game state is a terminal game state. The logic in this module is such that this method will be called only on terminal game states. terminal_test(self, state): Given a game state, this method should return True if this game state is a terminal state, and False otherwise. to_move(self, state): Given a game state, this method returns the player who is to play next. This information is typically stored in the game state, so all this method does is extract this information and return it. 
display(self, state): This method prints/displays the current state of the game. GAME EXAMPLES Below we give some examples for games you can create and experiment on. Tic-Tac-Toe Take a look at the class TicTacToe. All the methods mentioned in the class Game have been implemented here. End of explanation moves = dict(A=dict(a1='B', a2='C', a3='D'), B=dict(b1='B1', b2='B2', b3='B3'), C=dict(c1='C1', c2='C2', c3='C3'), D=dict(d1='D1', d2='D2', d3='D3')) utils = dict(B1=3, B2=12, B3=8, C1=2, C2=4, C3=6, D1=14, D2=5, D3=2) initial = 'A' Explanation: The class TicTacToe has been inherited from the class Game. As mentioned earlier, you really want to do this. Catching bugs and errors becomes a whole lot easier. Additional methods in TicTacToe: __init__(self, h=3, v=3, k=3) : When you create a class inherited from the Game class (class TicTacToe in our case), you'll have to create an object of this inherited class to initialize the game. This initialization might require some additional information which would be passed to __init__ as variables. For the case of our TicTacToe game, this additional information would be the number of rows h, number of columns v and how many consecutive X's or O's are needed in a row, column or diagonal for a win k. Also, the initial game state has to be defined here in __init__. compute_utility(self, board, move, player) : A method to calculate the utility of TicTacToe game. If 'X' wins with this move, this method returns 1; if 'O' wins return -1; else return 0. k_in_row(self, board, move, player, delta_x_y) : This method returns True if there is a line formed on TicTacToe board with the latest move else False. TicTacToe GameState Now, before we start implementing our TicTacToe game, we need to decide how we will be representing our game state. Typically, a game state will give you all the current information about the game at any point in time. When you are given a game state, you should be able to tell whose turn it is next, how the game will look like on a real-life board (if it has one) etc. A game state need not include the history of the game. If you can play the game further given a game state, you game state representation is acceptable. While we might like to include all kinds of information in our game state, we wouldn't want to put too much information into it. Modifying this game state to generate a new one would be a real pain then. Now, as for our TicTacToe game state, would storing only the positions of all the X's and O's be sufficient to represent all the game information at that point in time? Well, does it tell us whose turn it is next? Looking at the 'X's and O's on the board and counting them should tell us that. But that would mean extra computing. To avoid this, we will also store whose move it is next in the game state. Think about what we've done here. We have reduced extra computation by storing additional information in a game state. Now, this information might not be absolutely essential to tell us about the state of the game, but it does save us additional computation time. We'll do more of this later on. To store game states will will use the GameState namedtuple. to_move: A string of a single character, either 'X' or 'O'. utility: 1 for win, -1 for loss, 0 otherwise. board: All the positions of X's and O's on the board. moves: All the possible moves from the current state. Note here, that storing the moves as a list, as it is done here, increases the space complexity of Minimax Search from O(m) to O(bm). Refer to Sec. 
5.2.1 of the book. Representing a move in TicTacToe game Now that we have decided how our game state will be represented, it's time to decide how our move will be represented. Becomes easy to use this move to modify a current game state to generate a new one. For our TicTacToe game, we'll just represent a move by a tuple, where the first and the second elements of the tuple will represent the row and column, respectively, where the next move is to be made. Whether to make an 'X' or an 'O' will be decided by the to_move in the GameState namedtuple. Fig52 Game For a more trivial example we will represent the game in Figure 5.2 of the book. <img src="images/fig_5_2.png" width="75%"> The states are represented wih capital letters inside the triangles (eg. "A") while moves are the labels on the edges between states (eg. "a1"). Terminal nodes carry utility values. Note that the terminal nodes are named in this example 'B1', 'B2' and 'B2' for the nodes below 'B', and so forth. We will model the moves, utilities and initial state like this: End of explanation print(moves['A']['a1']) Explanation: In moves, we have a nested dictionary system. The outer's dictionary has keys as the states and values the possible moves from that state (as a dictionary). The inner dictionary of moves has keys the move names and values the next state after the move is complete. Below is an example that showcases moves. We want the next state after move 'a1' from 'A', which is 'B'. A quick glance at the above image confirms that this is indeed the case. End of explanation fig52 = Fig52Game() Explanation: We will now take a look at the functions we need to implement. First we need to create an object of the Fig52Game class. End of explanation psource(Fig52Game.actions) print(fig52.actions('B')) Explanation: actions: Returns the list of moves one can make from a given state. End of explanation psource(Fig52Game.result) print(fig52.result('A', 'a1')) Explanation: result: Returns the next state after we make a specific move. End of explanation psource(Fig52Game.utility) print(fig52.utility('B1', 'MAX')) print(fig52.utility('B1', 'MIN')) Explanation: utility: Returns the value of the terminal state for a player ('MAX' and 'MIN'). Note that for 'MIN' the value returned is the negative of the utility. End of explanation psource(Fig52Game.terminal_test) print(fig52.terminal_test('C3')) Explanation: terminal_test: Returns True if the given state is a terminal state, False otherwise. End of explanation psource(Fig52Game.to_move) print(fig52.to_move('A')) Explanation: to_move: Return the player who will move in this state. End of explanation psource(Fig52Game) Explanation: As a whole the class Fig52 that inherits from the class Game and overrides its functions: End of explanation pseudocode("Minimax-Decision") Explanation: MIN-MAX Overview This algorithm (often called Minimax) computes the next move for a player (MIN or MAX) at their current state. It recursively computes the minimax value of successor states, until it reaches terminals (the leaves of the tree). Using the utility value of the terminal states, it computes the values of parent states until it reaches the initial node (the root of the tree). It is worth noting that the algorithm works in a depth-first manner. The pseudocode can be found below: End of explanation psource(minimax_decision) Explanation: Implementation In the implementation we are using two functions, max_value and min_value to calculate the best move for MAX and MIN respectively. 
These functions interact in an alternating recursion; one calls the other until a terminal state is reached. When the recursion halts, we are left with scores for each move. We return the max. Despite returning the max, it will work for MIN too since for MIN the values are their negative (hence the order of values is reversed, so the higher the better for MIN too). End of explanation print(minimax_decision('B', fig52)) print(minimax_decision('C', fig52)) print(minimax_decision('D', fig52)) Explanation: Example We will now play the Fig52 game using this algorithm. Take a look at the Fig52Game from above to follow along. It is the turn of MAX to move, and he is at state A. He can move to B, C or D, using moves a1, a2 and a3 respectively. MAX's goal is to maximize the end value. So, to make a decision, MAX needs to know the values at the aforementioned nodes and pick the greatest one. After MAX, it is MIN's turn to play. So MAX wants to know what will the values of B, C and D be after MIN plays. The problem then becomes what move will MIN make at B, C and D. The successor states of all these nodes are terminal states, so MIN will pick the smallest value for each node. So, for B he will pick 3 (from move b1), for C he will pick 2 (from move c1) and for D he will again pick 2 (from move d3). Let's see this in code: End of explanation print(minimax_decision('A', fig52)) Explanation: Now MAX knows that the values for B, C and D are 3, 2 and 2 (produced by the above moves of MIN). The greatest is 3, which he will get with move a1. This is then the move MAX will make. Let's see the algorithm in full action: End of explanation from notebook import Canvas_minimax from random import randint minimax_viz = Canvas_minimax('minimax_viz', [randint(1, 50) for i in range(27)]) Explanation: Visualization Below we have a simple game visualization using the algorithm. After you run the command, click on the cell to move the game along. You can input your own values via a list of 27 integers. End of explanation pseudocode("Alpha-Beta-Search") Explanation: ALPHA-BETA Overview While Minimax is great for computing a move, it can get tricky when the number of game states gets bigger. The algorithm needs to search all the leaves of the tree, which increase exponentially to its depth. For Tic-Tac-Toe, where the depth of the tree is 9 (after the 9th move, the game ends), we can have at most 9! terminal states (at most because not all terminal nodes are at the last level of the tree; some are higher up because the game ended before the 9th move). This isn't so bad, but for more complex problems like chess, we have over $10^{40}$ terminal nodes. Unfortunately we have not found a way to cut the exponent away, but we nevertheless have found ways to alleviate the workload. Here we examine pruning the game tree, which means removing parts of it that we do not need to examine. The particular type of pruning is called alpha-beta, and the search in whole is called alpha-beta search. To showcase what parts of the tree we don't need to search, we will take a look at the example Fig52Game. In the example game, we need to find the best move for player MAX at state A, which is the maximum value of MIN's possible moves at successor states. MAX(A) = MAX( MIN(B), MIN(C), MIN(D) ) MIN(B) is the minimum of 3, 12, 8 which is 3. So the above formula becomes: MAX(A) = MAX( 3, MIN(C), MIN(D) ) Next move we will check is c1, which leads to a terminal state with utility of 2. 
Before we continue searching under state C, let's pop back into our formula with the new value: MAX(A) = MAX( 3, MIN(2, c2, .... cN), MIN(D) ) We do not know how many moves state C allows, but we know that the first one results in a value of 2. Do we need to keep searching under C? The answer is no. The value MIN will pick on C will at most be 2. Since MAX already has the option to pick something greater than that, 3 from B, he does not need to keep searching under C. In alpha-beta we make use of two additional parameters for each state/node, a and b, that describe bounds on the possible moves. The parameter a denotes the best choice (highest value) for MAX along that path, while b denotes the best choice (lowest value) for MIN. As we go along we update a and b and prune a node branch when the value of the node is worse than the value of a and b for MAX and MIN respectively. In the above example, after the search under state B, MAX had an a value of 3. So, when searching node C we found a value less than that, 2, we stopped searching under C. You can read the pseudocode below: End of explanation %psource alphabeta_search Explanation: Implementation Like minimax, we again make use of functions max_value and min_value, but this time we utilise the a and b values, updating them and stopping the recursive call if we end up on nodes with values worse than a and b (for MAX and MIN). The algorithm finds the maximum value and returns the move that results in it. The implementation: End of explanation print(alphabeta_search('A', fig52)) Explanation: Example We will play the Fig52 Game with the alpha-beta search algorithm. It is the turn of MAX to play at state A. End of explanation print(alphabeta_search('B', fig52)) print(alphabeta_search('C', fig52)) print(alphabeta_search('D', fig52)) Explanation: The optimal move for MAX is a1, for the reasons given above. MIN will pick move b1 for B resulting in a value of 3, updating the a value of MAX to 3. Then, when we find under C a node of value 2, we will stop searching under that sub-tree since it is less than a. From D we have a value of 2. So, the best move for MAX is the one resulting in a value of 3, which is a1. Below we see the best moves for MIN starting from B, C and D respectively. Note that the algorithm in these cases works the same way as minimax, since all the nodes below the aforementioned states are terminal. End of explanation from notebook import Canvas_alphabeta from random import randint alphabeta_viz = Canvas_alphabeta('alphabeta_viz', [randint(1, 50) for i in range(27)]) Explanation: Visualization Below you will find the visualization of the alpha-beta algorithm for a simple game. Click on the cell after you run the command to move the game along. You can input your own values via a list of 27 integers. End of explanation game52 = Fig52Game() Explanation: PLAYERS So, we have finished the implementation of the TicTacToe and Fig52Game classes. What these classes do is defining the rules of the games. We need more to create an AI that can actually play games. This is where random_player and alphabeta_player come in. query_player The query_player function allows you, a human opponent, to play the game. This function requires a display method to be implemented in your game class, so that successive game states can be displayed on the terminal, making it easier for you to visualize the game and play accordingly. random_player The random_player is a function that plays random moves in the game. That's it. 
There isn't much more to this guy. alphabeta_player The alphabeta_player, on the other hand, calls the alphabeta_search function, which returns the best move in the current game state. Thus, the alphabeta_player always plays the best move given a game state, assuming that the game tree is small enough to search entirely. play_game The play_game function will be the one that will actually be used to play the game. You pass as arguments to it an instance of the game you want to play and the players you want in this game. Use it to play AI vs AI, AI vs human, or even human vs human matches! LET'S PLAY SOME GAMES! Game52 Let's start by experimenting with the Fig52Game first. For that we'll create an instance of the subclass Fig52Game inherited from the class Game: End of explanation print(random_player(game52, 'A')) print(random_player(game52, 'A')) Explanation: First we try out our random_player(game, state). Given a game state it will give us a random move every time: End of explanation print( alphabeta_player(game52, 'A') ) print( alphabeta_player(game52, 'B') ) print( alphabeta_player(game52, 'C') ) Explanation: The alphabeta_player(game, state) will always give us the best move possible, for the relevant player (MAX or MIN): End of explanation minimax_decision('A', game52) alphabeta_search('A', game52) Explanation: What the alphabeta_player does is, it simply calls the method alphabeta_full_search. They both are essentially the same. In the module, both alphabeta_full_search and minimax_decision have been implemented. They both do the same job and return the same thing, which is, the best move in the current state. It's just that alphabeta_full_search is more efficient with regards to time because it prunes the search tree and hence, explores lesser number of states. End of explanation game52.play_game(alphabeta_player, alphabeta_player) game52.play_game(alphabeta_player, random_player) game52.play_game(query_player, alphabeta_player) game52.play_game(alphabeta_player, query_player) Explanation: Demonstrating the play_game function on the game52: End of explanation ttt = TicTacToe() Explanation: Note that if you are the first player then alphabeta_player plays as MIN, and if you are the second player then alphabeta_player plays as MAX. This happens because that's the way the game is defined in the class Fig52Game. Having a look at the code of this class should make it clear. TicTacToe Now let's play TicTacToe. First we initialize the game by creating an instance of the subclass TicTacToe inherited from the class Game: End of explanation ttt.display(ttt.initial) Explanation: We can print a state using the display method: End of explanation my_state = GameState( to_move = 'X', utility = '0', board = {(1,1): 'X', (1,2): 'O', (1,3): 'X', (2,1): 'O', (2,3): 'O', (3,1): 'X', }, moves = [(2,2), (3,2), (3,3)] ) Explanation: Hmm, so that's the initial state of the game; no X's and no O's. Let us create a new game state by ourselves to experiment: End of explanation ttt.display(my_state) Explanation: So, how does this game state look like? End of explanation random_player(ttt, my_state) random_player(ttt, my_state) Explanation: The random_player will behave how he is supposed to i.e. pseudo-randomly: End of explanation alphabeta_player(ttt, my_state) Explanation: But the alphabeta_player will always give the best move, as expected: End of explanation ttt.play_game(random_player, alphabeta_player) Explanation: Now let's make two players play against each other. 
We use the play_game function for this. The play_game function makes players play the match against each other and returns the utility for the first player, of the terminal state reached when the game ends. Hence, for our TicTacToe game, if we get the output +1, the first player wins, -1 if the second player wins, and 0 if the match ends in a draw. End of explanation for _ in range(10): print(ttt.play_game(alphabeta_player, alphabeta_player)) Explanation: The output is (usually) -1, because random_player loses to alphabeta_player. Sometimes, however, random_player manages to draw with alphabeta_player. Since an alphabeta_player plays perfectly, a match between two alphabeta_players should always end in a draw. Let's see if this happens: End of explanation for _ in range(10): print(ttt.play_game(random_player, alphabeta_player)) Explanation: A random_player should never win against an alphabeta_player. Let's test that. End of explanation from notebook import Canvas_TicTacToe bot_play = Canvas_TicTacToe('bot_play', 'random', 'alphabeta') Explanation: Canvas_TicTacToe(Canvas) This subclass is used to play TicTacToe game interactively in Jupyter notebooks. TicTacToe class is called while initializing this subclass. Let's have a match between random_player and alphabeta_player. Click on the board to call players to make a move. End of explanation rand_play = Canvas_TicTacToe('rand_play', 'human', 'random') Explanation: Now, let's play a game ourselves against a random_player: End of explanation ab_play = Canvas_TicTacToe('ab_play', 'human', 'alphabeta') Explanation: Yay! We (usually) win. But we cannot win against an alphabeta_player, however hard we try. End of explanation
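To see the pruning from the walkthrough above in isolation, here is a matching alpha-beta sketch over the same moves and utils tables defined earlier in this notebook (again an illustration, not the module's alphabeta_search; it assumes Python 3.7+ dict ordering so B is explored before C and D):
visited = []
def ab_value(state, maximizing, alpha, beta):
    if state in utils:
        visited.append(state)
        return utils[state]
    best = float('-inf') if maximizing else float('inf')
    for child in moves[state].values():
        v = ab_value(child, not maximizing, alpha, beta)
        if maximizing:
            best, alpha = max(best, v), max(alpha, v)
        else:
            best, beta = min(best, v), min(beta, v)
        if alpha >= beta:                     # remaining siblings cannot change the result
            break
    return best
print(ab_value('A', True, float('-inf'), float('inf')))  # 3
print(visited)  # C2 and C3 never appear: the subtree under C is pruned once C1 = 2 is no better than alpha = 3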
2,464
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: NCAR Source ID: SANDBOX-1 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:22 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
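The cells above repeat the same two-call pattern for every property. A minimal sketch of how that pattern could be driven from a dictionary, assuming only the `NotebookOutput` API already used in this notebook (`DOC.set_id()` followed by `DOC.set_value()`); the property ids below are taken from the cells above, while the values are placeholder strings rather than real model documentation.

```python
# DOC is the NotebookOutput instance created at the top of this notebook.
answers = {
    "cmip6.toplevel.key_properties.model_overview": "Placeholder model overview.",
    "cmip6.toplevel.key_properties.model_name": "SANDBOX-1",
    "cmip6.toplevel.key_properties.flux_correction.details": "Placeholder: no flux corrections applied.",
}

for property_id, value in answers.items():
    DOC.set_id(property_id)
    DOC.set_value(value)
```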
2,465
Given the following text description, write Python code to implement the functionality described below step by step Description: Inference in Discrete Bayesian Network In this notebook, we show a simple example for doing Exact inference in Bayesian Networks using pgmpy. We will be using the Asia network (http Step1: If you would like to create a model from scratch, please refer to the Creating Bayesian Networks notebook Step2: Step 3 Step3: Step 5
Python Code: # Fetch the asia model from the bnlearn repository from pgmpy.utils import get_example_model asia_model = get_example_model("asia") print("Nodes: ", asia_model.nodes()) print("Edges: ", asia_model.edges()) asia_model.get_cpds() Explanation: Inference in Discrete Bayesian Network In this notebook, we show a simple example for doing Exact inference in Bayesian Networks using pgmpy. We will be using the Asia network (http://www.bnlearn.com/bnrepository/#asia) for this example. Step 1: Define the model. End of explanation # Initializing the VariableElimination class from pgmpy.inference import VariableElimination asia_infer = VariableElimination(asia_model) Explanation: If you would like to create a model from scratch, please refer to the Creating Bayesian Networks notebook: https://github.com/pgmpy/pgmpy/blob/dev/examples/Creating%20a%20Bayesian%20Network.ipynb Step 2: Initialize the inference class Currently, pgmpy supports two algorithms for inference: 1. Variable Elimination and 2. Belief Propagation. Both of these are exact inference algorithms. The following example uses VariableElimination but BeliefPropagation has an identical API, so all the methods shown below would also work for BeliefPropagation. End of explanation # Computing the probability of bronc given smoke=no. q = asia_infer.query(variables=["bronc"], evidence={"smoke": "no"}) print(q) # Computing the joint probability of bronc and asia given smoke=yes q = asia_infer.query(variables=["bronc", "asia"], evidence={"smoke": "yes"}) print(q) # Computing the probabilities (not joint) of bronc and asia given smoke=no q = asia_infer.query(variables=["bronc", "asia"], evidence={"smoke": "no"}, joint=False) for factor in q.values(): print(factor) # Computing the MAP of bronc given smoke=no. q = asia_infer.map_query(variables=["bronc"], evidence={"smoke": "no"}) print(q) # Computing the MAP of bronc and asia given smoke=yes q = asia_infer.map_query(variables=["bronc", "asia"], evidence={"smoke": "yes"}) print(q) Explanation: Step 3: Doing Inference using hard evidence End of explanation # TabularCPD is needed to build the virtual evidence below from pgmpy.factors.discrete import TabularCPD lung_virt_evidence = TabularCPD(variable="lung", variable_card=2, values=[[0.4], [0.6]]) # Query with hard evidence smoke = no and virtual evidence lung = [0.4, 0.6] q = asia_infer.query( variables=["bronc"], evidence={"smoke": "no"}, virtual_evidence=[lung_virt_evidence] ) print(q) # Query with hard evidence smoke = no and virtual evidences lung = [0.4, 0.6] and bronc = [0.3, 0.7] lung_virt_evidence = TabularCPD(variable="lung", variable_card=2, values=[[0.4], [0.7]]) print(asia_model.get_cpds("lung")) Explanation: Step 5: Inference using virtual evidence End of explanation
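Since the notebook states that BeliefPropagation exposes the same API as VariableElimination, here is a minimal sketch of the second exact inference algorithm applied to the same query (assuming `asia_model` from the cells above is available):

```python
from pgmpy.inference import BeliefPropagation

asia_bp = BeliefPropagation(asia_model)

# Same posterior as the VariableElimination query above, P(bronc | smoke=no)
q_bp = asia_bp.query(variables=["bronc"], evidence={"smoke": "no"})
print(q_bp)

# MAP query works the same way
print(asia_bp.map_query(variables=["bronc"], evidence={"smoke": "no"}))
```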
2,466
Given the following text description, write Python code to implement the functionality described below step by step Description: Time-energy fit 3ML allows the possibility to model a time-varying source by explicitly fitting the time-dependent part of the model. Let's see this with an example. First we import what we need Step1: Generating the datasets Then we generate a simulated dataset for a source with a cutoff powerlaw spectrum with a constant photon index and cutoff but with a normalization that changes with time following a powerlaw Step2: These are the times at which the simulated spectra have been observed Step3: This describes the time-varying normalization. If everything works as it should, we should recover from the fit a normalization of 0.23 and a index of -1.2 for the time law. Step4: Now that we have a simple function to create the datasets, let's build them. Step5: Setup the model Now set up the fit and fit it. First we need to tell 3ML that we are going to fit using an independent variable (time in this case). We init it to 1.0 and set the unit to seconds. Step6: Then we load the data that we have generated, tagging them with their time of observation. Step7: Generate the datalist as usual Step8: Now let's generate the spectral model, in this case a point source with a cutoff powerlaw spectrum. Step9: Now we need to tell 3ML that we are going to use the time coordinate to specify a time dependence for some of the parameters of the model. Step10: Now let's specify the time-dependence (a powerlaw) for the normalization of the powerlaw spectrum. Step11: Link the normalization of the cutoff powerlaw spectrum with time through the time law we have just generated. Step12: Performing the fit
Python Code: from threeML import * import matplotlib.pyplot as plt from jupyterthemes import jtplot %matplotlib inline jtplot.style(context="talk", fscale=1, ticks=True, grid=False) plt.style.use("mike") Explanation: Time-energy fit 3ML allows the possibility to model a time-varying source by explicitly fitting the time-dependent part of the model. Let's see this with an example. First we import what we need: End of explanation def generate_one(K, ax): # Let's generate some data with y = Powerlaw(x) gen_function = Cutoff_powerlaw() gen_function.K = K # Generate a dataset using the power law, and a # constant 30% error x = np.logspace(0, 2, 50) xyl_generator = XYLike.from_function( "sim_data", function=gen_function, x=x, yerr=0.3 * gen_function(x) ) y = xyl_generator.y y_err = xyl_generator.yerr ax.loglog(x, gen_function(x)) return x, y, y_err Explanation: Generating the datasets Then we generate a simulated dataset for a source with a cutoff powerlaw spectrum with a constant photon index and cutoff but with a normalization that changes with time following a powerlaw: End of explanation time_tags = np.array([1.0, 2.0, 5.0, 10.0]) Explanation: These are the times at which the simulated spectra have been observed End of explanation normalizations = 0.23 * time_tags ** (-3.5) Explanation: This describes the time-varying normalization. If everything works as it should, we should recover from the fit a normalization of 0.23 and a index of -1.2 for the time law. End of explanation fig, ax = plt.subplots() datasets = [generate_one(k, ax) for k in normalizations] ax.set_xlabel("Energy") ax.set_ylabel("Flux") Explanation: Now that we have a simple function to create the datasets, let's build them. End of explanation time = IndependentVariable("time", 1.0, u.s) Explanation: Setup the model Now set up the fit and fit it. First we need to tell 3ML that we are going to fit using an independent variable (time in this case). We init it to 1.0 and set the unit to seconds. End of explanation plugins = [] for i, dataset in enumerate(datasets): x, y, y_err = dataset xyl = XYLike("data%i" % i, x, y, y_err) # This is the important part: we need to tag the instance of the # plugin so that 3ML will know that this instance corresponds to the # given tag (a time coordinate in this case). If instead of giving # one time coordinate we give two time coordinates, then 3ML will # take the average of the model between the two time coordinates # (computed as the integral of the model between t1 and t2 divided # by t2-t1) xyl.tag = (time, time_tags[i]) # To access the tag we have just set we can use: independent_variable, start, end = xyl.tag # NOTE: xyl.tag will return 3 things: the independent variable, the start and the # end. If like in this case you do not specify an end when assigning the tag, end # will be None plugins.append(xyl) Explanation: Then we load the data that we have generated, tagging them with their time of observation. End of explanation data = DataList(*plugins) Explanation: Generate the datalist as usual End of explanation spectrum = Cutoff_powerlaw() src = PointSource("test", ra=0.0, dec=0.0, spectral_shape=spectrum) model = Model(src) Explanation: Now let's generate the spectral model, in this case a point source with a cutoff powerlaw spectrum. End of explanation model.add_independent_variable(time) Explanation: Now we need to tell 3ML that we are going to use the time coordinate to specify a time dependence for some of the parameters of the model. 
End of explanation time_po = Powerlaw() time_po.K.bounds = (0.01, 1000) Explanation: Now let's specify the time-dependence (a powerlaw) for the normalization of the powerlaw spectrum. End of explanation model.link(spectrum.K, time, time_po) model Explanation: Link the normalization of the cutoff powerlaw spectrum with time through the time law we have just generated. End of explanation jl = JointLikelihood(model, data) best_fit_parameters, likelihood_values = jl.fit() for p in plugins: p.plot(x_scale='log', y_scale='log'); Explanation: Performing the fit End of explanation
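A quick standalone sanity check of the time law used above can be done without 3ML at all: refit the simulated normalizations with scipy.optimize.curve_fit and confirm that the assumed form K(t) = K0 * t**index reproduces the values used in the data-generation cell (0.23 and the exponent -3.5). This is only an illustrative sketch, independent of the joint fit; the variable names simply mirror the ones defined earlier.

import numpy as np
from scipy.optimize import curve_fit

# Values used earlier to simulate the per-epoch normalizations
time_tags = np.array([1.0, 2.0, 5.0, 10.0])
true_K0, true_index = 0.23, -3.5
normalizations = true_K0 * time_tags ** true_index

def time_law(t, K0, index):
    # same functional form as the Powerlaw linked to spectrum.K above
    return K0 * t ** index

popt, pcov = curve_fit(time_law, time_tags, normalizations, p0=[0.1, -1.0])
print(popt)  # expected to land near [0.23, -3.5], since these points are noise-free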
2,467
Given the following text description, write Python code to implement the functionality described below step by step Description: Create evoked objects in delayed SSP mode This script shows how to apply SSP projectors delayed, that is, at the evoked stage. This is particularly useful to support decisions related to the trade-off between denoising and preserving signal. We first will extract Epochs and create evoked objects with the required settings for delayed SSP application. Then we will explore the impact of the particular SSP projectors on the evoked data. Step1: Set parameters Step2: Interactively select / deselect the SSP projection vectors
Python Code: # Authors: Alexandre Gramfort <[email protected]> # Denis Engemann <[email protected]> # # License: BSD (3-clause) import matplotlib.pyplot as plt import mne from mne import io from mne.datasets import sample print(__doc__) data_path = sample.data_path() Explanation: Create evoked objects in delayed SSP mode This script shows how to apply SSP projectors delayed, that is, at the evoked stage. This is particularly useful to support decisions related to the trade-off between denoising and preserving signal. We first will extract Epochs and create evoked objects with the required settings for delayed SSP application. Then we will explore the impact of the particular SSP projectors on the evoked data. End of explanation raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_id, tmin, tmax = 1, -0.2, 0.5 # Setup for reading the raw data raw = io.Raw(raw_fname, preload=True) raw.filter(1, 40, method='iir') events = mne.read_events(event_fname) # pick magnetometer channels picks = mne.pick_types(raw.info, meg='mag', stim=False, eog=True, include=[], exclude='bads') # If we suspend SSP projection at the epochs stage we might reject # more epochs than necessary. To deal with this we set proj to `delayed` # while passing reject parameters. Each epoch will then be projected before # performing peak-to-peak amplitude rejection. If it survives the rejection # procedure the unprojected raw epoch will be employed instead. # As a consequence, the point in time at which the projection is applied will # not have impact on the final results. # We will make use of this function to prepare for interactively selecting # projections at the evoked stage. epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=None, reject=dict(mag=4e-12), proj='delayed') evoked = epochs.average() # average epochs and get an Evoked dataset. Explanation: Set parameters End of explanation # Here we expose the details of how to apply SSPs reversibly title = 'Incremental SSP application' # let's first move the proj list to another location projs, evoked.info['projs'] = evoked.info['projs'], [] fig, axes = plt.subplots(2, 2) # create 4 subplots for our four vectors # As the bulk of projectors was extracted from the same source, we can simply # iterate over our collection of projs and add them step by step to see how # the signals change as a function of the SSPs applied. As this operation # can't be undone we will operate on copies of the original evoked object to # keep things reversible. for proj, ax in zip(projs, axes.flatten()): evoked.add_proj(proj) # add projection vectors loop by loop. evoked.copy().apply_proj().plot(axes=ax) # apply on a copy of evoked ax.set_title('+ %s' % proj['desc']) # extract description. plt.suptitle(title) mne.viz.tight_layout() # We also could have easily visualized the impact of single projection vectors # by deleting the vector directly after visualizing the changes. # E.g. had we appended the following line to our loop: # `evoked.del_proj(-1)` # Often, it is desirable to interactively explore data. To make this more # convenient we can make use of the 'interactive' option. This will open a # check box that allows us to reversibly select projection vectors. Any # modification of the selection will immediately cause the figure to update. 
evoked.plot(proj='interactive') # Hint: the same works with evoked.plot_topomap Explanation: Interactively select / deselect the SSP projection vectors End of explanation
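A non-interactive companion to the check-box figure above (a sketch, not part of the original script): since evoked still carries the projection vectors at the end of the loop, the two end points of the comparison can be drawn directly by plotting once with the SSP vectors left unapplied and once with them applied for display.

fig_unapplied = evoked.plot(proj=False)  # projectors present in evoked.info, not applied
fig_applied = evoked.plot(proj=True)     # apply all projectors for display only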
2,468
Given the following text description, write Python code to implement the functionality described below step by step Description: Step5: Script for generating exam papers This script generates a $\LaTeX$ document with randomly generated exam papers (kolokviji). The students are loaded from a file. First we define the strings that contain the header and the end of the document Step6: Loading the required packages & data Step7: Creating the file
Python Code: header1 = r\documentclass[a4paper,11pt]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[croatian]{babel} \usepackage{minted} \usepackage{amsmath,amsfonts} \usepackage{graphicx} \usepackage{booktabs} \usepackage[hmargin=1.5cm,vmargin=1cm]{geometry} \pagestyle{empty} \begin{document} header2 = r\begin{center} {\LARGE \textbf{1.\ kolokvij iz Matematičkog sofvera}}\\ {\Large\textbf{12.\ svibnja 2017.}}\\ \end{center} header3=r\begin{enumerate} footer1 = r\end{enumerate} \vspace{5mm} \textbf{Uputa}: Kolokvij se piše u Jupyter bilježnici (unutar direktorija \textit{1.\ kolokvij}) koju sam kreirao u tu svrhu. Drugi zadatak se rješava korištenjem biblioteke \texttt{Numpy}, treći korištenjem biblioteke \texttt{Scipy}, četvrti korištenjem biblioteke \texttt{Matplotlib} a peti korištenjem biblioteke \texttt{Sympy}. \vspace{5mm} \begin{flushright} Potpis studenta: \end{flushright} \newpage footer2=r \end{document} Explanation: Skripta za generiranje kolokvija Skripta generira $\LaTeX$ dokument s slučajno generiranim kolokvijima. Studenti se učitavaju iz datoteke. Najprije definiramo stringove koji sadrže zaglavlje i kraj dokumenta End of explanation from numpy import random with open('studenti.txt','r') as f: studenti = list(f) broj_studenata = len(studenti) broj_zadataka = 30 Explanation: Učitavanje potrebnih paketa & podataka End of explanation datoteka = "ms_kol1.tex" with open(datoteka,'w') as f: f.write(header1+'\n') for i in range(broj_studenata): random.seed() r=random.randint(1,broj_zadataka,5) f.write(header2) f.write("\\begin{center}{\large \\textbf{Student: "+studenti[i][:-1]+"}}\end{center}\n\n") f.write(header3) for j in range(5): z = str(j+1)+str(r[j]).zfill(2) f.write('\\input zadaci-1/z'+z+'\n') f.write(footer1) f.write(footer2) Explanation: Kreiranje datoteke End of explanation
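The script above (its inline notes are in Croatian) reads the student list from studenti.txt, draws five problem numbers per student with numpy's random.randint, and writes one exam page per student into ms_kol1.tex. Two details are worth noting, shown here as a hedged variation rather than a change to the script: numpy's randint excludes its upper bound, so problem number 30 is never drawn, and it samples with replacement, so a single exam can repeat a problem. Drawing with numpy.random.choice over 1..broj_zadataka with replace=False avoids both.

import numpy as np

broj_zadataka = 30  # number of available problems, as in the script above
r = np.random.choice(np.arange(1, broj_zadataka + 1), size=5, replace=False)
print(r)            # five distinct problem numbers, 1 through 30 inclusive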
2,469
Given the following text description, write Python code to implement the functionality described below step by step Description: 나이브 베이즈 분류 모형 나이브 베이즈 분류 모형(Naive Bayes classification model)은 대표적인 확률적 생성 모형이다. 타겟 변수 $y$의 각 클래스 ${C_1,\cdots,C_K}$ 에 대한 독립 변수 $x$의 조건부 확률 분포 정보 $p(x \mid y = C_k)$ 를 사용하여 주어진 새로운 독립 변수 값 $x_{\text{new}}$에 대한 타켓 변수의 각 클래스의 조건부 확률 $p(y = C_k \mid x_{\text{new}})$ 를 추정한 후 가장 조건부 확률이 큰 클래스 $k$를 선택하는 방법이다. 조건부 확률의 계산 다음과 같이 베이즈 규칙을 사용하여 조건부 확률 $p(y = C_k \mid x_{\text{new}})$ 을 계산한다. $$ P(y = C_k \mid x_{\text{new}}) = \dfrac{P(x_{\text{new}} \mid y = C_k)\; P(y = C_k)}{P(x_{\text{new}})} $$ 최종적으로는 각 클래스 $k$에 대한 확률을 비교하여 최고값을 계산하기만 하면 되므로 분모에 있는 주변 확률(marginal probability) ${P(x_{\text{new}})}$은 계산하지 않는다. $$ P(y = C_k \mid x_{\text{new}}) \;\; \propto \;\; P(x_{\text{new}} \mid y = C_k) \; P(y = C_k) $$ 여기에서 사전 확률(prior) $P(y = C_k)$는 다음과 같이 쉽게 구할 수 있다. $$ P(y = C_k) \approx \frac{\text{number of samples with }y = C_k}{\text{number of all samples}} $$ $y$에 대한 $x$의 조건부 확률인 우도(likelihood)의 경우에는 일반적으로 정규 분포나 베르누이 분포와 같은 특정한 모형을 가정하여 다음과 같이 계산한다. $P(x \mid y = C_k)$ 가 특정한 확률 분포 모형을 따른다고 가정한다. 트레이닝 데이터 ${x_1, \cdots, x_N}$을 사용하여 이 모형의 모수(parameter)를 구한다. 모수를 알고 있으므로 새로운 독립 변수 값 $x_{\text{new}}$이 어떤 값이 되더라도 $P(x_{\text{new}} \mid y = C_k)$ 를 계산할 수 있다. 우도 모형 우도의 모형으로 많이 사용하는 것은 다음과 같다. 베르누이 분포 $x$가 0 또는 1 값만을 가질 수 있다. $x$가 1 이 될 확률은 고정되어 있다. 예 Step1: 베르누이 분포 나이브 베이즈 모형 베르누이 나이브 베이즈 모형에서는 타겟 변수뿐 아니라 독립 변수도 0 또는 1의 값을 가져야 한다. 예를 들어 전자우편과 같은 문서 내에 특정한 단어가 포함되어 있는지의 여부는 베르누이 확률 변수로 모형화할 수 있으므로 스팸 필터링에 사용할 수 있다. Step2: 다항 분포 나이브 베이즈 모형 Step3: 예 1 Step4: 감성 분석 Sentiment Analysis 서울대 박은정님의 네이버 영화 감상평에 대한 감성 분석 예제 https Step5: CountVectorize 사용 Step6: TfidfVectorizer 사용 Step7: 형태소 분석기 사용 Step8: 최적화
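In brief, the (Korean-language) walkthrough below derives the naive Bayes posterior P(y = C_k | x) ∝ P(x | y = C_k) P(y = C_k) under the independence ("naive") assumption, introduces Bernoulli, multinomial, and Gaussian likelihood models, and then works through scikit-learn's GaussianNB, BernoulliNB, and MultinomialNB, ending with MultinomialNB pipelines on the 20 Newsgroups data and on Naver movie-review sentiment data. As a compact English-language sketch of the three classes on made-up toy data (not part of the original notebook):

import numpy as np
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB

X_real = np.array([[-2.1], [-1.8], [2.0], [2.3]])   # continuous feature -> Gaussian likelihood
X_bin = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])  # 0/1 features -> Bernoulli likelihood
X_cnt = np.array([[3, 0], [2, 1], [0, 4], [1, 3]])  # count features -> multinomial likelihood
y = np.array([0, 0, 1, 1])

print(GaussianNB().fit(X_real, y).predict_proba([[0.5]]))
print(BernoulliNB().fit(X_bin, y).predict_proba([[1, 0]]))
print(MultinomialNB().fit(X_cnt, y).predict_proba([[2, 2]]))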
Python Code: np.random.seed(0) X0 = sp.stats.norm(-2, 1).rvs(40) X1 = sp.stats.norm(+2, 1).rvs(60) X = np.hstack([X0, X1])[:, np.newaxis] y0 = np.zeros(40) y1 = np.ones(60) y = np.hstack([y0, y1]) sns.distplot(X0, rug=True, kde=False, norm_hist=True, label="class 0") sns.distplot(X1, rug=True, kde=False, norm_hist=True, label="class 1") plt.legend() plt.xlim(-6,6) plt.show() from sklearn.naive_bayes import GaussianNB clf_norm = GaussianNB().fit(X, y) clf_norm.classes_ clf_norm.class_count_ clf_norm.class_prior_ clf_norm.theta_, clf_norm.sigma_ xx = np.linspace(-6, 6, 100) p0 = sp.stats.norm(clf_norm.theta_[0], clf_norm.sigma_[0]).pdf(xx) p1 = sp.stats.norm(clf_norm.theta_[1], clf_norm.sigma_[1]).pdf(xx) sns.distplot(X0, rug=True, kde=False, norm_hist=True, color="r", label="class 0 histogram") sns.distplot(X1, rug=True, kde=False, norm_hist=True, color="b", label="class 1 histogram") plt.plot(xx, p0, c="r", label="class 0 est. pdf") plt.plot(xx, p1, c="b", label="class 1 est. pdf") plt.legend() plt.show() x_new = -1 clf_norm.predict_proba([[x_new]]) px = sp.stats.norm(clf_norm.theta_, np.sqrt(clf_norm.sigma_)).pdf(x_new) px p = px.flatten() * clf_norm.class_prior_ p clf_norm.class_prior_ p / p.sum() Explanation: 나이브 베이즈 분류 모형 나이브 베이즈 분류 모형(Naive Bayes classification model)은 대표적인 확률적 생성 모형이다. 타겟 변수 $y$의 각 클래스 ${C_1,\cdots,C_K}$ 에 대한 독립 변수 $x$의 조건부 확률 분포 정보 $p(x \mid y = C_k)$ 를 사용하여 주어진 새로운 독립 변수 값 $x_{\text{new}}$에 대한 타켓 변수의 각 클래스의 조건부 확률 $p(y = C_k \mid x_{\text{new}})$ 를 추정한 후 가장 조건부 확률이 큰 클래스 $k$를 선택하는 방법이다. 조건부 확률의 계산 다음과 같이 베이즈 규칙을 사용하여 조건부 확률 $p(y = C_k \mid x_{\text{new}})$ 을 계산한다. $$ P(y = C_k \mid x_{\text{new}}) = \dfrac{P(x_{\text{new}} \mid y = C_k)\; P(y = C_k)}{P(x_{\text{new}})} $$ 최종적으로는 각 클래스 $k$에 대한 확률을 비교하여 최고값을 계산하기만 하면 되므로 분모에 있는 주변 확률(marginal probability) ${P(x_{\text{new}})}$은 계산하지 않는다. $$ P(y = C_k \mid x_{\text{new}}) \;\; \propto \;\; P(x_{\text{new}} \mid y = C_k) \; P(y = C_k) $$ 여기에서 사전 확률(prior) $P(y = C_k)$는 다음과 같이 쉽게 구할 수 있다. $$ P(y = C_k) \approx \frac{\text{number of samples with }y = C_k}{\text{number of all samples}} $$ $y$에 대한 $x$의 조건부 확률인 우도(likelihood)의 경우에는 일반적으로 정규 분포나 베르누이 분포와 같은 특정한 모형을 가정하여 다음과 같이 계산한다. $P(x \mid y = C_k)$ 가 특정한 확률 분포 모형을 따른다고 가정한다. 트레이닝 데이터 ${x_1, \cdots, x_N}$을 사용하여 이 모형의 모수(parameter)를 구한다. 모수를 알고 있으므로 새로운 독립 변수 값 $x_{\text{new}}$이 어떤 값이 되더라도 $P(x_{\text{new}} \mid y = C_k)$ 를 계산할 수 있다. 우도 모형 우도의 모형으로 많이 사용하는 것은 다음과 같다. 베르누이 분포 $x$가 0 또는 1 값만을 가질 수 있다. $x$가 1 이 될 확률은 고정되어 있다. 예: 동전을 던진 결과로 어느 동전을 던졌는지를 찾아내는 모형 $$ P(x_i \mid y = C_k) = \theta_k^x (1-\theta_k)^{(1-x_i)} $$ 다항 분포 $(x_1, \ldots, x_n)$ 이 0 또는 양의 정수 예: 주사위를 던진 결과로 어느 주사위를 던졌는지를 찾아내는 모형 $$ P(x_1, \ldots, x_n \mid y = C_k) = \prod_i \theta_k^{x_i}$$ 가우시안 정규 분포 $x$가 실수로 특정한 값 근처 예: 시험 점수로 학생이 누구인지를 찾아내는 모형 $$ P(x_i \mid y = C_k) = \dfrac{1}{\sqrt{2\pi\sigma_k^2}} \exp \left(-\dfrac{(x_i-\mu_k)^2}{2\sigma_k^2}\right) $$ 나이브 가정 독립 변수 $x$가 다차원(multi-dimensional) $x = (x_1, \ldots, x_n)$ 이면 위에서 사용한 우도 $P(x \mid y = C_k)$ 는 원래 모든 $x_i$에 대한 결합 확률(joint probability) $P(x_1, \ldots, x_n \mid y = C_k)$ 을 사용해야 한다. 그러나 이러한 결합 확률은 실제로 입수하기 어렵기 때문에 모든 차원의 개별 독립 변수 요소들이 서로 독립(independent)이라는 가정을 흔히 사용한다. 이러한 가정을 나이브 가정(Naive assumption)이라고 한다. 나이브 가정하에서는 결합 확률이 개별 확률의 곱으로 나타난다. $$ P(x_1, \ldots, x_n \mid y = C_k) = \prod_{i=1}^n P(x_i \mid y = C_k) $$ $$ P(y = C_k \mid x_{\text{new}}) \;\; \propto \;\; \prod_{i=1}^n P(x_{\text{new},i} \mid y = C_k)\; P(y = C_k) $$ Scikit-Learn에서 제공하는 나이브 베이즈 모형 Scikit-Learn의 naive_bayes 서브패키지에서는 다음과 같은 세가지 나이브 베이즈 모형 클래스를 제공한다. 
BernoulliNB: 베르누이 분포 나이브 베이즈 MultinomialNB: 다항 분포 나이브 베이즈 GaussianNB: 가우시안 정규 분포 나이브 베이즈 이 클래스들은 다음과 같은 속성값 및 메서드를 가진다. classes_: 공통 타겟 Y의 클래스(라벨) class_count_: 공통 타겟 Y의 값이 특정한 클래스인 표본 데이터의 수 feature_count_: 베르누이 분포나 다항 분포 타겟 Y의 값이 특정한 클래스이면서 독립 변수 X의 값이 1인 표본 데이터의 수 (베르누이 분포). 타겟 Y의 값이 특정한 클래스인 독립 변수 X의 값의 합 (다항 분포). 독립 변수 값이 1또는 0만 가지는 경우에는 표본 데이터의 수가 된다. class_prior_: 가우시안 정규 분포 타겟 Y의 무조건부 확률 분포 $ P(Y) $ class_log_prior_: 베르누이 분포나 다항 분포 타겟 Y의 무조건부 확률 분포의 로그 $ \log P(Y) $ theta_, sigma_ : 가우시안 정규 분포 가우시안 정규 분포의 기댓값 $\mu$ 과 분산 $\sigma^2$ feature_log_prob_: 베르누이 분포나 다항 분포 베르누이 분포 혹은 다항 분포의 모수 벡터의 로그 $$ \log \theta = (\log \theta_1, \ldots, \log \theta_n) = \left( \log \dfrac{N_i}{N}, \ldots, \log \dfrac{N_n}{N} \right)$$ 스무딩(smoothing) $$ \hat{\theta} = \frac{ N_{i} + \alpha}{N + \alpha n} $$ predict_proba(x_new) : 공통 조건부 확률 분포 $ P(Y \mid X_{\text{new}}) $ 가우시안 정규 분포 나이브 베이즈 모형 End of explanation np.random.seed(0) X = np.random.randint(2, size=(10, 4)) y = np.array([0,0,0,0,1,1,1,1,1,1]) print(X) print(y) from sklearn.naive_bayes import BernoulliNB clf_bern = BernoulliNB().fit(X, y) clf_bern.classes_ clf_bern.class_count_ np.exp(clf_bern.class_log_prior_) fc = clf_bern.feature_count_ fc fc / np.repeat(clf_bern.class_count_[:, np.newaxis], 4, axis=1) theta = np.exp(clf_bern.feature_log_prob_) theta x_new = np.array([1, 1, 0, 0]) clf_bern.predict_proba([x_new]) p = ((theta**x_new)*(1-theta)**(1-x_new)).prod(axis=1)*np.exp(clf_bern.class_log_prior_) p / p.sum() x_new = np.array([0, 0, 1, 1]) clf_bern.predict_proba([x_new]) p = ((theta**x_new)*(1-theta)**(1-x_new)).prod(axis=1)*np.exp(clf_bern.class_log_prior_) p / p.sum() Explanation: 베르누이 분포 나이브 베이즈 모형 베르누이 나이브 베이즈 모형에서는 타겟 변수뿐 아니라 독립 변수도 0 또는 1의 값을 가져야 한다. 예를 들어 전자우편과 같은 문서 내에 특정한 단어가 포함되어 있는지의 여부는 베르누이 확률 변수로 모형화할 수 있으므로 스팸 필터링에 사용할 수 있다. 
End of explanation from sklearn.naive_bayes import MultinomialNB clf_mult = MultinomialNB().fit(X, y) clf_mult.classes_ clf_mult.class_count_ fc = clf_mult.feature_count_ fc fc / np.repeat(fc.sum(axis=1)[:, np.newaxis], 4, axis=1) clf_mult.alpha (fc + clf_mult.alpha) / (np.repeat(fc.sum(axis=1)[:, np.newaxis], 4, axis=1) + clf_mult.alpha * X.shape[1]) theta = np.exp(clf_mult.feature_log_prob_) theta x_new = np.array([21, 35, 29, 14]) clf_mult.predict_proba([x_new]) p = (theta**x_new).prod(axis=1)*np.exp(clf_bern.class_log_prior_) p / p.sum() x_new = np.array([18, 24, 35, 24]) clf_mult.predict_proba([x_new]) Explanation: 다항 분포 나이브 베이즈 모형 End of explanation from sklearn.datasets import fetch_20newsgroups from sklearn.cross_validation import train_test_split news = fetch_20newsgroups(subset="all") X_train, X_test, y_train, y_test = train_test_split(news.data, news.target, test_size=0.1, random_state=1) from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline clf_1 = Pipeline([ ('vect', CountVectorizer()), ('clf', MultinomialNB()), ]) clf_2 = Pipeline([ ('vect', TfidfVectorizer()), ('clf', MultinomialNB()), ]) clf_3 = Pipeline([ ('vect', TfidfVectorizer(token_pattern=r"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b")), ('clf', MultinomialNB()), ]) clf_4 = Pipeline([ ('vect', TfidfVectorizer(stop_words="english", token_pattern=r"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b")), ('clf', MultinomialNB()), ]) clf_5 = Pipeline([ ('vect', TfidfVectorizer(stop_words="english", token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b")), ('clf', MultinomialNB(alpha=0.01)), ]) from sklearn.cross_validation import cross_val_score, KFold from scipy.stats import sem for i, clf in enumerate([clf_1, clf_2, clf_3, clf_4, clf_5]): scores = cross_val_score(clf, X_test, y_test, cv=5) print(("Model {0:d}: Mean score: {1:.3f} (+/-{2:.3f})").format(i, np.mean(scores), sem(scores))) Explanation: 예 1: 뉴스 그룹 End of explanation import codecs def read_data(filename): with codecs.open(filename, encoding='utf-8', mode='r') as f: data = [line.split('\t') for line in f.read().splitlines()] data = data[1:] # header 제외 return data train_data = read_data('/home/dockeruser/data/nsmc/ratings_train.txt') test_data = read_data('/home/dockeruser/data/nsmc/ratings_test.txt') X = zip(*train_data)[1] y = zip(*train_data)[2] y = np.array(y, dtype=int) from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=10000, test_size=10000) len(X_train), len(X_test) from konlpy.utils import pprint pprint((X[0], y[0])) %%time def tokenize(doc): return ['/'.join(t) for t in pos_tagger.pos(doc, norm=True, stem=True)] train_docs = [(tokenize(row[1]), row[2]) for row in train_data[:10000]] tokens = [t for d in train_docs for t in d[0]] import nltk text = nltk.Text(tokens, name='NMSC') mpl.rcParams["font.family"] = "NanumGothic" plt.figure(figsize=(12,10)) text.plot(50) plt.show() Explanation: 감성 분석 Sentiment Analysis 서울대 박은정님의 네이버 영화 감상평에 대한 감성 분석 예제 https://github.com/e9t/nsmc https://www.lucypark.kr/slides/2015-pyconkr/ 데이터 전처리 End of explanation from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.metrics import classification_report clf_1 = Pipeline([ ('vect', CountVectorizer()), ('clf', MultinomialNB()), ]) %%time clf_1.fit(X_train, y_train) 
pprint(list(clf_1.named_steps["vect"].vocabulary_)[:10]) %%time print(classification_report(y_test, clf_1.predict(X_test))) Explanation: CountVectorize 사용 End of explanation from sklearn.feature_extraction.text import TfidfVectorizer clf_2 = Pipeline([ ('vect', TfidfVectorizer()), ('clf', MultinomialNB()), ]) %%time clf_2.fit(X_train, y_train) %%time print(classification_report(y_test, clf_2.predict(X_test))) Explanation: TfidfVectorizer 사용 End of explanation from konlpy.tag import Twitter pos_tagger = Twitter() def tokenize_pos(doc): return ['/'.join(t) for t in pos_tagger.pos(doc, norm=True, stem=True)] clf_3 = Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize_pos)), ('clf', MultinomialNB()), ]) %%time clf_3.fit(X_train, y_train) pprint(list(clf_3.named_steps["vect"].vocabulary_)[:10]) %%time print(classification_report(y_test, clf_3.predict(X_test), digits=4)) vect3 = clf_3.named_steps["vect"] idx3 = np.array(np.argsort(vect3.transform(X_train).sum(axis=0)))[0] voca3 = np.array(vect3.get_feature_names()).flatten() pprint(voca3[idx3[-20:]].tolist()) Explanation: 형태소 분석기 사용 End of explanation clf_4 = Pipeline([ ('vect', TfidfVectorizer(tokenizer=tokenize_pos, ngram_range=(1,2))), ('clf', MultinomialNB()), ]) %%time clf_4.fit(X_train, y_train) %%time print(classification_report(y_test, clf_4.predict(X_test), digits=4)) Explanation: 최적화 End of explanation
2,470
Given the following text description, write Python code to implement the functionality described below step by step Description: Fitting Models Exercise 1 Imports Step1: Fitting a quadratic curve For this problem we are going to work with the following model Step2: First, generate a dataset using this model using these parameters and the following characteristics Step3: Now fit the model to the dataset to recover estimates for the model's parameters
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt Explanation: Fitting Models Exercise 1 Imports End of explanation a_true = 0.5 b_true = 2.0 c_true = -4.0 Explanation: Fitting a quadratic curve For this problem we are going to work with the following model: $$ y_{model}(x) = a x^2 + b x + c $$ The true values of the model parameters are as follows: End of explanation # YOUR CODE HERE x = np.linspace(-5, 5, 30) y = a_true*(x**2) + b_true*(x) + [c_true]*30 + 2*np.random.randn(30) plt.scatter(x, y) assert True # leave this cell for grading the raw data generation and plot Explanation: First, generate a dataset using this model using these parameters and the following characteristics: For your $x$ data use 30 uniformly spaced points between $[-5,5]$. Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal). After you generate the data, make a plot of the raw data (use points). End of explanation # YOUR CODE HERE def model(x, a, b, c): return a*x**2 + b*x + c theta_best, theta_cov = opt.curve_fit(model, x, y, sigma=2) print("a = ", theta_best[0], " +- ", theta_cov[0,0]) print("b = ", theta_best[1], " +- ", theta_cov[1,1]) print("c = ", theta_best[2], " +- ", theta_cov[2,2]) fitline = theta_best[0]*x**2 + theta_best[1]*x + theta_best[2] plt.plot(x, fitline, color="r") plt.scatter(x, y) plt.xlabel("x") plt.ylabel("y") assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors Explanation: Now fit the model to the dataset to recover estimates for the model's parameters: Print out the estimates and uncertainties of each parameter. Plot the raw data and best fit of the model. End of explanation
2,471
Given the following text description, write Python code to implement the functionality described below step by step Description: Using wrappers for Gensim models for working with Keras This tutorial is about using gensim models as a part of your Keras models. The wrappers available (as of now) are Step1: Next we create a dummy set of sentences to train our Word2Vec model. Step2: Then, we create the Word2Vec model by passing appropriate parameters. Step3: Integration with Keras Step4: We would use the layer returned by the function get_embedding_layer in the Keras model. Step5: Next, we construct the Keras model. Step6: Now, we input the two words which we wish to compare and retrieve the value predicted by the model as the similarity score of the two words. Step7: Integration with Keras Step8: As the first step of the task, we iterate over the folder in which our text samples are stored, and format them into a list of samples. Also, we prepare at the same time a list of class indices matching the samples. Step9: Then, we format our text samples and labels into tensors that can be fed into a neural network. To do this, we rely on Keras utilities keras.preprocessing.text.Tokenizer and keras.preprocessing.sequence.pad_sequences. Step10: As the next step, we prepare the embedding layer to be used in our actual Keras model. Step11: Finally, we create a small 1D convnet to solve our classification problem. Step12: As can be seen from the results above, the accuracy obtained is not that high. This is because of the small size of training data used and we could expect to obtain better accuracy for training data of larger size. Integration with Keras Step17: We now define some global variables and utility functions which would be used in the code further Step18: We create our word2vec model first. We could either train our model or user pre-trained vectors. Step19: We load the training data for the Keras model. Step20: Next, we create out Keras model. Step21: Next, we train the classifier. Step22: Our classifier is now ready to predict classes for input data.
Python Code: from gensim.models import word2vec Explanation: Using wrappers for Gensim models for working with Keras This tutorial is about using gensim models as a part of your Keras models. The wrappers available (as of now) are : * Word2Vec (uses the function get_embedding_layer defined in gensim.models.keyedvectors) Word2Vec To use Word2Vec, we import the corresponding module. End of explanation sentences = [ ['human', 'interface', 'computer'], ['survey', 'user', 'computer', 'system', 'response', 'time'], ['eps', 'user', 'interface', 'system'], ['system', 'human', 'system', 'eps'], ['user', 'response', 'time'], ['trees'], ['graph', 'trees'], ['graph', 'minors', 'trees'], ['graph', 'minors', 'survey'] ] Explanation: Next we create a dummy set of sentences to train our Word2Vec model. End of explanation model = word2vec.Word2Vec(sentences, size=100, min_count=1, hs=1) Explanation: Then, we create the Word2Vec model by passing appropriate parameters. End of explanation import numpy as np from keras.engine import Input from keras.models import Model from keras.layers.merge import dot Explanation: Integration with Keras : Cosine Similarity Task As an example of integration of Gensim's Word2Vec model with Keras, we consider a word similarity task where we compute the cosine distance as a measure of similarity between the two words. End of explanation wv = model.wv embedding_layer = wv.get_embedding_layer() Explanation: We would use the layer returned by the function get_embedding_layer in the Keras model. End of explanation input_a = Input(shape=(1,), dtype='int32', name='input_a') input_b = Input(shape=(1,), dtype='int32', name='input_b') embedding_a = embedding_layer(input_a) embedding_b = embedding_layer(input_b) similarity = dot([embedding_a, embedding_b], axes=2, normalize=True) keras_model = Model(input=[input_a, input_b], output=similarity) keras_model.compile(optimizer='sgd', loss='mse') Explanation: Next, we construct the Keras model. End of explanation word_a = 'graph' word_b = 'trees' # output is the cosine distance between the two words (as a similarity measure) output = keras_model.predict([np.asarray([model.wv.vocab[word_a].index]), np.asarray([model.wv.vocab[word_b].index])]) print output Explanation: Now, we input the two words which we wish to compare and retrieve the value predicted by the model as the similarity score of the two words. End of explanation import os import sys import keras import numpy as np from gensim.models import word2vec from keras.models import Model from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils.np_utils import to_categorical from keras.layers import Input, Dense, Flatten from keras.layers import Conv1D, MaxPooling1D from sklearn.datasets import fetch_20newsgroups Explanation: Integration with Keras : 20NewsGroups Task To see how Gensim's Word2Vec model could be integrated with Keras while dealing with a real supervised (classification) task, we consider the 20NewsGroups task. Here, we take a smaller version of this data by taking a subset of the documents to be classified. First, we import the necessary modules. 
End of explanation texts = [] # list of text samples texts_w2v = [] # used to train the word embeddings labels = [] # list of label ids #using 3 categories for training the classifier data = fetch_20newsgroups(subset='train', categories=['alt.atheism', 'comp.graphics', 'sci.space']) for index in range(len(data)): label_id = data.target[index] file_data = data.data[index] i = file_data.find('\n\n') # skip header if i > 0: file_data = file_data[i:] try: curr_str = str(file_data) sentence_list = curr_str.split('\n') for sentence in sentence_list: sentence = (sentence.strip()).lower() texts.append(sentence) texts_w2v.append(sentence.split(' ')) labels.append(label_id) except: None Explanation: As the first step of the task, we iterate over the folder in which our text samples are stored, and format them into a list of samples. Also, we prepare at the same time a list of class indices matching the samples. End of explanation MAX_SEQUENCE_LENGTH = 1000 # Vectorize the text samples into a 2D integer tensor tokenizer = Tokenizer() tokenizer.fit_on_texts(texts) sequences = tokenizer.texts_to_sequences(texts) # word_index = tokenizer.word_index data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH) labels = to_categorical(np.asarray(labels)) x_train = data y_train = labels Explanation: Then, we format our text samples and labels into tensors that can be fed into a neural network. To do this, we rely on Keras utilities keras.preprocessing.text.Tokenizer and keras.preprocessing.sequence.pad_sequences. End of explanation Keras_w2v = word2vec.Word2Vec(min_count=1) Keras_w2v.build_vocab(texts_w2v) Keras_w2v.train(texts, total_examples=Keras_w2v.corpus_count, epochs=Keras_w2v.iter) Keras_w2v_wv = Keras_w2v.wv embedding_layer = Keras_w2v_wv.get_embedding_layer() Explanation: As the next step, we prepare the embedding layer to be used in our actual Keras model. End of explanation sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') embedded_sequences = embedding_layer(sequence_input) x = Conv1D(128, 5, activation='relu')(embedded_sequences) x = MaxPooling1D(5)(x) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(5)(x) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(35)(x) # global max pooling x = Flatten()(x) x = Dense(128, activation='relu')(x) preds = Dense(y_train.shape[1], activation='softmax')(x) model = Model(sequence_input, preds) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc']) model.fit(x_train, y_train, epochs=5) Explanation: Finally, we create a small 1D convnet to solve our classification problem. End of explanation from keras.models import Sequential from keras.layers import Dropout from keras.regularizers import l2 from keras.models import Model from keras.engine import Input from keras.preprocessing.sequence import pad_sequences from keras.preprocessing.text import Tokenizer from gensim.models import keyedvectors from collections import defaultdict import pandas as pd Explanation: As can be seen from the results above, the accuracy obtained is not that high. This is because of the small size of training data used and we could expect to obtain better accuracy for training data of larger size. Integration with Keras : Another classification task In this task, we train our model to predict the category of the input text. 
We start by importing the relevant modules and libraries : End of explanation # global variables nb_filters = 1200 # number of filters n_gram = 2 # n-gram, or window size of CNN/ConvNet maxlen = 15 # maximum number of words in a sentence vecsize = 300 # length of the embedded vectors in the model cnn_dropout = 0.0 # dropout rate for CNN/ConvNet final_activation = 'softmax' # activation function. Options: softplus, softsign, relu, tanh, sigmoid, hard_sigmoid, linear. dense_wl2reg = 0.0 # dense_wl2reg: L2 regularization coefficient dense_bl2reg = 0.0 # dense_bl2reg: L2 regularization coefficient for bias optimizer = 'adam' # optimizer for gradient descent. Options: sgd, rmsprop, adagrad, adadelta, adam, adamax, nadam # utility functions def retrieve_csvdata_as_dict(filepath): Retrieve the training data in a CSV file, with the first column being the class labels, and second column the text data. It returns a dictionary with the class labels as keys, and a list of short texts as the value for each key. df = pd.read_csv(filepath) category_col, descp_col = df.columns.values.tolist() shorttextdict = dict() for category, descp in zip(df[category_col], df[descp_col]): if type(descp) == str: shorttextdict.setdefault(category, []).append(descp) return shorttextdict def subjectkeywords(): Return an example data set, with three subjects and corresponding keywords. This is in the format of the training input. data_path = os.path.join(os.getcwd(), 'datasets/keras_classifier_training_data.csv') return retrieve_csvdata_as_dict(data_path) def convert_trainingdata(classdict): Convert the training data into format put into the neural networks. classlabels = classdict.keys() lblidx_dict = dict(zip(classlabels, range(len(classlabels)))) # tokenize the words, and determine the word length phrases = [] indices = [] for label in classlabels: for shorttext in classdict[label]: shorttext = shorttext if type(shorttext) == str else '' category_bucket = [0]*len(classlabels) category_bucket[lblidx_dict[label]] = 1 indices.append(category_bucket) phrases.append(shorttext) return classlabels, phrases, indices def process_text(text): Process the input text by tokenizing and padding it. tokenizer = Tokenizer() tokenizer.fit_on_texts(text) x_train = tokenizer.texts_to_sequences(text) x_train = pad_sequences(x_train, maxlen=maxlen) return x_train Explanation: We now define some global variables and utility functions which would be used in the code further : End of explanation # we are training our Word2Vec model here w2v_training_data_path = os.path.join(os.getcwd(), 'datasets/word_vectors_training_data.txt') input_data = word2vec.LineSentence(w2v_training_data_path) w2v_model = word2vec.Word2Vec(input_data, size=300) w2v_model_wv = w2v_model.wv # Alternatively we could have imported pre-trained word-vectors like : # w2v_model_wv = keyedvectors.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True) # The dataset 'GoogleNews-vectors-negative300.bin.gz' can be downloaded from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit Explanation: We create our word2vec model first. We could either train our model or user pre-trained vectors. End of explanation trainclassdict = subjectkeywords() nb_labels = len(trainclassdict) # number of class labels Explanation: We load the training data for the Keras model. 
End of explanation # get embedding layer corresponding to our trained Word2Vec model embedding_layer = w2v_model_wv.get_embedding_layer() # create a convnet to solve our classification task sequence_input = Input(shape=(maxlen,), dtype='int32') embedded_sequences = embedding_layer(sequence_input) x = Conv1D(filters=nb_filters, kernel_size=n_gram, padding='valid', activation='relu', input_shape=(maxlen, vecsize))(embedded_sequences) x = MaxPooling1D(pool_size=maxlen - n_gram + 1)(x) x = Flatten()(x) preds = Dense(nb_labels, activation=final_activation, kernel_regularizer=l2(dense_wl2reg), bias_regularizer=l2(dense_bl2reg))(x) Explanation: Next, we create out Keras model. End of explanation classlabels, x_train, y_train = convert_trainingdata(trainclassdict) tokenizer = Tokenizer() tokenizer.fit_on_texts(x_train) x_train = tokenizer.texts_to_sequences(x_train) x_train = pad_sequences(x_train, maxlen=maxlen) model = Model(sequence_input, preds) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc']) fit_ret_val = model.fit(x_train, y_train, epochs=10) Explanation: Next, we train the classifier. End of explanation input_text = 'artificial intelligence' matrix = process_text(input_text) predictions = model.predict(matrix) # get the actual categories from output scoredict = {} for idx, classlabel in zip(range(len(classlabels)), classlabels): scoredict[classlabel] = predictions[0][idx] print scoredict Explanation: Our classifier is now ready to predict classes for input data. End of explanation
2,472
Given the following text description, write Python code to implement the functionality described below step by step Description: Statistical Data Modeling Some or most of you have probably taken some undergraduate- or graduate-level statistics courses. Unfortunately, the curricula for most introductory statisics courses are mostly focused on conducting statistical hypothesis tests as the primary means for interest Step1: Estimation An recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data. In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data. e.g. $\mu$ and $\sigma^2$ in the case of the normal distribution Step2: Fitting data to probability distributions We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria Step3: The first step is recognixing what sort of distribution to fit our data to. A couple of observations Step4: Now, let's calculate the sample moments of interest, the means and variances by month Step5: We then use these moments to estimate $\alpha$ and $\beta$ for each month Step6: We can use the gamma.pdf function in scipy.stats.distributions to plot the ditribtuions implied by the calculated alphas and betas. For example, here is January Step7: Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution Step8: Maximum Likelihood Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. There is a ton of theory regarding ML. We will restrict ourselves to the mechanics here. Say we have some data $y = y_1,y_2,\ldots,y_n$ that is distributed according to some distribution Step9: The product $\prod_{i=1}^n Pr(y_i | \theta)$ gives us a measure of how likely it is to observe values $y_1,\ldots,y_n$ given the parameters $\theta$. Maximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\theta)$ to maximize for a given set of observations. We call this function the likelihood function, because it is a measure of how likely the observations are if the model is true. Given these data, how likely is this model? In the above model, the data were drawn from a Poisson distribution with parameter $\lambda =5$. $$L(y|\lambda=5) = \frac{e^{-5} 5^y}{y!}$$ So, for any given value of $y$, we can calculate its likelihood Step10: We can plot the likelihood function for any value of the parameter(s) Step11: How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$. Step12: Why are we interested in the likelihood function? A reasonable estimate of the true, unknown value for the parameter is one which maximizes the likelihood function. So, inference is reduced to an optimization problem. 
Going back to the rainfall data, if we are using a gamma distribution we need to maximize Step13: Here is a graphical example of how Newtone-Raphson converges on a solution, using an arbitrary function Step14: To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest. In our case, this is Step15: where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function. Step16: Time to optimize! Step17: And now plug this back into the solution for beta Step18: We can compare the fit of the estimates derived from MLE to those from the method of moments Step19: For some common distributions, SciPy includes methods for fitting via MLE Step20: This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution. Example Step21: We can construct a log likelihood for this function using the conditional form Step22: For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages Step23: In general, simulating data is a terrific way of testing your model before using it with real data. Kernel density estimates In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the disribution non-parametrically (i.e. making no assumptions about the form of the underlying distribution) using kernel density estimation. Step24: SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution Step25: Exercise Step26: Regression models A general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. For example, we may wish to know how different medical interventions influence the incidence or duration of disease, or perhaps a how baseball player's performance varies as a function of age. Step27: We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$. <div style="font-size Step28: Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences Step29: We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a cubic model Step30: Although polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters. For some data, it may be reasonable to consider polynomials of order>2. For example, consider the relationship between the number of home runs a baseball player hits and the number of runs batted in (RBI) they accumulate; clearly, the relationship is positive, but we may not expect a linear relationship. Step31: Of course, we need not fit least squares models by hand. 
The statsmodels package implements least squares models that allow for model fitting in a single line Step32: Exercise Step33: One approach is to use an information-theoretic criterion to select the most appropriate model. For example Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as Step34: Hence, we would select the 2-parameter (linear) model. Logistic Regression Fitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous? male/female pass/fail died/survived Let's consider the problem of predicting survival in the Titanic disaster, based on our available information. For example, lets say that we want to predict survival as a function of the fare paid for the journey. Step35: I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale. Clearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line. Step36: If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side. Stochastic model Rather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. We need to consider a different probability model for this exerciese however; let's consider the Bernoulli distribution as a generative model for our data Step37: And here's the logit function Step38: The inverse of the logit transformation is Step39: Remove null values from variables Step40: ... and fit the model. Step41: As with our least squares model, we can easily fit logistic regression models in statsmodels, in this case using the GLM (generalized linear model) class with a binomial error distribution specified. Step42: Exercise Step43: Similarly, we can use the random.randint method to generate a sample with replacement, which we can use when bootstrapping. Step44: We regard S as an "estimate" of population P population Step45: Bootstrap Estimates From our bootstrapped samples, we can extract estimates of the expectation and its variance Step46: Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T Step47: Bootstrap error There are two sources of error in bootstrap estimates
Python Code: import numpy as np import pandas as pd # Set some Pandas options pd.set_option('display.notebook_repr_html', False) pd.set_option('display.max_columns', 20) pd.set_option('display.max_rows', 25) Explanation: Statistical Data Modeling Some or most of you have probably taken some undergraduate- or graduate-level statistics courses. Unfortunately, the curricula for most introductory statisics courses are mostly focused on conducting statistical hypothesis tests as the primary means for interest: t-tests, chi-squared tests, analysis of variance, etc. Such tests seek to esimate whether groups or effects are "statistically significant", a concept that is poorly understood, and hence often misused, by most practioners. Even when interpreted correctly, statistical significance is a questionable goal for statistical inference, as it is of limited utility. A far more powerful approach to statistical analysis involves building flexible models with the overarching aim of estimating quantities of interest. This section of the tutorial illustrates how to use Python to build statistical models of low to moderate difficulty from scratch, and use them to extract estimates and associated measures of uncertainty. End of explanation x = array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604, 5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745, 1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 , 0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357, 1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ]) _ = hist(x, bins=8) Explanation: Estimation An recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data. In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data. e.g. $\mu$ and $\sigma^2$ in the case of the normal distribution End of explanation precip = pd.read_table("data/nashville_precip.txt", index_col=0, na_values='NA', delim_whitespace=True) precip.head() _ = precip.hist(sharex=True, sharey=True, grid=False) tight_layout() Explanation: Fitting data to probability distributions We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria: Method of moments chooses the parameters so that the sample moments (typically the sample mean and variance) match the theoretical moments of our chosen distribution. Maximum likelihood chooses the parameters to maximize the likelihood, which measures how likely it is to observe our given sample. Discrete Random Variables $$X = {0,1}$$ $$Y = {\ldots,-2,-1,0,1,2,\ldots}$$ Probability Mass Function: For discrete $X$, $$Pr(X=x) = f(x|\theta)$$ e.g. Poisson distribution The Poisson distribution models unbounded counts: <div style="font-size: 150%;"> $$Pr(X=x)=\frac{e^{-\lambda}\lambda^x}{x!}$$ * $X=\{0,1,2,\ldots\}$ * $\lambda > 0$ $$E(X) = \text{Var}(X) = \lambda$$ ### Continuous Random Variables $$X \in [0,1]$$ $$Y \in (-\infty, \infty)$$ **Probability Density Function**: For continuous $X$, $$Pr(x \le X \le x + dx) = f(x|\theta)dx \, \text{ as } \, dx \rightarrow 0$$ ![Continuous variable](http://upload.wikimedia.org/wikipedia/commons/e/ec/Exponential_pdf.svg) ***e.g. 
normal distribution*** <div style="font-size: 150%;"> $$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$ * $X \in \mathbf{R}$ * $\mu \in \mathbf{R}$ * $\sigma>0$ $$\begin{align}E(X) &= \mu \cr \text{Var}(X) &= \sigma^2 \end{align}$$ ### Example: Nashville Precipitation The dataset `nashville_precip.txt` contains [NOAA precipitation data for Nashville measured since 1871](http://bit.ly/nasvhville_precip_data). The gamma distribution is often a good fit to aggregated rainfall data, and will be our candidate distribution in this case. End of explanation precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True) Explanation: The first step is recognixing what sort of distribution to fit our data to. A couple of observations: The data are skewed, with a longer tail to the right than to the left The data are positive-valued, since they are measuring rainfall The data are continuous There are a few possible choices, but one suitable alternative is the gamma distribution: <div style="font-size: 150%;"> $$x \sim \text{Gamma}(\alpha, \beta) = \frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)}$$ </div> The method of moments simply assigns the empirical mean and variance to their theoretical counterparts, so that we can solve for the parameters. So, for the gamma distribution, the mean and variance are: <div style="font-size: 150%;"> $$ \hat{\mu} = \bar{X} = \alpha \beta $$ $$ \hat{\sigma}^2 = S^2 = \alpha \beta^2 $$ </div> So, if we solve for these parameters, we can use a gamma distribution to describe our data: <div style="font-size: 150%;"> $$ \alpha = \frac{\bar{X}^2}{S^2}, \, \beta = \frac{S^2}{\bar{X}} $$ </div> Let's deal with the missing value in the October data. Given what we are trying to do, it is most sensible to fill in the missing value with the average of the available values. End of explanation precip_mean = precip.mean() precip_mean precip_var = precip.var() precip_var Explanation: Now, let's calculate the sample moments of interest, the means and variances by month: End of explanation alpha_mom = precip_mean ** 2 / precip_var beta_mom = precip_var / precip_mean alpha_mom, beta_mom Explanation: We then use these moments to estimate $\alpha$ and $\beta$ for each month: End of explanation from scipy.stats.distributions import gamma hist(precip.Jan, normed=True, bins=20) plot(linspace(0, 10), gamma.pdf(linspace(0, 10), alpha_mom[0], beta_mom[0])) Explanation: We can use the gamma.pdf function in scipy.stats.distributions to plot the ditribtuions implied by the calculated alphas and betas. For example, here is January: End of explanation axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False) for ax in axs.ravel(): # Get month m = ax.get_title() # Plot fitted distribution x = linspace(*ax.get_xlim()) ax.plot(x, gamma.pdf(x, alpha_mom[m], beta_mom[m])) # Annotate with parameter estimates label = 'alpha = {0:.2f}\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m]) ax.annotate(label, xy=(10, 0.2)) tight_layout() Explanation: Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution: End of explanation y = np.random.poisson(5, size=100) plt.hist(y, bins=12, normed=True) xlabel('y'); ylabel('Pr(y)') Explanation: Maximum Likelihood Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. There is a ton of theory regarding ML. 
We will restrict ourselves to the mechanics here. Say we have some data $y = y_1,y_2,\ldots,y_n$ that is distributed according to some distribution: <div style="font-size: 120%;"> $$Pr(Y_i=y_i | \theta)$$ </div> Here, for example, is a Poisson distribution that describes the distribution of some discrete variables, typically counts: End of explanation poisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod() lam = 6 value = 10 poisson_like(value, lam) np.sum(poisson_like(yi, lam) for yi in y) lam = 8 np.sum(poisson_like(yi, lam) for yi in y) Explanation: The product $\prod_{i=1}^n Pr(y_i | \theta)$ gives us a measure of how likely it is to observe values $y_1,\ldots,y_n$ given the parameters $\theta$. Maximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\theta)$ to maximize for a given set of observations. We call this function the likelihood function, because it is a measure of how likely the observations are if the model is true. Given these data, how likely is this model? In the above model, the data were drawn from a Poisson distribution with parameter $\lambda =5$. $$L(y|\lambda=5) = \frac{e^{-5} 5^y}{y!}$$ So, for any given value of $y$, we can calculate its likelihood: End of explanation lambdas = np.linspace(0,15) x = 5 plt.plot(lambdas, [poisson_like(x, l) for l in lambdas]) xlabel('$\lambda$') ylabel('L($\lambda$|x={0})'.format(x)) Explanation: We can plot the likelihood function for any value of the parameter(s): End of explanation lam = 5 xvals = arange(15) plt.bar(xvals, [poisson_like(x, lam) for x in xvals]) xlabel('x') ylabel('Pr(X|$\lambda$=5)') Explanation: How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$. End of explanation from scipy.optimize import newton Explanation: Why are we interested in the likelihood function? A reasonable estimate of the true, unknown value for the parameter is one which maximizes the likelihood function. So, inference is reduced to an optimization problem. Going back to the rainfall data, if we are using a gamma distribution we need to maximize: $$\begin{align}l(\alpha,\beta) &= \sum_{i=1}^n \log[\beta^{\alpha} x^{\alpha-1} e^{-x/\beta}\Gamma(\alpha)^{-1}] \cr &= n[(\alpha-1)\overline{\log(x)} - \bar{x}\beta + \alpha\log(\beta) - \log\Gamma(\alpha)]\end{align}$$ (Its usually easier to work in the log scale) where $n = 2012 − 1871 = 141$ and the bar indicates an average over all i. We choose $\alpha$ and $\beta$ to maximize $l(\alpha,\beta)$. Notice $l$ is infinite if any $x$ is zero. We do not have any zeros, but we do have an NA value for one of the October data, which we dealt with above. Finding the MLE To find the maximum of any function, we typically take the derivative with respect to the variable to be maximized, set it to zero and solve for that variable. $$\frac{\partial l(\alpha,\beta)}{\partial \beta} = n\left(\frac{\alpha}{\beta} - \bar{x}\right) = 0$$ Which can be solved as $\beta = \alpha/\bar{x}$. However, plugging this into the derivative with respect to $\alpha$ yields: $$\frac{\partial l(\alpha,\beta)}{\partial \alpha} = \log(\alpha) + \overline{\log(x)} - \log(\bar{x}) - \frac{\Gamma(\alpha)'}{\Gamma(\alpha)} = 0$$ This has no closed form solution. We must use numerical optimization! 
Numerical optimization alogarithms take an initial "guess" at the solution, and iteratively improve the guess until it gets "close enough" to the answer. Here, we will use Newton-Raphson algorithm: <div style="font-size: 120%;"> $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$ </div> Which is available to us via SciPy: End of explanation # some function func = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1 xvals = np.linspace(0, 6) plot(xvals, func(xvals)) text(5.3, 2.1, '$f(x)$', fontsize=16) # zero line plot([0,6], [0,0], 'k-') # value at step n plot([4,4], [0,func(4)], 'k:') plt.text(4, -.2, '$x_n$', fontsize=16) # tangent line tanline = lambda x: -0.858 + 0.626*x plot(xvals, tanline(xvals), 'r--') # point at step n+1 xprime = 0.858/0.626 plot([xprime, xprime], [tanline(xprime), func(xprime)], 'k:') plt.text(xprime+.1, -.2, '$x_{n+1}$', fontsize=16) Explanation: Here is a graphical example of how Newtone-Raphson converges on a solution, using an arbitrary function: End of explanation from scipy.special import psi, polygamma dlgamma = lambda m, log_mean, mean_log: np.log(m) - psi(m) - log_mean + mean_log dl2gamma = lambda m, *args: 1./m - polygamma(1, m) Explanation: To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest. In our case, this is: End of explanation # Calculate statistics log_mean = precip.mean().apply(log) mean_log = precip.apply(log).mean() Explanation: where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function. End of explanation # Alpha MLE for December alpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1])) alpha_mle Explanation: Time to optimize! End of explanation beta_mle = alpha_mle/precip.mean()[-1] beta_mle Explanation: And now plug this back into the solution for beta: <div style="font-size: 120%;"> $$ \beta = \frac{\alpha}{\bar{X}} $$ End of explanation dec = precip.Dec dec.hist(normed=True, bins=10, grid=False) x = linspace(0, dec.max()) plot(x, gamma.pdf(x, alpha_mom[-1], beta_mom[-1]), 'm-') plot(x, gamma.pdf(x, alpha_mle, beta_mle), 'r--') Explanation: We can compare the fit of the estimates derived from MLE to those from the method of moments: End of explanation from scipy.stats import gamma gamma.fit(precip.Dec) Explanation: For some common distributions, SciPy includes methods for fitting via MLE: End of explanation x = np.random.normal(size=10000) a = -1 x_small = x < a while x_small.sum(): x[x_small] = np.random.normal(size=x_small.sum()) x_small = x < a _ = hist(x, bins=100) Explanation: This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution. Example: truncated distribution Suppose that we observe $Y$ truncated below at $a$ (where $a$ is known). If $X$ is the distribution of our observation, then: $$ P(X \le x) = P(Y \le x|Y \gt a) = \frac{P(a \lt Y \le x)}{P(Y \gt a)}$$ (so, $Y$ is the original variable and $X$ is the truncated variable) Then X has the density: $$f_X(x) = \frac{f_Y (x)}{1−F_Y (a)} \, \text{for} \, x \gt a$$ Suppose $Y \sim N(\mu, \sigma^2)$ and $x_1,\ldots,x_n$ are independent observations of $X$. We can use maximum likelihood to find $\mu$ and $\sigma$. 
First, we can simulate a truncated distribution using a while statement to eliminate samples that are outside the support of the truncated distribution. End of explanation from scipy.stats.distributions import norm trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) - np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum() Explanation: We can construct a log likelihood for this function using the conditional form: $$f_X(x) = \frac{f_Y (x)}{1−F_Y (a)} \, \text{for} \, x \gt a$$ End of explanation from scipy.optimize import fmin fmin(trunc_norm, np.array([1,2]), args=(-1, x)) Explanation: For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages: it does not require derivatives it can optimize (minimize) a vector of parameters SciPy implements this algorithm in its fmin function: End of explanation # Some random data y = np.random.random(15) * 10 y x = np.linspace(0, 10, 100) # Smoothing parameter s = 0.4 # Calculate the kernels kernels = np.transpose([norm.pdf(x, yi, s) for yi in y]) plot(x, kernels, 'k:') plot(x, kernels.sum(1)) plot(y, np.zeros(len(y)), 'ro', ms=10) Explanation: In general, simulating data is a terrific way of testing your model before using it with real data. Kernel density estimates In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the disribution non-parametrically (i.e. making no assumptions about the form of the underlying distribution) using kernel density estimation. End of explanation # Create a bi-modal distribution with a mixture of Normals. x1 = np.random.normal(0, 3, 50) x2 = np.random.normal(4, 1, 50) # Append by row x = np.r_[x1, x2] plt.hist(x, bins=8, normed=True) from scipy.stats import kde density = kde.gaussian_kde(x) xgrid = np.linspace(x.min(), x.max(), 100) plt.hist(x, bins=8, normed=True) plt.plot(xgrid, density(xgrid), 'r-') Explanation: SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution: End of explanation cdystonia = pd.read_csv("data/cdystonia.csv") cdystonia[cdystonia.obs==6].hist(column='twstrs', by=cdystonia.treat, bins=8) Explanation: Exercise: Cervical dystonia analysis Recall the cervical dystonia database, which is a clinical trial of botulinum toxin type B (BotB) for patients with cervical dystonia from nine U.S. sites. The response variable is measurements on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment). One way to check the efficacy of the treatment is to compare the distribution of TWSTRS for control and treatment patients at the end of the study. Use the method of moments or MLE to calculate the mean and variance of TWSTRS at week 16 for one of the treatments and the control group. Assume that the distribution of the twstrs variable is normal: $$f(x \mid \mu, \sigma^2) = \sqrt{\frac{1}{2\pi\sigma^2}} \exp\left{ -\frac{1}{2} \frac{(x-\mu)^2}{\sigma^2} \right}$$ End of explanation x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0]) y = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5]) plot(x,y,'ro') Explanation: Regression models A general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. 
For example, we may wish to know how different medical interventions influence the incidence or duration of disease, or perhaps a how baseball player's performance varies as a function of age. End of explanation ss = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2) ss([0,1],x,y) b0,b1 = fmin(ss, [0,1], args=(x,y)) b0,b1 plot(x, y, 'ro') plot([0,10], [b0, b0+b1*10]) plot(x, y, 'ro') plot([0,10], [b0, b0+b1*10]) for xi, yi in zip(x,y): plot([xi]*2, [yi, b0+b1*xi], 'k:') xlim(2, 9); ylim(0, 20) Explanation: We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$. <div style="font-size: 150%;"> $y_i = f(x_i) + \epsilon_i$ </div> where $f$ is some function, for example a linear function: <div style="font-size: 150%;"> $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$ </div> and $\epsilon_i$ accounts for the difference between the observed response $y_i$ and its prediction from the model $\hat{y_i} = \beta_0 + \beta_1 x_i$. This is sometimes referred to as process uncertainty. We would like to select $\beta_0, \beta_1$ so that the difference between the predictions and the observations is zero, but this is not usually possible. Instead, we choose a reasonable criterion: the smallest sum of the squared differences between $\hat{y}$ and $y$. <div style="font-size: 120%;"> $$R^2 = \sum_i (y_i - [\beta_0 + \beta_1 x_i])^2 = \sum_i \epsilon_i^2 $$ </div> Squaring serves two purposes: (1) to prevent positive and negative values from cancelling each other out and (2) to strongly penalize large deviations. Whether the latter is a good thing or not depends on the goals of the analysis. In other words, we will select the parameters that minimize the squared error of the model. End of explanation sabs = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x)) b0,b1 = fmin(sabs, [0,1], args=(x,y)) print b0,b1 plot(x, y, 'ro') plot([0,10], [b0, b0+b1*10]) Explanation: Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences: End of explanation ss2 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2) b0,b1,b2 = fmin(ss2, [1,1,-1], args=(x,y)) print b0,b1,b2 plot(x, y, 'ro') xvals = np.linspace(0, 10, 100) plot(xvals, b0 + b1*xvals + b2*(xvals**2)) Explanation: We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a cubic model: <div style="font-size: 150%;"> $y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$ </div> End of explanation ss3 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) - theta[3]*(x**3)) ** 2) bb = pd.read_csv("data/baseball.csv", index_col=0) plot(bb.hr, bb.rbi, 'r.') b0,b1,b2,b3 = fmin(ss3, [0,1,-1,0], args=(bb.hr, bb.rbi)) xvals = arange(40) plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3)) Explanation: Although polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters. For some data, it may be reasonable to consider polynomials of order>2. 
For example, consider the relationship between the number of home runs a baseball player hits and the number of runs batted in (RBI) they accumulate; clearly, the relationship is positive, but we may not expect a linear relationship. End of explanation import statsmodels.api as sm straight_line = sm.OLS(y, sm.add_constant(x)).fit() straight_line.summary() from statsmodels.formula.api import ols as OLS data = pd.DataFrame(dict(x=x, y=y)) cubic_fit = OLS('y ~ x + I(x**2)', data).fit() cubic_fit.summary() Explanation: Of course, we need not fit least squares models by hand. The statsmodels package implements least squares models that allow for model fitting in a single line: End of explanation def calc_poly(params, data): x = np.c_[[data**i for i in range(len(params))]] return np.dot(params, x) ssp = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2) betas = fmin(ssp, np.zeros(10), args=(x,y), maxiter=1e6) plot(x, y, 'ro') xvals = np.linspace(0, max(x), 100) plot(xvals, calc_poly(betas, xvals)) Explanation: Exercise: Polynomial function Write a function that specified a polynomial of arbitrary degree. Model Selection How do we choose among competing models for a given dataset? More parameters are not necessarily better, from the standpoint of model fit. For example, fitting a 9-th order polynomial to the sample data from the above example certainly results in an overfit. End of explanation n = len(x) aic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p RSS1 = ss(fmin(ss, [0,1], args=(x,y)), x, y) RSS2 = ss2(fmin(ss2, [1,1,-1], args=(x,y)), x, y) print aic(RSS1, 2, n), aic(RSS2, 3, n) Explanation: One approach is to use an information-theoretic criterion to select the most appropriate model. For example Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as: $$AIC = n \log(\hat{\sigma}^2) + 2p$$ where $p$ is the number of parameters in the model and $\hat{\sigma}^2 = RSS/(n-p-1)$. Notice that as the number of parameters increase, the residual sum of squares goes down, but the second term (a penalty) increases. To apply AIC to model selection, we choose the model that has the lowest AIC value. End of explanation titanic = pd.read_excel("data/titanic.xls", "titanic") titanic.name jitter = np.random.normal(scale=0.02, size=len(titanic)) plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3) yticks([0,1]) ylabel("survived") xlabel("log(fare)") Explanation: Hence, we would select the 2-parameter (linear) model. Logistic Regression Fitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous? male/female pass/fail died/survived Let's consider the problem of predicting survival in the Titanic disaster, based on our available information. For example, lets say that we want to predict survival as a function of the fare paid for the journey. 
End of explanation x = np.log(titanic.fare[titanic.fare>0]) y = titanic.survived[titanic.fare>0] betas_titanic = fmin(ss, [1,1], args=(x,y)) jitter = np.random.normal(scale=0.02, size=len(titanic)) plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3) yticks([0,1]) ylabel("survived") xlabel("log(fare)") plt.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.]) Explanation: I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale. Clearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line. End of explanation logit = lambda p: np.log(p/(1.-p)) unit_interval = np.linspace(0,1) plt.plot(unit_interval/(1-unit_interval), unit_interval) Explanation: If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side. Stochastic model Rather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. We need to consider a different probability model for this exerciese however; let's consider the Bernoulli distribution as a generative model for our data: <div style="font-size: 120%;"> $$f(y|p) = p^{y} (1-p)^{1-y}$$ </div> where $y = {0,1}$ and $p \in [0,1]$. So, this model predicts whether $y$ is zero or one as a function of the probability $p$. Notice that when $y=1$, the $1-p$ term disappears, and when $y=0$, the $p$ term disappears. So, the model we want to fit should look something like this: <div style="font-size: 120%;"> $$p_i = \beta_0 + \beta_1 x_i + \epsilon_i$$ However, since $p$ is constrained to be between zero and one, it is easy to see where a linear (or polynomial) model might predict values outside of this range. We can modify this model sligtly by using a **link function** to transform the probability to have an unbounded range on a new scale. Specifically, we can use a **logit transformation** as our link function: <div style="font-size: 120%;"> $$\text{logit}(p) = \log\left[\frac{p}{1-p}\right] = x$$ Here's a plot of $p/(1-p)$ End of explanation plt.plot(logit(unit_interval), unit_interval) Explanation: And here's the logit function: End of explanation invlogit = lambda x: 1. / (1 + np.exp(-x)) def logistic_like(theta, x, y): p = invlogit(theta[0] + theta[1] * x) # Return negative of log-likelihood return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p)) Explanation: The inverse of the logit transformation is: <div style="font-size: 150%;"> $$p = \frac{1}{1 + \exp(-x)}$$ So, now our model is: <div style="font-size: 120%;"> $$\text{logit}(p_i) = \beta_0 + \beta_1 x_i + \epsilon_i$$ We can fit this model using maximum likelihood. 
Our likelihood, again based on the Bernoulli model is: <div style="font-size: 120%;"> $$L(y|p) = \prod_{i=1}^n p_i^{y_i} (1-p_i)^{1-y_i}$$ which, on the log scale is: <div style="font-size: 120%;"> $$l(y|p) = \sum_{i=1}^n y_i \log(p_i) + (1-y_i)\log(1-p_i)$$ We can easily implement this in Python, keeping in mind that `fmin` minimizes, rather than maximizes, functions: End of explanation x, y = titanic[titanic.fare.notnull()][['fare', 'survived']].values.T Explanation: Remove null values from variables End of explanation b0,b1 = fmin(logistic_like, [0.5,0], args=(x,y)) b0, b1 jitter = np.random.normal(scale=0.01, size=len(x)) plot(x, y+jitter, 'r.', alpha=0.3) yticks([0,.25,.5,.75,1]) xvals = np.linspace(0, 600) plot(xvals, invlogit(b0+b1*xvals)) Explanation: ... and fit the model. End of explanation logistic = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit() logistic.summary() Explanation: As with our least squares model, we can easily fit logistic regression models in statsmodels, in this case using the GLM (generalized linear model) class with a binomial error distribution specified. End of explanation np.random.permutation(titanic.name)[:5] Explanation: Exercise: multivariate logistic regression Which other variables might be relevant for predicting the probability of surviving the Titanic? Generalize the model likelihood to include 2 or 3 other covariates from the dataset. Bootstrapping Parametric inference can be non-robust: inaccurate if parametric assumptions are violated if we rely on asymptotic results, we may not achieve an acceptable level of accuracy Parametric inference can be difficult: derivation of sampling distribution may not be possible An alternative is to estimate the sampling distribution of a statistic empirically without making assumptions about the form of the population. We have seen this already with the kernel density estimate. Non-parametric Bootstrap The bootstrap is a resampling method discovered by Brad Efron that allows one to approximate the true sampling distribution of a dataset, and thereby obtain estimates of the mean and variance of the distribution. Bootstrap sample: <div style="font-size: 120%;"> $$S_1^* = \{x_{11}^*, x_{12}^*, \ldots, x_{1n}^*\}$$ </div> $S_i^*$ is a sample of size $n$, with replacement. In Python, we have already seen the NumPy function permutation that can be used in conjunction with Pandas' take method to generate a random sample of some data without replacement: End of explanation random_ind = np.random.randint(0, len(titanic), 5) titanic.name[random_ind] Explanation: Similarly, we can use the random.randint method to generate a sample with replacement, which we can use when bootstrapping.
End of explanation n = 10 R = 1000 # Original sample (n=10) x = np.random.normal(size=n) # 1000 bootstrap samples of size 10 s = [x[np.random.randint(0,n,n)].mean() for i in range(R)] _ = hist(s, bins=30) Explanation: We regard S as an "estimate" of population P population : sample :: sample : bootstrap sample The idea is to generate replicate bootstrap samples: <div style="font-size: 120%;"> $$S^* = \{S_1^*, S_2^*, \ldots, S_R^*\}$$ </div> Compute statistic $t$ (estimate) for each bootstrap sample: <div style="font-size: 120%;"> $$T_i^* = t(S^*)$$ </div> End of explanation boot_mean = np.sum(s)/R boot_mean boot_var = ((np.array(s) - boot_mean) ** 2).sum() / (R-1) boot_var Explanation: Bootstrap Estimates From our bootstrapped samples, we can extract estimates of the expectation and its variance: $$\bar{T}^* = \hat{E}(T^*) = \frac{\sum_i T_i^*}{R}$$ $$\hat{\text{Var}}(T^*) = \frac{\sum_i (T_i^* - \bar{T}^*)^2}{R-1}$$ End of explanation boot_mean - np.mean(x) Explanation: Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T: $$\hat{B}^* = \bar{T}^* - T$$ End of explanation s_sorted = np.sort(s) s_sorted[:10] s_sorted[-10:] alpha = 0.05 s_sorted[[int((R+1)*alpha/2), int((R+1)*(1-alpha/2))]] Explanation: Bootstrap error There are two sources of error in bootstrap estimates: Sampling error from the selection of $S$. Bootstrap error from failing to enumerate all possible bootstrap samples. For the sake of accuracy, it is prudent to choose at least R=1000. Bootstrap Percentile Intervals An attractive feature of bootstrap statistics is the ease with which you can obtain an estimate of uncertainty for a given statistic. We simply use the empirical quantiles of the bootstrapped statistics to obtain percentiles corresponding to a confidence interval of interest. This employs the ordered bootstrap replicates: $$T_{(1)}^*, T_{(2)}^*, \ldots, T_{(R)}^*$$ Simply extract the $100(\alpha/2)$ and $100(1-\alpha/2)$ percentiles: $$T_{[(R+1)\alpha/2]}^* \lt \theta \lt T_{[(R+1)(1-\alpha/2)]}^*$$ End of explanation
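To complement the percentile-interval arithmetic above, here is a minimal, self-contained sketch (not part of the notebook row above) that computes a 95% bootstrap percentile interval for a sample mean using NumPy alone; the data, seed, and replicate count are illustrative assumptions, not values taken from the original analysis.

import numpy as np

# Minimal bootstrap percentile interval for the sample mean (illustrative data).
rng = np.random.default_rng(42)
data = rng.normal(loc=0.5, scale=2.0, size=100)

R = 1000                                   # number of bootstrap replicates
n = len(data)
replicates = np.empty(R)
for i in range(R):
    resample = data[rng.integers(0, n, size=n)]   # draw n values with replacement
    replicates[i] = resample.mean()

# Empirical 2.5% and 97.5% quantiles give the 95% percentile interval.
lower, upper = np.percentile(replicates, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lower:.3f}, {upper:.3f})")

Using np.percentile sidesteps the integer-index bookkeeping of the ordered-replicate formula while yielding essentially the same interval, up to interpolation.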
2,473
Given the following text description, write Python code to implement the functionality described below step by step Description: Autoregressions This notebook introduces autoregression modeling using the AutoReg model. It also covers aspects of ar_select_order assists in selecting models that minimize an information criteria such as the AIC. An autoregressive model has dynamics given by $$ y_t = \delta + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \epsilon_t. $$ AutoReg also permits models with Step1: This cell sets the plotting style, registers pandas date converters for matplotlib, and sets the default figure size. Step2: The first set of examples uses the month-over-month growth rate in U.S. Housing starts that has not been seasonally adjusted. The seasonality is evident by the regular pattern of peaks and troughs. We set the frequency for the time series to "MS" (month-start) to avoid warnings when using AutoReg. Step3: We can start with an AR(3). While this is not a good model for this data, it demonstrates the basic use of the API. Step4: AutoReg supports the same covariance estimators as OLS. Below, we use cov_type="HC0", which is White's covariance estimator. While the parameter estimates are the same, all of the quantities that depend on the standard error change. Step5: plot_predict visualizes forecasts. Here we produce a large number of forecasts which show the string seasonality captured by the model. Step6: plot_diagnositcs indicates that the model captures the key features in the data. Step7: Seasonal Dummies AutoReg supports seasonal dummies which are an alternative way to model seasonality. Including the dummies shortens the dynamics to only an AR(2). Step8: The seasonal dummies are obvious in the forecasts which has a non-trivial seasonal component in all periods 10 years in to the future. Step9: Seasonal Dynamics While AutoReg does not directly support Seasonal components since it uses OLS to estimate parameters, it is possible to capture seasonal dynamics using an over-parametrized Seasonal AR that does not impose the restrictions in the Seasonal AR. Step10: We start by selecting a model using the simple method that only chooses the maximum lag. All lower lags are automatically included. The maximum lag to check is set to 13 since this allows the model to next a Seasonal AR that has both a short-run AR(1) component and a Seasonal AR(1) component, so that $$ (1-\phi_s L^{12})(1-\phi_1 L)y_t = \epsilon_t $$ which becomes $$ y_t = \phi_1 y_{t-1} +\phi_s Y_{t-12} - \phi_1\phi_s Y_{t-13} + \epsilon_t $$ when expanded. AutoReg does not enforce the structure, but can estimate the nesting model $$ y_t = \phi_1 y_{t-1} +\phi_{12} Y_{t-12} - \phi_{13} Y_{t-13} + \epsilon_t. $$ We see that all 13 lags are selected. Step11: It seems unlikely that all 13 lags are required. We can set glob=True to search all $2^{13}$ models that include up to 13 lags. Here we see that the first three are selected, as is the 7th, and finally the 12th and 13th are selected. This is superficially similar to the structure described above. After fitting the model, we take a look at the diagnostic plots that indicate that this specification appears to be adequate to capture the dynamics in the data. Step12: We can also include seasonal dummies. These are all insignificant since the model is using year-over-year changes. Step13: Industrial Production We will use the industrial production index data to examine forecasting. Step14: We will start by selecting a model using up to 12 lags. 
An AR(13) minimizes the BIC criteria even though many coefficients are insignificant. Step15: We can also use a global search which allows longer lags to enter if needed without requiring the shorter lags. Here we see many lags dropped. The model indicates there may be some seasonality in the data. Step16: plot_predict can be used to produce forecast plots along with confidence intervals. Here we produce forecasts starting at the last observation and continuing for 18 months. Step17: The forecasts from the full model and the restricted model are very similar. I also include an AR(5) which has very different dynamics. Step18: The diagnostics indicate the model captures most of the dynamics in the data. The ACF shows a pattern at the seasonal frequency and so a more complete seasonal model (SARIMAX) may be needed. Step19: Forecasting Forecasts are produced using the predict method from a results instance. The default produces static forecasts which are one-step forecasts. Producing multi-step forecasts requires using dynamic=True. In this next cell, we produce 12-step-ahead forecasts for the final 24 periods in the sample. This requires a loop. Note: These are technically in-sample since the data we are forecasting was used to estimate parameters. Producing OOS forecasts requires two models. The first must exclude the OOS period. The second uses the predict method from the full-sample model with the parameters from the shorter sample model that excluded the OOS period. Step20: Comparing to SARIMAX SARIMAX is an implementation of a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors model. It supports: Step21: Custom Deterministic Processes The deterministic parameter allows a custom DeterministicProcess to be used. This allows for more complex deterministic terms to be constructed, for example one that includes seasonal components with two periods, or, as the next example shows, one that uses a Fourier series rather than seasonal dummies.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import pandas_datareader as pdr import seaborn as sns from statsmodels.tsa.api import acf, graphics, pacf from statsmodels.tsa.ar_model import AutoReg, ar_select_order Explanation: Autoregressions This notebook introduces autoregression modeling using the AutoReg model. It also covers aspects of ar_select_order assists in selecting models that minimize an information criteria such as the AIC. An autoregressive model has dynamics given by $$ y_t = \delta + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \epsilon_t. $$ AutoReg also permits models with: Deterministic terms (trend) n: No deterministic term c: Constant (default) ct: Constant and time trend t: Time trend only Seasonal dummies (seasonal) True includes $s-1$ dummies where $s$ is the period of the time series (e.g., 12 for monthly) Custom deterministic terms (deterministic) Accepts a DeterministicProcess Exogenous variables (exog) A DataFrame or array of exogenous variables to include in the model Omission of selected lags (lags) If lags is an iterable of integers, then only these are included in the model. The complete specification is $$ y_t = \delta_0 + \delta_1 t + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \sum_{i=1}^{s-1} \gamma_i d_i + \sum_{j=1}^{m} \kappa_j x_{t,j} + \epsilon_t. $$ where: $d_i$ is a seasonal dummy that is 1 if $mod(t, period) = i$. Period 0 is excluded if the model contains a constant (c is in trend). $t$ is a time trend ($1,2,\ldots$) that starts with 1 in the first observation. $x_{t,j}$ are exogenous regressors. Note these are time-aligned to the left-hand-side variable when defining a model. $\epsilon_t$ is assumed to be a white noise process. This first cell imports standard packages and sets plots to appear inline. End of explanation sns.set_style("darkgrid") pd.plotting.register_matplotlib_converters() # Default figure size sns.mpl.rc("figure", figsize=(16, 6)) sns.mpl.rc("font", size=14) Explanation: This cell sets the plotting style, registers pandas date converters for matplotlib, and sets the default figure size. End of explanation data = pdr.get_data_fred("HOUSTNSA", "1959-01-01", "2019-06-01") housing = data.HOUSTNSA.pct_change().dropna() # Scale by 100 to get percentages housing = 100 * housing.asfreq("MS") fig, ax = plt.subplots() ax = housing.plot(ax=ax) Explanation: The first set of examples uses the month-over-month growth rate in U.S. Housing starts that has not been seasonally adjusted. The seasonality is evident by the regular pattern of peaks and troughs. We set the frequency for the time series to "MS" (month-start) to avoid warnings when using AutoReg. End of explanation mod = AutoReg(housing, 3, old_names=False) res = mod.fit() print(res.summary()) Explanation: We can start with an AR(3). While this is not a good model for this data, it demonstrates the basic use of the API. End of explanation res = mod.fit(cov_type="HC0") print(res.summary()) sel = ar_select_order(housing, 13, old_names=False) sel.ar_lags res = sel.model.fit() print(res.summary()) Explanation: AutoReg supports the same covariance estimators as OLS. Below, we use cov_type="HC0", which is White's covariance estimator. While the parameter estimates are the same, all of the quantities that depend on the standard error change. End of explanation fig = res.plot_predict(720, 840) Explanation: plot_predict visualizes forecasts. Here we produce a large number of forecasts which show the string seasonality captured by the model. 
End of explanation fig = plt.figure(figsize=(16, 9)) fig = res.plot_diagnostics(fig=fig, lags=30) Explanation: plot_diagnositcs indicates that the model captures the key features in the data. End of explanation sel = ar_select_order(housing, 13, seasonal=True, old_names=False) sel.ar_lags res = sel.model.fit() print(res.summary()) Explanation: Seasonal Dummies AutoReg supports seasonal dummies which are an alternative way to model seasonality. Including the dummies shortens the dynamics to only an AR(2). End of explanation fig = res.plot_predict(720, 840) fig = plt.figure(figsize=(16, 9)) fig = res.plot_diagnostics(lags=30, fig=fig) Explanation: The seasonal dummies are obvious in the forecasts which has a non-trivial seasonal component in all periods 10 years in to the future. End of explanation yoy_housing = data.HOUSTNSA.pct_change(12).resample("MS").last().dropna() _, ax = plt.subplots() ax = yoy_housing.plot(ax=ax) Explanation: Seasonal Dynamics While AutoReg does not directly support Seasonal components since it uses OLS to estimate parameters, it is possible to capture seasonal dynamics using an over-parametrized Seasonal AR that does not impose the restrictions in the Seasonal AR. End of explanation sel = ar_select_order(yoy_housing, 13, old_names=False) sel.ar_lags Explanation: We start by selecting a model using the simple method that only chooses the maximum lag. All lower lags are automatically included. The maximum lag to check is set to 13 since this allows the model to next a Seasonal AR that has both a short-run AR(1) component and a Seasonal AR(1) component, so that $$ (1-\phi_s L^{12})(1-\phi_1 L)y_t = \epsilon_t $$ which becomes $$ y_t = \phi_1 y_{t-1} +\phi_s Y_{t-12} - \phi_1\phi_s Y_{t-13} + \epsilon_t $$ when expanded. AutoReg does not enforce the structure, but can estimate the nesting model $$ y_t = \phi_1 y_{t-1} +\phi_{12} Y_{t-12} - \phi_{13} Y_{t-13} + \epsilon_t. $$ We see that all 13 lags are selected. End of explanation sel = ar_select_order(yoy_housing, 13, glob=True, old_names=False) sel.ar_lags res = sel.model.fit() print(res.summary()) fig = plt.figure(figsize=(16, 9)) fig = res.plot_diagnostics(fig=fig, lags=30) Explanation: It seems unlikely that all 13 lags are required. We can set glob=True to search all $2^{13}$ models that include up to 13 lags. Here we see that the first three are selected, as is the 7th, and finally the 12th and 13th are selected. This is superficially similar to the structure described above. After fitting the model, we take a look at the diagnostic plots that indicate that this specification appears to be adequate to capture the dynamics in the data. End of explanation sel = ar_select_order(yoy_housing, 13, glob=True, seasonal=True, old_names=False) sel.ar_lags res = sel.model.fit() print(res.summary()) Explanation: We can also include seasonal dummies. These are all insignificant since the model is using year-over-year changes. End of explanation data = pdr.get_data_fred("INDPRO", "1959-01-01", "2019-06-01") ind_prod = data.INDPRO.pct_change(12).dropna().asfreq("MS") _, ax = plt.subplots(figsize=(16, 9)) ind_prod.plot(ax=ax) Explanation: Industrial Production We will use the industrial production index data to examine forecasting. End of explanation sel = ar_select_order(ind_prod, 13, "bic", old_names=False) res = sel.model.fit() print(res.summary()) Explanation: We will start by selecting a model using up to 12 lags. An AR(13) minimizes the BIC criteria even though many coefficients are insignificant. 
End of explanation sel = ar_select_order(ind_prod, 13, "bic", glob=True, old_names=False) sel.ar_lags res_glob = sel.model.fit() print(res.summary()) Explanation: We can also use a global search which allows longer lags to enter if needed without requiring the shorter lags. Here we see many lags dropped. The model indicates there may be some seasonality in the data. End of explanation ind_prod.shape fig = res_glob.plot_predict(start=714, end=732) Explanation: plot_predict can be used to produce forecast plots along with confidence intervals. Here we produce forecasts starting at the last observation and continuing for 18 months. End of explanation res_ar5 = AutoReg(ind_prod, 5, old_names=False).fit() predictions = pd.DataFrame( { "AR(5)": res_ar5.predict(start=714, end=726), "AR(13)": res.predict(start=714, end=726), "Restr. AR(13)": res_glob.predict(start=714, end=726), } ) _, ax = plt.subplots() ax = predictions.plot(ax=ax) Explanation: The forecasts from the full model and the restricted model are very similar. I also include an AR(5) which has very different dynamics End of explanation fig = plt.figure(figsize=(16, 9)) fig = res_glob.plot_diagnostics(fig=fig, lags=30) Explanation: The diagnostics indicate the model captures most of the the dynamics in the data. The ACF shows a patters at the seasonal frequency and so a more complete seasonal model (SARIMAX) may be needed. End of explanation import numpy as np start = ind_prod.index[-24] forecast_index = pd.date_range(start, freq=ind_prod.index.freq, periods=36) cols = ["-".join(str(val) for val in (idx.year, idx.month)) for idx in forecast_index] forecasts = pd.DataFrame(index=forecast_index, columns=cols) for i in range(1, 24): fcast = res_glob.predict( start=forecast_index[i], end=forecast_index[i + 12], dynamic=True ) forecasts.loc[fcast.index, cols[i]] = fcast _, ax = plt.subplots(figsize=(16, 10)) ind_prod.iloc[-24:].plot(ax=ax, color="black", linestyle="--") ax = forecasts.plot(ax=ax) Explanation: Forecasting Forecasts are produced using the predict method from a results instance. The default produces static forecasts which are one-step forecasts. Producing multi-step forecasts requires using dynamic=True. In this next cell, we produce 12-step-heard forecasts for the final 24 periods in the sample. This requires a loop. Note: These are technically in-sample since the data we are forecasting was used to estimate parameters. Producing OOS forecasts requires two models. The first must exclude the OOS period. The second uses the predict method from the full-sample model with the parameters from the shorter sample model that excluded the OOS period. End of explanation from statsmodels.tsa.api import SARIMAX sarimax_mod = SARIMAX(ind_prod, order=((1, 5, 12, 13), 0, 0), trend="c") sarimax_res = sarimax_mod.fit() print(sarimax_res.summary()) sarimax_params = sarimax_res.params.iloc[:-1].copy() sarimax_params.index = res_glob.params.index params = pd.concat([res_glob.params, sarimax_params], axis=1, sort=False) params.columns = ["AutoReg", "SARIMAX"] params Explanation: Comparing to SARIMAX SARIMAX is an implementation of a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors model. It supports: Specification of seasonal and nonseasonal AR and MA components Inclusion of Exogenous variables Full maximum-likelihood estimation using the Kalman Filter This model is more feature rich than AutoReg. Unlike SARIMAX, AutoReg estimates parameters using OLS. 
This is faster and the problem is globally convex, and so there are no issues with local minima. The closed-form estimator and its performance are the key advantages of AutoReg over SARIMAX when comparing AR(P) models. AutoReg also supports seasonal dummies, which can be used with SARIMAX if the user includes them as exogenous regressors. End of explanation from statsmodels.tsa.deterministic import DeterministicProcess dp = DeterministicProcess(housing.index, constant=True, period=12, fourier=2) mod = AutoReg(housing, 2, trend="n", seasonal=False, deterministic=dp) res = mod.fit() print(res.summary()) fig = res.plot_predict(720, 840) Explanation: Custom Deterministic Processes The deterministic parameter allows a custom DeterministicProcess to be used. This allows for more complex deterministic terms to be constructed, for example one that includes seasonal components with two periods, or, as the next example shows, one that uses a Fourier series rather than seasonal dummies. End of explanation
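The forecasting note in the row above describes producing genuine out-of-sample forecasts with two models; the following is a hedged sketch of one way to do that with AutoReg, assuming the model-level predict method accepts a parameter vector (as statsmodels time-series models generally do). The series, lag order, and hold-out length are illustrative assumptions rather than values from the example above.

import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

# Illustrative monthly series; any pandas Series with a set frequency works.
idx = pd.date_range("2000-01-01", periods=240, freq="MS")
rng = np.random.default_rng(0)
series = pd.Series(np.cumsum(rng.normal(size=240)), index=idx)

holdout = 24
train = series.iloc[:-holdout]

# Model 1: estimated without the hold-out period.
train_res = AutoReg(train, lags=3).fit()

# Model 2: defined on the full sample but evaluated at the training parameters,
# so forecasts over the hold-out period never use that period for estimation.
full_mod = AutoReg(series, lags=3)
oos_forecast = full_mod.predict(
    train_res.params, start=len(train), end=len(series) - 1, dynamic=True
)
print(oos_forecast.tail())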
2,474
Given the following text description, write Python code to implement the functionality described below step by step Description: Completing final_df Step1: 1. Load only critic Lee Dong-jin's ratings and comments (new_df3) Step2: The features we finally want to extract (for displaying the dataframe), and the y value
Python Code: import pandas as pd df1 = pd.read_csv('../resource/raw_df1.csv') df2 = pd.read_csv('../resource/raw_df2.csv') df3 = pd.read_csv('../resource/lee_df.csv') df1.tail(1) df2.tail(1) df3.tail(1) Explanation: Completing final_df End of explanation lee = df3['name'] == '이동진 평론가' lee lee_df = df3[lee] new_df3 = pd.concat([df3, lee_df], axis=1).ix[:,4:] new_df3 new_df3.describe() Explanation: 1. Load only critic Lee Dong-jin's ratings and comments (new_df3) End of explanation preprocess_df1 = pd.concat([df1.ix[:,:3], new_df3.ix[:,:1], df2.ix[:,3:6], df2.ix[:,1:3], df1.ix[:,4:], df2.ix[:,6:], new_df3.ix[:,1:]], axis=1) preprocess_df1.tail(1) preprocess_df1.to_csv('../resource/preprocess_df1.csv', index=False, encoding='utf8') Explanation: The features we finally want to extract (for displaying the dataframe) y: my rating (y) X: movie, average rating, Lee Dong-jin's rating, number of raters, number of "want to see" votes, number of comments, director, actors, age rating, genre, country, running time, year, rating distribution, Lee Dong-jin's comment End of explanation
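A caveat on the row above: it relies on DataFrame.ix, which was deprecated in pandas 0.20 and removed in 1.0. The sketch below uses a stand-in frame and column positions (not the original CSV data) to show the positional .iloc equivalent; label-based uses of .ix would map to .loc instead.

import pandas as pd

# Stand-in frame; the original resource CSVs are not reproduced here.
df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8], "e": [9, 10]})

left_block = df.iloc[:, :3]    # positional equivalent of df.ix[:, :3]
right_block = df.iloc[:, 4:]   # positional equivalent of df.ix[:, 4:]
combined = pd.concat([left_block, right_block], axis=1)
print(combined.tail(1))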
2,475
Given the following text description, write Python code to implement the functionality described below step by step Description: Part of Speech Tagging The part-of-speech tagging task aims to assign every word/token in plain text a category that identifies the syntactic functionality of the word occurrence. Polyglot recognizes 17 parts of speech; this set is called the universal part of speech tag set: Step1: Download Necessary Models Step2: Example We tag each word in the text with one part of speech. Step3: We can query all the tagged words. Step4: After calling the pos_tags property once, the words objects will carry the POS tags. Step5: Command Line Interface
Python Code: from polyglot.downloader import downloader print(downloader.supported_languages_table("pos2")) Explanation: Part of Speech Tagging The part-of-speech tagging task aims to assign every word/token in plain text a category that identifies the syntactic functionality of the word occurrence. Polyglot recognizes 17 parts of speech; this set is called the universal part of speech tag set: ADJ: adjective ADP: adposition ADV: adverb AUX: auxiliary verb CONJ: coordinating conjunction DET: determiner INTJ: interjection NOUN: noun NUM: numeral PART: particle PRON: pronoun PROPN: proper noun PUNCT: punctuation SCONJ: subordinating conjunction SYM: symbol VERB: verb X: other Languages Coverage The models were trained on a combination of: Original CONLL datasets after the tags were converted using the universal POS tables. Universal Dependencies 1.0 corpora whenever they are available. End of explanation %%bash polyglot download embeddings2.en pos2.en Explanation: Download Necessary Models End of explanation from polyglot.text import Text blob = "We will meet at eight o'clock on Thursday morning." text = Text(blob) Explanation: Example We tag each word in the text with one part of speech. End of explanation text.pos_tags Explanation: We can query all the tagged words. End of explanation text.words[0].pos_tag Explanation: After calling the pos_tags property once, the words objects will carry the POS tags. End of explanation !polyglot --lang en tokenize --input testdata/cricket.txt | polyglot --lang en pos | tail -n 30 Explanation: Command Line Interface End of explanation
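As a small supplement to the tagging example above, the sketch below counts tag frequencies from pos_tags output. It assumes the English embeddings and POS models from the download cell are present; if they are not, it falls back to a hand-written, hypothetical list of (word, tag) pairs so the snippet still runs.

from collections import Counter

try:
    from polyglot.text import Text
    tagged = Text("We will meet at eight o'clock on Thursday morning.").pos_tags
except Exception:
    # Fallback so the sketch runs without the downloaded models.
    tagged = [("We", "PRON"), ("will", "AUX"), ("meet", "VERB"),
              ("on", "ADP"), ("Thursday", "PROPN"), ("morning", "NOUN")]

tag_counts = Counter(tag for _, tag in tagged)
for tag, count in tag_counts.most_common():
    print(f"{tag:6s} {count}")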
2,476
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'thu', 'ciesm', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: THU Source ID: CIESM Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:40 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
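All of the property cells above follow the same two-step pattern: DOC.set_id() selects a property of the CMIP6 top-level specialization and DOC.set_value() records an answer for it. As a minimal illustration only — the particular values below are assumptions chosen for demonstration, not answers taken from this document — a completed pair of cells for the CO2 forcing properties could look like the sketch below, assuming DOC is the document object created by the notebook's set-up cells.
# Illustrative sketch only: the values chosen here are assumptions, not part of
# the original notebook. "C" is one of the valid provision choices listed above.
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
DOC.set_value("C")
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
DOC.set_value("CO2 concentrations are prescribed from the standard CMIP6 historical forcing dataset.")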
2,477
Given the following text description, write Python code to implement the functionality described below step by step Description: <header class="w3-container w3-teal"> <img src="images/utfsm.png" alt="" align="left"/> <img src="images/inf.png" alt="" align="right"/> </header> <br/><br/><br/><br/><br/> IWI131 Programación de Computadores Sebastián Flores http Step1: ¿Que pasa si necesitamos cambiar el texto? ¿O aumentar la precisión de $\pi$? Ejemplo motivador Calcule el área de circulos con radios Step2: Las funciones permiten encapsular código, permitiendo modificarlo más facilmente y reutilizarlo. Funciones Una función (en informática) es una agrupación de acciones. Nada más. Una función (en informática) no es una función matemática Step3: Funciones Una función (en informática) no es una función matemática Step4: Problema La fuerza de atracción gravitacional entre 2 planetas de masas $m_1$ y $m_2$ separados por una distancia de $r$ kilómetros esta dada por la fórmula Step5: Uso de funciones El cálculo de la fuerza de atracción gravitacional puede ser encapsulado en una función, para poder ser utilizado en otras oportunidades y modificar el código en un sólo lugar. Step6: Funciones Ejercicio 1 - Promedio Escriba la función promedio(n1,n2,n3) que reciba las notas de los 3 certamenes del ramo y retorne el valor del promedio final. promedio(100,100,100) # Debería regresar el valor 100 promedio(80, 66, 31) # Debería regresar el valor 59 promedio(0, 0, 0) # Debería regresar el valor 0 Step7: Funciones Ejercicio 1 - Promedio Escriba la función promedio(n1,n2,n3) que reciba las notas de los 3 certamenes del ramo y retorne el valor del promedio final. promedio(100,100,100) # Debería regresar el valor 100 promedio(80, 66, 31) # Debería regresar el valor 59.0 promedio(0, 0, 0) # Debería regresar el valor 0 Step8: Siempre hay que probar más casos de los otorgados. Siempre tener cuidado con la mezcla de int y float Funciones Ejercicio 1 - Promedio Escriba la función promedio(n1,n2,n3) que reciba las notas de los 3 certamenes del ramo y retorne el valor del promedio final. promedio(100,100,100) # Debería regresar el valor 100 promedio(80, 66, 31) # Debería regresar el valor 59 promedio(0, 0, 0) # Debería regresar el valor 0 Step9: Conceptos def nombre_de_mi_funcion(var_1, var_2, ..., var_n) Step10: Funciones Variables Globales vs Variables Locales Cuando se ejecuta una función, se crea un "espacio protegido" en el cual se ejecutan las acciones de la función. * Las variables que existen antes de la ejecución de la función se llaman variables globales * Las variables que se pasan a la función o que se crean en el contexto de la función, se llaman variables locales (a la función). * Todo lo que no se regrese al final de la función se perderá. * Las variables locales desaparecerán luego de la ejecución de la función. * Los tipos básicos que se pasen a la función no se ven alterados por la función Variables Globales vs Variables Locales Ejemplo ¿Que pasa en el siguiente código?
Python Code: r = 0.2 area = 3.14*r**2 print "Circulo de radio", r, "[m] tiene area", area, "[m2]" r = 1.0 area = 3.14*r**2 print "Circulo de radio", r, "[m] tiene area", area, "[m2]" r = 42.0 area = 3.14*r**2 print "Circulo de radio", r, "[m] tiene area", area, "[m2]" Explanation: <header class="w3-container w3-teal"> <img src="images/utfsm.png" alt="" align="left"/> <img src="images/inf.png" alt="" align="right"/> </header> <br/><br/><br/><br/><br/> IWI131 Programación de Computadores Sebastián Flores http://progra.usm.cl/ https://www.github.com/sebastiandres/iwi131 Clase pasada Asignación y Expresión Entrada y Salida de Datos Tipos de Datos ¿Qué contenido aprenderemos hoy? Trabajar en python Funciones Ruteo ¿Por qué aprenderemos ese contenido? Trabajar en python Porque ya tienen suficientes herramientas para comenzar a escribir (y tener problemas). Funciones Porque permiten reutilizar código, reduciendo el largo de un programa computacional y disminuyendo posibilidad de errores. Ruteo Porque para hacer trabajar al computador necesitamos saber como funciona. Ejemplo motivador Calcule el área de circulos con radios: 0.2 [m], 1.0 [m], 42.0 [m]. Imprima el radio y el área obtenida Output deseado: Circulo de radio 0.2 [m] tiene area 0.1256 [m2] Circulo de radio 1.0 [m] tiene area 3.14 [m2] Circulo de radio 42.0 [m] tiene area 5538.96 [m2] Ejemplo motivador Calcule el área de circulos con radios: 0.2 [m], 1.0 [m], 42.0 [m]. Imprima el radio y el área obtenida End of explanation def area(r): area = 3.14159236 * r**2 print "Circulo de radio", r, "[m] tiene area", area, "[m2]" return area(0.2) area(1.0) area(42.0) Explanation: ¿Que pasa si necesitamos cambiar el texto? ¿O aumentar la precisión de $\pi$? Ejemplo motivador Calcule el área de circulos con radios: 0.2 [m], 1.0 [m], 42.0 [m]. Imprima el radio y el área obtenida End of explanation def f(x): return int(x), 2*x-1, 3*int(x), x>0, "Cool" val = f(0.0) print val Explanation: Las funciones permiten encapsular código, permitiendo modificarlo más facilmente y reutilizarlo. Funciones Una función (en informática) es una agrupación de acciones. Nada más. Una función (en informática) no es una función matemática: puede regresar más de un valor, puede regresar cualquier tipo de dato o no regresar valores. End of explanation # Es primo no es una función matematica def es_primo(n): n_es_primo = all(n%j!=0 for j in range(1,int(n**0.5))) if n_es_primo: print n, "es primo" else: print n, "no es primo" es_primo(10000) Explanation: Funciones Una función (en informática) no es una función matemática: * Puede regresar más de un valor o no regresar valores. * Puede regresar cualquier tipo de dato. * Puede llamarse a sí misma o a otras funciones. End of explanation # constante de gravitacion universal G = 6.67428e-11 m1 = float(raw_input('m1 [kg]: ')) m2 = float(raw_input('m2 [kg]: ')) r = float(raw_input('Distancia [m]: ')) F = G * m1 * m2 / (r ** 2) print 'La fuerza de atraccion es', F, "[N]" Explanation: Problema La fuerza de atracción gravitacional entre 2 planetas de masas $m_1$ y $m_2$ separados por una distancia de $r$ kilómetros esta dada por la fórmula: $$F = G \frac{m_1 m_2}{r^2}$$ donde $G=6.67428 \cdot 10^{-11}$ [m3, kg-1, s-2]. Problema Escriba una función que pregunte las masas de los planetas y su distancia, y entregue la fuerza de atracción entre ellos. 
End of explanation def cgu(masa1, masa2, radio): G = 6.67428e-11 G * masa1 * masa2 / (radio ** 2) return m1 = float(raw_input('m1: ')) m2 = float(raw_input('m2: ')) r = float(raw_input('Distancia: ')) print 'La fuerza de atraccion es', cgu(m1, m2, r) Explanation: Uso de funciones El cálculo de la fuerza de atracción gravitacional puede ser encapsulado en una función, para poder ser utilizado en otras oportunidades y modificar el código en un sólo lugar. End of explanation def promedio(n1, n2, n3): prom=(n1+n2+n3)/3.0 return prom print promedio(100,100,100) # Debería regresar 100 print promedio(80, 66, 31) # Debería regresar 59 print promedio(0, 0., 1.) # Debería regresar 0.333333 print promedio(0, 0, 0) # Debería regresar 0 Explanation: Funciones Ejercicio 1 - Promedio Escriba la función promedio(n1,n2,n3) que reciba las notas de los 3 certamenes del ramo y retorne el valor del promedio final. promedio(100,100,100) # Debería regresar el valor 100 promedio(80, 66, 31) # Debería regresar el valor 59 promedio(0, 0, 0) # Debería regresar el valor 0 End of explanation def promedio(n1, n2, n3): suma = n1 + n2 + n3 prom = suma/3. return prom print promedio(100,100,100) # Debería regresar 100 print promedio(80, 66, 31) # Debería regresar el valor 59 print promedio(0, 0, 1) # Debería regresar 0.33333333 print promedio(0, 0, 0) # Debería regresar 0 Explanation: Funciones Ejercicio 1 - Promedio Escriba la función promedio(n1,n2,n3) que reciba las notas de los 3 certamenes del ramo y retorne el valor del promedio final. promedio(100,100,100) # Debería regresar el valor 100 promedio(80, 66, 31) # Debería regresar el valor 59.0 promedio(0, 0, 0) # Debería regresar el valor 0 End of explanation def promedio(n1, n2, n3): return (n1 + n2 + n3)/3.0 print promedio(100,100,100) # Debería regresar 100 print promedio(80, 66, 31) # Debería regresar 59 print promedio(0, 0, 0) # Debería regresar 0 Explanation: Siempre hay que probar más casos de los otorgados. Siempre tener cuidado con la mezcla de int y float Funciones Ejercicio 1 - Promedio Escriba la función promedio(n1,n2,n3) que reciba las notas de los 3 certamenes del ramo y retorne el valor del promedio final. promedio(100,100,100) # Debería regresar el valor 100 promedio(80, 66, 31) # Debería regresar el valor 59 promedio(0, 0, 0) # Debería regresar el valor 0 End of explanation def que_hace(a,b): print a*b print a + b return a + b + a*b A = que_hace(1.,2) #print a Explanation: Conceptos def nombre_de_mi_funcion(var_1, var_2, ..., var_n): # Definicion de parámetros # Inicializacion de variables # Calculo de expresiones y valores return val_1, val_2, ..., val_m var_1, var_2, ..., var_n son los argumentos de la función. No es necesario definir que tipo de dato se pasa a la función. Para entregar al usuario información: Imprimir valores con print Regresar los valores Funciones Ejemplo End of explanation def mi_funcion(a,b,c): print "a =", a, "; b =", b, "; c =", c a = b b = c c = True print "a =", a, "; b =", b, "; c =", c return a, b, c x = 1.0 y = "hola" z = 42 print "x =", x, "; y =", y, "; z =", z X, Y, Z = mi_funcion(x,y, z) print "x =", x, "; y =", y, "; z =", z print "X =", X, "; Y =", Y, "; Z =", Z #print "a=", a, "; b=", b, "; c=", c Explanation: Funciones Variables Globales vs Variables Locales Cuando se ejecuta una función, se crea un "espacio protegido" en el cual se ejecutan las acciones de la función. 
* Las variables que existen antes de la ejecución de la función se llaman variables globales * Las variables que se pasan a la función o que se crean en el contexto de la función, se llaman variables locales (a la función). * Todo lo que no se regrese al final de la función se perderá. * Las variables locales desaparecerán luego de la ejecución de la función. * Los tipos básicos que se pasen a la función no se ven alterados por la función Variables Globales vs Variables Locales Ejemplo ¿Que pasa en el siguiente código? End of explanation
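A small additional sketch (my own illustration, not taken from the course slides) reinforcing the last bullet above: basic types passed into a function are not altered in the caller's scope, because the function works on its own local copy. The function name duplicar is a hypothetical example.
# Hypothetical example (not from the slides): the global x is unchanged.
def duplicar(n):
    n = n * 2        # n is a local variable inside the function
    return n

x = 10
print duplicar(x)    # 20
print x              # 10 -> the global variable keeps its value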
2,478
Given the following text description, write Python code to implement the functionality described below step by step Description: You are trying to measure a difference in the $K_{D}$ of two proteins binding to a ligand. From previous experiments, you know that the values of replicate measurements of $K_{D}$ follow a normal distribution with $\sigma = 2\ \mu M$. How many measurements would you need to make to confidently tell the difference between two proteins with $K_{D} = 10 \mu M$ and $K_{D} = 12 \mu M$? Goals Know how to use basic numpy.random functions to sample from distributions Begin to understand how to write a simulation to probe possible experimental outcomes Create a new notebook with this cell at the top Step1: Figure out how to use np.random.choice to simulate 1,000 tosses of a fair coin np.random uses a "pseudorandom" number generator to simulate choices String of numbers that has the same statistical properties as random numbers Numbers are actually generated deterministically Numbers look random... Step2: But numbers are actually deterministic... Step3: python uses the Mersenne Twister to generate pseudorandom numbers What does the seed do? Step4: What will we see if I run this cell twice in a row? Step5: What will we see if I run this cell twice in a row? Step6: A seed lets you specify which pseudo-random numbers you will use. If you use the same seed, you will get identical samples. If you use a different seed, you will get wildly different samples. matplotlib.pyplot.hist Step7: Basic histogram plotting syntax python COUNTS, BIN_EDGES, GRAPHICS_BIT = plt.hist(ARRAY_TO_BIN,BINS_TO_USE) Figure out how the function works and report back to the class What the function does Arguments normal people would care about What it returns
Python Code: %matplotlib inline import numpy as np from matplotlib import pyplot as plt Explanation: You are trying to measure a difference in the $K_{D}$ of two proteins binding to a ligand. From previous experiments, you know that the values of replicate measurements of $K_{D}$ follow a normal distribution with $\sigma = 2\ \mu M$. How many measurements would you need to make to confidently tell the difference between two proteins with $K_{D} = 10 \mu M$ and $K_{D} = 12 \mu M$? Goals Know how to use basic numpy.random functions to sample from distributions Begin to understand how to write a simulation to probe possible experimental outcomes Create a new notebook with this cell at the top End of explanation numbers = np.random.random(100000) plt.hist(numbers) Explanation: Figure out how to use np.random.choice to simulate 1,000 tosses of a fair coin np.random uses a "pseudorandom" number generator to simulate choices String of numbers that has the same statistical properties as random numbers Numbers are actually generated deterministically Numbers look random... End of explanation def simple_psuedo_random(current_value, multiplier=13110243, divisor=13132): return current_value*multiplier % divisor seed = 10218888 out = [] current = seed for i in range(1000): current = simple_psuedo_random(current) out.append(current) plt.hist(out) Explanation: But numbers are actually deterministic... End of explanation seed = 1021888 out = [] current = seed for i in range(1000): current = simple_psuedo_random(current) out.append(current) Explanation: python uses the Mersenne Twister to generate pseudorandom numbers What does the seed do? End of explanation s1 = np.random.random(10) print(s1) Explanation: What will we see if I run this cell twice in a row? End of explanation np.random.seed(5235412) s1 = np.random.random(10) print(s1) Explanation: What will we see if I run this cell twice in a row? End of explanation numbers = np.random.normal(size=10000) counts, bins, junk = plt.hist(numbers, range(-10,10)) Explanation: A seed lets you specify which pseudo-random numbers you will use. If you use the same seed, you will get identical samples. If you use a different seed, you will get wildly different samples. matplotlib.pyplot.hist End of explanation np.random.normal np.random.binomial np.random.uniform np.random.poisson np.random.choice np.random.shuffle Explanation: Basic histogram plotting syntax python COUNTS, BIN_EDGES, GRAPHICS_BIT = plt.hist(ARRAY_TO_BIN,BINS_TO_USE) Figure out how the function works and report back to the class What the function does Arguments normal people would care about What it returns End of explanation
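The excerpt above introduces np.random sampling but stops short of answering the opening question (how many replicate measurements are needed to tell K_D = 10 μM from K_D = 12 μM when σ = 2 μM). One possible simulation sketch is shown below; it is my own illustration rather than the assignment's intended solution, and it assumes scipy is available for the t-test.
# Hedged sketch: estimate how often n measurements per protein let us detect the
# 2 uM difference at the 5% significance level. Not the official solution.
import numpy as np
from scipy import stats  # assumed available in this environment

def detection_rate(n, kd1=10.0, kd2=12.0, sigma=2.0, n_trials=1000, alpha=0.05):
    detected = 0
    for _ in range(n_trials):
        a = np.random.normal(kd1, sigma, n)   # simulated replicate K_D values
        b = np.random.normal(kd2, sigma, n)
        t_stat, p_value = stats.ttest_ind(a, b)
        if p_value < alpha:
            detected += 1
    return detected / float(n_trials)

for n in (3, 5, 10, 20):
    print(n, detection_rate(n))
The smallest n whose detection rate climbs past roughly 0.8–0.9 is a reasonable answer to the opening question under these assumptions.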
2,479
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Say, I have an array:
Problem: import numpy as np a = np.array([0, 1, 2, 5, 6, 7, 8, 8, 8, 10, 29, 32, 45]) result = (a.mean()-3*a.std(), a.mean()+3*a.std())
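As a possible follow-up (illustrative only, not part of the stated problem), the interval computed above can be used directly as a boolean mask to flag elements outside the third-standard-deviation range.
# Illustrative use of the interval computed above.
low, high = result
outside = a[(a < low) | (a > high)]   # elements outside (mean - 3*std, mean + 3*std)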
2,480
Given the following text description, write Python code to implement the functionality described below step by step Description: Non-Personalized Recommenders Assignment Overview This assignment will explore non-personalized recommendations. You will be given a 20x20 matrix where columns represent movies, rows represent users, and each cell represents a user-movie rating. Deliverables There are 4 deliverables for this assignment. Each deliverable represents a different analysis of the data provided to you. For each deliverable, you will submit a list of the top 5 movies as ranked by a particular metric. The 4 metrics are Step1: Loading the Data Step2: Non-Personalized Recommenders for Raiders of the Lost Ark Step3: Mean rating for Raiders of the Lost Ark (1981) Step4: Number of non-NA ratings for Raiders of the Lost Ark (1981) Step5: Percentage of ratings >=4 for Raiders of the Lost Ark (1981) Step6: Finding Association of Raiders of the Lost Ark (1981) with Star Wars Episode IV. The association with Star Wars Episode IV is defined as the number of users that rated BOTH Raiders of the Lost Ark (1981) and Star Wars Episode IV divided by the number of users that rated Star Wars Episode IV. Step7: Printing the Association of Raiders of the Lost Ark (1981) and Star Wars Episode IV Step8: Finding top 5 movies with the highest ratings Making a Pandas Series with the index name equal to the movie and the entry equal to the mean rating for each movie. Sliced the column names from of the movie_data dataframe from [1 Step9: Printing the top 5 rated movies Step10: Finding top 5 movies with the most ratings Making a Pandas Series with the index name equal to the movie and the entry equal to the number of non-Na ratings for each movie. Sliced the column names from of the movie_data dataframe from [1 Step11: Printing the top 5 movies with the most ratings Step12: Top 5 movies with Percentage of ratings >=4 Making a Pandas Series with the index name equal to the movie and the entry equal to the number of non-Na ratings for each movie. Sliced the column names from of the movie_data dataframe from [1 Step13: Printing Top 5 movies with Percentage of ratings >=4 Step14: Top 5 movies most similar to Star Wars (movie id =260) Step15: Finding Association of all movies with Star Wars Episode IV. The association with Star Wars Episode IV is defined as the number of users that rated BOTH movie i and Star Wars Episode IV divided by the number of users that rated Star Wars Episode IV. Below, we are looping over [2 Step16: Printing Top 5 movies most similar to Star Wars (movie id =260)
Python Code: import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns sns.set_style('darkgrid') %matplotlib inline Explanation: Non-Personalized Recommenders Assignment Overview This assignment will explore non-personalized recommendations. You will be given a 20x20 matrix where columns represent movies, rows represent users, and each cell represents a user-movie rating. Deliverables There are 4 deliverables for this assignment. Each deliverable represents a different analysis of the data provided to you. For each deliverable, you will submit a list of the top 5 movies as ranked by a particular metric. The 4 metrics are: Mean Rating: Calculate the mean rating for each movie, order with the highest rating listed first, and submit the top 5. % of ratings 4+: Calculate the percentage of ratings for each movie that are 4 or higher. Order with the highest percentage first, and submit the top 5. Rating Count: Count the number of ratings for each movie, order with the most number of ratings first, and submit the top 5. Top 5 Star Wars: Calculate movies that most often occur with Star Wars: Episode IV - A New Hope (1977) using the (x+y)/x method described in class. In other words, for each movie, calculate the percentage of Star Wars raters who also rated that movie. Order with the highest percentage first, and submit the top 5. Importing Libraries End of explanation # Loading the data into a Pandas dataframe movie_data = pd.read_csv('A1Ratings.csv') # Looking at the first 5 rows of the dataframe movie_data.head() #printing the column names of the dataframe movie_data.columns # Summarizing the data in the movie_data dataframe movie_data.describe() Explanation: Loading the Data End of explanation # Storing the "1198: Raiders of the Lost Ark (1981)" data into an array raid_lost_arc = movie_data["1198: Raiders of the Lost Ark (1981)"] raid_lost_arc Explanation: Non-Personalized Recommenders for Raiders of the Lost Ark End of explanation print '%.2f' % ( raid_lost_arc.mean() ) Explanation: Mean rating for Raiders of the Lost Ark (1981) End of explanation raid_lost_arc.count() Explanation: Number of non-NA ratings for Raiders of the Lost Ark (1981) End of explanation print '%.1f' % ( (len(raid_lost_arc[raid_lost_arc>=4])/float(raid_lost_arc.count()))*100.0 ) Explanation: Percentage of ratings >=4 for Raiders of the Lost Ark (1981) End of explanation # First, storing the Star Wars count star_wars_count = movie_data["260: Star Wars: Episode IV - A New Hope (1977)"].count() # Then multiply the Raiders of the Lost Ark and Star Wars data. # non-NA values will be the ones where both entries do not have NA. Then, count these entries rad_arc_star_wars_count = (movie_data["1198: Raiders of the Lost Ark (1981)"]*movie_data["260: Star Wars: Episode IV - A New Hope (1977)"]).count() Explanation: Finding Association of Raiders of the Lost Ark (1981) with Star Wars Episode IV. The association with Star Wars Episode IV is defined as the number of users that rated BOTH Raiders of the Lost Ark (1981) and Star Wars Episode IV divided by the number of users that rated Star Wars Episode IV. 
End of explanation print '%.1f' % ( (rad_arc_star_wars_count/float(star_wars_count))*100.0 ) Explanation: Printing the Association of Raiders of the Lost Ark (1981) and Star Wars Episode IV End of explanation rating_means = pd.Series([movie_data[col_name].mean() for col_name in movie_data.columns[1:]], index=movie_data.columns[1:]) Explanation: Finding top 5 movies with the highest ratings Making a Pandas Series with the index name equal to the movie and the entry equal to the mean rating for each movie. Sliced the column names from of the movie_data dataframe from [1:] since the first column is the user id. End of explanation rating_means.sort_values(ascending=False)[0:5] Explanation: Printing the top 5 rated movies End of explanation rating_count = pd.Series([movie_data[col_name].count() for col_name in movie_data.columns[1:]], index=movie_data.columns[1:]) Explanation: Finding top 5 movies with the most ratings Making a Pandas Series with the index name equal to the movie and the entry equal to the number of non-Na ratings for each movie. Sliced the column names from of the movie_data dataframe from [1:] since the first column is the user id. End of explanation rating_count.sort_values(ascending=False)[0:5] Explanation: Printing the top 5 movies with the most ratings End of explanation rating_positive = pd.Series([sum(movie_data[col_name]>=4)/float(movie_data[col_name].count()) for col_name in movie_data.columns[1:]], index=movie_data.columns[1:]) Explanation: Top 5 movies with Percentage of ratings >=4 Making a Pandas Series with the index name equal to the movie and the entry equal to the number of non-Na ratings for each movie. Sliced the column names from of the movie_data dataframe from [1:] since the first column is the user id. End of explanation rating_positive.sort_values(ascending=False)[0:5] Explanation: Printing Top 5 movies with Percentage of ratings >=4 End of explanation # First, storing the Star Wars ratings and the count of non-NA Star Wars ratings star_wars_rat = movie_data["260: Star Wars: Episode IV - A New Hope (1977)"] star_wars_count = float(movie_data["260: Star Wars: Episode IV - A New Hope (1977)"].count()) print star_wars_count Explanation: Top 5 movies most similar to Star Wars (movie id =260) End of explanation sim_val = pd.Series( [ (movie_data[col_name]*star_wars_rat).count()/star_wars_count for col_name in movie_data.columns[2:] ], index=movie_data.columns[2:] ) Explanation: Finding Association of all movies with Star Wars Episode IV. The association with Star Wars Episode IV is defined as the number of users that rated BOTH movie i and Star Wars Episode IV divided by the number of users that rated Star Wars Episode IV. Below, we are looping over [2:] to not include Star Wars Episode IV in the Association calculation. End of explanation sim_val.sort_values(ascending=False)[0:5] Explanation: Printing Top 5 movies most similar to Star Wars (movie id =260) End of explanation
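One design choice worth noting in the code above: multiplying two rating columns and calling .count() works because NaN propagates through the product, so only users who rated both movies contribute non-NaN entries. A small optional cross-check (my own addition, not part of the assignment) computes the same Raiders of the Lost Ark association explicitly and should reproduce the percentage printed earlier.
# Optional cross-check of the NaN-propagation trick used above.
col = "1198: Raiders of the Lost Ark (1981)"
both_rated = (movie_data[col].notnull() & star_wars_rat.notnull()).sum()
print '%.1f' % (both_rated / star_wars_count * 100.0)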
2,481
Given the following text description, write Python code to implement the functionality described below step by step Description: Homework 2 - Classification Dataset Two datasets are included, related to red and white vinho verde wine samples, from the north of Portugal. The goal is to model wine quality based on physicochemical tests (see [Cortez et al., 2009]) Step1: Importing the datasets Step2: Breaking datasets into training and testing sets Step3: Saving the train and test datasets Step4: Checking the saved data and their shapes
Python Code: import pandas as pd import numpy as np from sklearn.model_selection import train_test_split Explanation: Homework 2 - Classification Dataset Two datasets are included, related to red and white vinho verde wine samples, from the north of Portugal. The goal is to model wine quality based on physicochemical tests (see [Cortez et al., 2009]) End of explanation redSetPath = "classification/winequality-red.csv" # whiteSetPath = "classification/winequality-white.csv" #Reading in the raw data. Note that the features are seperated by ';' character redSet = pd.read_csv(redSetPath, sep=';') # whiteSet = pd.read_csv(whiteSetPath, sep=';') # redSet.drop(['index'], axis=1, inplace=True) redSet.head() # whiteSet.head() Explanation: Importing the datasets End of explanation #Breaking the datasets into 70% training and 30% testing red_train, red_test = train_test_split(redSet,test_size=0.30) red_train, red_valid = train_test_split(red_train,test_size=0.20) # white_train, white_test = train_test_split(whiteSet,test_size=0.30) # white_train, white_valid = train_test_split(white_train,test_size=0.20) Explanation: Braking datasets into training and testing sets End of explanation # Red Wine red_train_path = "classification/red_train.csv" red_valid_path = "classification/red_valid.csv" red_test_path = "classification/red_test.csv" # # White Wine # white_train_path = "classification/white_train.csv" # white_valid_path = "classification/white_valid.csv" # white_test_path = "classification/white_test.csv" red_train.to_csv(path_or_buf=red_train_path, index=False) red_valid.to_csv(path_or_buf=red_valid_path, index=False) red_test.to_csv(path_or_buf=red_test_path, index=False) # white_train.to_csv(path_or_buf=white_train_path, sep=';') # white_valid.to_csv(path_or_buf=white_valid_path, sep=';') # white_test.to_csv(path_or_buf=white_test_path, sep=';') Explanation: Saving the train and test datasets End of explanation print 'Red Wine - Number of Instances Per Set' print 'Training Set: %d'%(len(red_train)) print 'Validation Set: %d'%(len(red_valid)) print 'Testing Set: %d'%(len(red_test)) # print '' # print '' # print 'White Wine - Number of Instances Per Set' # print 'Training Set: %d'%(len(white_train)) # print 'Validation Set: %d'%(len(white_valid)) # print 'Testing Set: %d'%(len(white_test)) Explanation: Checking the saved data and their shapes: End of explanation
2,482
Given the following text description, write Python code to implement the functionality described below step by step Description: Healpix pixelization of DR72 SDSS Database First import all the modules such as healpy and astropy needed for analyzing the structure Step1: Read the data file Sorted and reduced column set data can now be 'read' to reduce RAM requirements of the table reading. Step2: Create a healpix map with NSIDE=64 (no. of pixels = 49152 as $NPIX=12\times NSIDE^2$) because the no. of galaxies in the survey are less. For higher resolution (later for dr12) we will consider NSIDE=512 or even 1024. For now, we will create a 64 NSIDE map. Step3: We have data of galaxies with redshifts between 0 and 0.5 ($0<z<0.5$). To look at a time slice/at a certain epoch we need to choose the list of galaxies within a redshift window. As, measurement of redshift has $\pm 0.05$ error. We can bin all the data into redshifts with range limited to 0.05 variation each. So, we have 10 databins with almost identical redshifts. We save each databin in a different file. Step4: We now, take each databin and assign the total no. of galaxies as the value of each pixel. The following routine will calculate the no. of galaxies by couting the occurence of pixel numbers in the file.
Python Code: import healpix_util as hu import astropy as ap import numpy as np from astropy.io import fits from astropy.table import Table import astropy.io.ascii as ascii from astropy.constants import c import matplotlib.pyplot as plt import math import scipy.special as sp Explanation: Healpix pixelization of DR72 SDSS Database First import all the modules such as healpy and astropy needed for analyzing the structure End of explanation sdssdr72=ascii.read('/home/rohin/Desktop/healpix/sdssdr72_sorted_z.dat') Explanation: Read the data file Sorted and reduced column set data can now be 'read' to reduce RAM requirements of the table reading. End of explanation NSIDE=64 dt72hpix=hu.HealPix("ring",NSIDE) Explanation: Create a healpix map with NSIDE=64 (no. of pixels = 49152 as $NPIX=12\times NSIDE^2$) because the no. of galaxies in the survey are less. For higher resolution (later for dr12) we will consider NSIDE=512 or even 1024. For now, we will create a 64 NSIDE map. End of explanation j=0 for i in range(1,17): pixdata = open("/home/rohin/Desktop/healpix/binned1/pixdata%d_%d.dat"%(NSIDE,i),'w') pixdata.write("ra\t dec\t z\t pix \n") #for j in range(len(sdssdr72)): try: while sdssdr72[j]['z']<0.03*i: pixdata.write("%f\t" %sdssdr72[j]['ra']) pixdata.write("%f\t" %sdssdr72[j]['dec']) pixdata.write("%f\t" %sdssdr72[j]['z']) pixdata.write("%d\n" %dt72hpix.eq2pix(sdssdr72[j]['ra'],sdssdr72[j]['dec'])) #print dt72hpix.eq2pix(sdssdr72[j]['ra'],sdssdr72[j]['dec']) j=j+1 except: pass pixdata.close() for i in range(1,17): pixdata = ascii.read("/home/rohin/Desktop/healpix/binned1/pixdata%d_%d.dat"%(NSIDE,i)) mpixdata = open("/home/rohin/Desktop/healpix/binned1/masked/pixdata%d_%d.dat"%(NSIDE,i),'w') mpixdata.write("ra\t dec\t z\t pix \n") for j in range((len(pixdata)-1)): if 100<pixdata[j]['ra']<250: mpixdata.write("%f\t" %pixdata[j]['ra']) mpixdata.write("%f\t" %pixdata[j]['dec']) mpixdata.write("%f\t" %pixdata[j]['z']) mpixdata.write("%d\n" %pixdata[j]['pix']) #pixdata.write("/home/rohin/Desktop/healpix/binned1/masked/pixdata_%d.dat"%i,format='ascii') #print dt72hpix.eq2pix(sdssdr72[j]['ra'],sdssdr72[j]['dec']) mpixdata.close() Explanation: We have data of galaxies with redshifts between 0 and 0.5 ($0<z<0.5$). To look at a time slice/at a certain epoch we need to choose the list of galaxies within a redshift window. As, measurement of redshift has $\pm 0.05$ error. We can bin all the data into redshifts with range limited to 0.05 variation each. So, we have 10 databins with almost identical redshifts. We save each databin in a different file. 
End of explanation pixdata = ascii.read("/home/rohin/Desktop/healpix/binned1/masked/pixdata%d_2.dat"%NSIDE) hpixdata=np.array(np.zeros(hu.nside2npix(NSIDE))) for j in range(len(pixdata)): hpixdata[pixdata[j]['pix']]+=1 hpixdata hu.orthview(hpixdata,rot=180) pixcl=hu.anafast(hpixdata,lmax=300) ell = np.arange(len(pixcl)) plt.figure() plt.plot(ell,np.log(pixcl)) plt.show() pixcl=hu.anafast(hpixdata,lmax=300) ell = np.arange(len(pixcl)) plt.figure() plt.plot(ell,np.sqrt(ell*(ell+1)*pixcl/(4*math.pi))) plt.show() theta=np.arange(0,np.pi,0.001) correldat = np.polynomial.legendre.legval(np.cos(theta),(2*ell+1)*np.absolute(pixcl))/(4*math.pi) plt.figure() plt.plot(theta[0:600]*180/math.pi,correldat[0:600]) plt.show() plt.figure() plt.plot(theta*180/math.pi,correldat) plt.show() randra,randdec=hu.randsphere(2200000) randhp=hu.HealPix("RING",NSIDE) randhppix=randhp.eq2pix(randra,randdec) randpixdat=np.array(np.zeros(hu.nside2npix(NSIDE))) for j in range(len(randhppix)): randpixdat[randhppix[j]]+=1 randmaphp=hu.mollview(randpixdat) randcl=hu.anafast(randpixdat,lmax=300) ell = np.arange(len(randcl)) plt.figure() plt.plot(ell,np.sqrt(ell*(ell+1)*randcl/(4*math.pi))) plt.show() correlrand = np.polynomial.legendre.legval(np.cos(theta),(2*ell+1)*np.absolute(randcl))/(4*math.pi) plt.figure() plt.plot(theta[0:600]*180/math.pi,correlrand[0:600]) plt.show() finalcorrel=correldat-correlrand plt.figure() plt.plot(theta[0:600]*180/math.pi,finalcorrel[0:600]) plt.show() finalpix=hpixdata-randpixdat hu.mollview(finalpix,rot=180) cl=hu.anafast(finalpix,lmax=300) ell = np.arange(len(cl)) plt.figure() plt.plot(ell,np.sqrt(ell*(ell+1)*cl/(4*math.pi))) plt.show() correlrand = np.polynomial.legendre.legval(np.cos(theta),(2*ell+1)*np.absolute(cl))/(4*math.pi) plt.figure() plt.plot(theta[0:600]*180/math.pi,correlrand[0:600]) plt.show() finalcl=pixcl-randcl correlrand = np.polynomial.legendre.legval(np.cos(theta),(2*ell+1)*np.absolute(finalcl))/(4*math.pi) plt.figure() plt.plot(theta[0:600]*180/math.pi,correlrand[0:600]) plt.show() Explanation: We now, take each databin and assign the total no. of galaxies as the value of each pixel. The following routine will calculate the no. of galaxies by couting the occurence of pixel numbers in the file. End of explanation
2,483
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: how to plot confusion matrix using python
Python Code:: from sklearn.metrics import confusion_matrix from sklearn.preprocessing import normalize import seaborn as sns cm = confusion_matrix(target, pred) normed_confusion_matrix = normalize(cm, axis = 1, norm = 'l1') cm_df = pd.DataFrame(normed_confusion_matrix,index, columns) sns.heatmap(cm_df, annot=True)
2,484
Given the following text description, write Python code to implement the functionality described below step by step Description: Exercises for Chapter 1 Training Machine Learning Algorithms for Classification Question 1. In the file algos/perceptron.py, implement Rosenblatt's perceptron algorithm by fleshing out the class Perceptron. When you're finished, run the code in the block below to test your implementation. Step1: Question 2. Raschka claims that without an epoch or a threshold of acceptable misclassification, the perceptron may not ever stop updating. Explain why this can happen, and give an example. Answer Step2: Answer Step3: Question 6. Implement stochastic gradient descent as an option for the Adaline class. Then, run the test code below. Step4: Question 7. Describe a situation in which you would choose to use batch gradient descent, a situation in which you would choose to use stochastic gradient descent, and a situation in which you would choose to use mini-batch gradient descent. Answer Step5: Question 9. Raschka claims that stochastic gradient descent could result in "cycles" if the order in which the samples were read (and corresponding weights updated) wasn't randomized, or "shuffled," before every iteration. Explain the intuition behind this idea, and describe what a "cycle" might look like. Answer
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd from algos.perceptron import Perceptron df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', header=None) y = df.iloc[0:100, 4].values y = np.where(y == 'Iris-setosa', 1, -1) X = df.iloc[0:100, [0, 2]].values ppn = Perceptron() ppn.fit(X, y) if (ppn.errors[-1] == 0): print('Looks good!') else: print("Looks like your classifier didn't converge to 0 :(") Explanation: Exercises for Chapter 1 Training Machine Learning Algorithms for Classification Question 1. In the file algos/perceptron.py, implement Rosenblatt's perceptron algorithm by fleshing out the class Perceptron. When you're finished, run the code in the block below to test your implementation. End of explanation X_std = (X - np.mean(X, axis=0)) / np.std(X, axis=0) plt.scatter(X[:, 0], X[:, 1], color='red', marker='o', label='X') plt.scatter(X_std[:, 0], X_std[:, 1], color='blue', marker='x', label="X'") plt.xlabel('sepal length') plt.ylabel('petal length') plt.legend(loc='upper left') plt.show() Explanation: Question 2. Raschka claims that without an epoch or a threshold of acceptable misclassification, the perceptron may not ever stop updating. Explain why this can happen, and give an example. Answer: For the perceptron error to converge to 0, the algorithm must reach a point where it does not misclassify any of the training data. If the training data is not completely linearly separable, however, this convergence is impossible: there simply does not exist a linear combination that cleanly separates the data into two classes. In the Iris dataset example that Raschka gives, we can completely separate setosa from versicolor based on petal length and sepal length (the two variables we test in the code sample above), but in most real-world data, such clean linear separation is unrealistic, and there is usually slight overlap between the two classes, even if they are generally linearly separable. By setting an epoch (maximum number of iterations) or threshold (maximum number of misclassifications that we can accept), we can ensure that the algorithm will stop training even if pure linear separation is impossible in the data. Question 3. The following diagram comes from Raschka's book. Try to answer the questions about it without looking back at the text. What is being depicted in the diagram on the left? How about the diagram on the right? Answer: The diagram on the left plots the activation function $\phi(w^{T}x)$ against the net input function $w^{T}x$ in a perceptron, demonstrating the classification effect of the activation function: when the combination of the weights and inputs is greater than 0, the classifier returns 1 (a "positive" class label), and otherwise it returns -1 (a "negative" class label). The diagram on the right plots two features $X_{1}$ and $X_{2}$ from an input vector against one another, showing that two classes (the red circles and blue checks) are linearly separable by the line $\phi(w^{T}x) = 0 $. Describe in words what the following symbols represent in the diagram on the left: The axes, $w^{T}x$ and $\phi(w^{T}x)$ The thick black line Answer: The $x$ axis, $w^{T}x$, represents the net input function of a perceptron (the linear combination of weight and input vectors). The $y$ axis, $\phi(w^{T}x)$, represents the activation function (the function that classifies the net input and returns a positive or negative class label). 
The thick black line represents the range of possible values taken on by the activation function $\phi(w^{T}x)$. When the net input input is greater than or equal to 0, the activation funtion returns 1; otherwise it returns -1. Since it is a step function, the vertical portion of the line (shown when $w^{T}x = 0$) does not actually represent possible values in the range of $\phi(w^{T}x)$. Describe in words what the following symbols represent in the diagram on the right: The red circles The blue pluses The axes, $X_{1}$ and $X_{2}$ The vertical dashed line Answer: The red circles represent samples that are classified by a negative class label. The blue pluses represent samples that are classified by a positive class label. The axes $X_{1}$ and $X_{2}$ represent two features of each sample that we can use to determine their class. The vertical dashed line represents the decision boundary between the two classes. True or False: In the diagram on the right, $X_{1} = \phi(w^{T}x) = 0$. Explain your reasoning. Answer: False. While the right half of this equality (that the decision surface can be modelled by $\phi(w^{T}x) = 0$) is true, the left half (that the feature $X_{1}$ is equivalent to $\phi(w^{T}x)$, or to $0$) is not. While the figure makes it seem as if the dashed line corresponding to the decision surface is related to the feature $X_{1}$, they are in fact unrelated, and their graphical overlap is a coincidence. True or False: in the general relationship depicted by the diagram on the right ($X_{1}$ vs. $X_{2}$), the dashed line must always be vertical. Explain your reasoning. Answer: False. As the line representing the decision surface for the classifier, any line that cleanly separates the data into the two classes (positive and negative) will work. Question 4. Plot $X$ and its standardized form $X'$ following the feature scaling algorithm that Raschka uses in the book. How does scaling the feature using the $t$-statistic change the sample distribution? End of explanation from algos.adaline import Adaline ada = Adaline() ada.fit(X_std, y) if (ada.cost[-1] < 5): print('Looks good!') else: print("Looks like your classifier didn't find the minimum :(") Explanation: Answer: As we can see in the figure above, scaling $X$ using the $t$-statistic maintains the overall shape of the data while centering the axes around the origin and transforming the samples such that the axes correspond to the number of standard deviations that each sample is from the sample means $\mu_{x}$ and $\mu_{y}$ (as opposed to the unit measurements for length present in the original data). Question 5. In the file algos/adaline.py, implement the Adaline rule in the class Adaline. When you're finished, run the code in the block below to test your implementation. End of explanation ada_sgd = Adaline(stochastic=True) ada_sgd.fit(X_std, y) if (ada_sgd.cost[1] < 1): print('Looks good!') else: print("Looks like your stochastic model isn't performing well enough :(") Explanation: Question 6. Implement stochastic gradient descent as an option for the Adaline class. Then, run the test code below. End of explanation new_X = df.iloc[100, [0, 2]] new_X = new_X - (np.mean(X, axis=0)) / np.std(X, axis=0) new_y = df.iloc[100, 4] new_y = np.where(new_y == 'Iris-setosa', 1, -1) ada_sgd.partial_fit(new_X, new_y) Explanation: Question 7. 
Describe a situation in which you would choose to use batch gradient descent, a situation in which you would choose to use stochastic gradient descent, and a situation in which you would choose to use mini-batch gradient descent. Answer: I would want to use batch gradient descent in a situation where my dataset was relatively small, but the need for precision in the final approximation is high. I would want to use mini-batch gradient descent when my dataset is quite large, and computation power is limited. I would want to use stochastic gradient descent if I wanted to stream my training data. Question 8. Implement online learning as an option for the Adaline class. Then, run the test code below. End of explanation ada = Adaline(eta=0.01) ada_sgd = Adaline(eta=0.01, stochastic=True) ada.fit(X_std, y) ada_sgd.fit(X_std, y) plt.plot(range(1, len(ada.cost) + 1), ada.cost, color='red', marker='o', label='Standard') plt.xlabel('Epoch') plt.ylabel('Cost') plt.legend(loc='upper left') plt.show() plt.close() plt.plot(range(1, len(ada_sgd.cost) + 1), ada_sgd.cost, color='blue', marker='x', label='SGD') plt.xlabel('Epoch') plt.ylabel('Cost') plt.legend(loc='upper left') plt.show() Explanation: Question 9. Raschka claims that stochastic gradient descent could result in "cycles" if the order in which the samples were read (and corresponding weights updated) wasn't randomized, or "shuffled," before every iteration. Explain the intuition behind this idea, and describe what a "cycle" might look like. Answer: Since SGD updates the weights with respect to each sample, and calculates the gradient separately for any given sample, it is sensitive to outliers (that is, samples with unusually large or small feature values). Intuitively, we can imagine that an outlier sample positioned at an inopportune index – close to the point where the model would converge, say – could consistently cause a model updating via SGD to "overshoot" the global minimum. Randomizing the order of the inputs helps make this kind of overshooting less likely as the iterations progress. Question 10. Verify that stochastic gradient descent improves the speed of convergence for Adaline in the case of the Iris dataset by plotting the errors against the iteration epoch in both cases. Then, briefly explain why this is the case. End of explanation
2,485
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step8: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step10: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step12: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step15: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step18: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch. Step21: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). Step24: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. Step27: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Step30: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note Step33: Build the Neural Network Apply the functions you implemented above to Step34: Neural Network Training Hyperparameters Tune the following parameters Step36: Build the Graph Build the graph using the neural network you implemented. Step39: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. 
Step41: Save Parameters Save the batch_size and save_path parameters for inference. Step43: Checkpoint Step46: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary to the <UNK> word id. Step48: Translate This will translate translate_sentence from English to French.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def sentence_to_id_list(sentence, dictionary): return [dictionary[word] for word in sentence.split()] def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) # TODO: Implement Function source_sentences = source_text.split('\n') source_lists = [sentence_to_id_list(sentence, source_vocab_to_int) for sentence in source_sentences] target_sentences = target_text.split('\n') target_lists = [sentence_to_id_list(sentence, target_vocab_to_int) + [target_vocab_to_int['<EOS>']] for sentence in target_sentences] return source_lists, target_lists DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) # TODO: Implement Function inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='targets') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, learning_rate, keep_prob DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Return the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability) End of explanation def process_decoding_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for dencoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decoding_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return decoding_input DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_decoding_input(process_decoding_input) Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch. 
End of explanation def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) _, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits # TODO: Implement Function fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) outputs, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, fn_train, inputs=dec_embed_input, sequence_length=sequence_length, scope=decoding_scope) logits = output_fn(outputs) return logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits # TODO: Implement Function decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, scope=decoding_scope) return logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). 
End of explanation def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function decoding_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) decoding_cell = tf.contrib.rnn.DropoutWrapper(decoding_cell, output_keep_prob=keep_prob) start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] with tf.variable_scope("decoding", reuse=None) as decoding_scope: # Output Layer output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) training_logits = decoding_layer_train( encoder_state, decoding_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) decoding_scope.reuse_variables() inference_logits = decoding_layer_infer( encoder_state, decoding_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return training_logits, inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. 
End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(embed_input, rnn_size, num_layers, keep_prob) # Decoder Embedding processed_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, processed_target_data) logits_tuple = decoding_layer( dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return logits_tuple DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation # Number of Epochs epochs = 3 # Batch Size batch_size = 128 # RNN Size rnn_size = 128 # Number of Layers num_layers = 3 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 1e-2 # Dropout Keep Probability keep_probability = 0.7 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. 
Set keep_probability to the Dropout keep probability End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import time def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation import re def sentence_to_words(sentence): # casefold() is unicode-safe, unlike lower() # from the Python docs: "Casefolded strings may be used for caseless matching." # https://docs.python.org/3/library/stdtypes.html return re.sub('\.', ' .', sentence.casefold()).split() def word_to_int(word, vocab_to_int): if vocab_to_int.get(word): return vocab_to_int[word] else: return vocab_to_int['<UNK>'] def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids # TODO: Implement Function return [word_to_int(word, vocab_to_int) for word in sentence_to_words(sentence)] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a big truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
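The <UNK> handling in sentence_to_seq() can be checked with a toy vocabulary (the dictionary and ids below are made up for illustration; the real vocab_to_int comes from helper.load_preprocess()):

toy_vocab_to_int = {'<UNK>': 2, 'he': 3, 'saw': 4, 'a': 5, 'truck': 6, '.': 7}  # made-up ids

# 'big' is not in the toy vocabulary, so it maps to the <UNK> id (2)
print(sentence_to_seq('He saw a big truck .', toy_vocab_to_int))
# expected output: [3, 4, 5, 2, 6, 7]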
2,486
Given the following text description, write Python code to implement the functionality described below step by step Description: Water Classification and Analysis of Lake Chad The previous tutorial introduced Landsat 7 imagery. The Lake Chad dataset was split into pre and post rainy season data-sets. The datasets were then cleaned up to produce a cloud-free and SLC-gap-free composite. This tutorial will focus on analyzing bodies of water using the results of a water classification algorithm called WOFS What to expect from this notebook Step1: Load Pre Rainy Season composite Step2: Lets print its contents as a high level check that data is loaded. Step3: <br> The pre_rain xarray should represents an area that looks somewhat like this Step4: Lets print this one as well . Step5: The post xarray represents an area that looks somewhat like this Step6: The structure of Wofs Data An interesting feature of Xarrays is their built-in support for plotting. Any data-arrays can plot its values using a plot function. Let's see what data-arrays come with wofs classifiers Step7: <br> The printout shows that wofs produced a dataset with a single data-array called wofs. Lets see what sort of values are in wofs by running an np.unique command on it. <br> Step8: So wofs only ever assumes one of two values. 1 for water, 0 for not water. This should produce a highly contrasted images when plotted using Xarrays built in plotting feature. Pre-Rain Water Classifcations Step9: Post-Rain Water Classifications Step10: Differencing Water products to reveal Water change The two images rendered above aren't too revealing when it comes to observing significant trends in water change. Perhaps we should take advantage of Xarrays arithmetic capabilities to detect or highlight change in our water classes. Arithmetic operations like addition and subtraction can be applied to xarray datasets that share the same shape. For example, the following differencing operation .... Step11: ... applies the difference operator to all values within the wofs data-array with extreme efficiency. If we were, to check unique values again... Step12: ... then we should encounter three values. 1, 0, -1. These values can be interpreted as values indicating change in water. The table below should serve as an incredibly clear reference Step13: Interpreting the plot. Relying on non-visual results The plot above shows a surprisingly different story from our expectation of water growth. Large sections of lake chad seem to have dis-appeared after the rainy season. The recommended next step would be to explore change by methods of counting. Step14: The results... Step15: How to interpret these results Several guesses can be made here as to why water was lost after the rainy season. Since that is out scope for this lecture(and beyond the breadth of this developer's knowledge) I'll leave definitive answers to the right researchers in this field. What can be provided, however, is an addititional figure regarding trends precipitation. Bringing back more GPM Data Lets bring back the GPM data one more time and increase the time range by one year in both directions. Instead of spanning the year of 2015 to 2016, let's do 2014 to 2017. Load GPM Using the same code from our first gpm tutorial, let's load in three years of rainfall data Step16: Display Data We'll aggregate spatial axis so that we're left with a mean value of the region for each point in time. Let's plot those points in a time series.
Python Code: import xarray as xr Explanation: Water Classification and Analysis of Lake Chad The previous tutorial introduced Landsat 7 imagery. The Lake Chad dataset was split into pre and post rainy season data-sets. The datasets were then cleaned up to produce a cloud-free and SLC-gap-free composite. This tutorial will focus on analyzing bodies of water using the results of a water classification algorithm called WOFS What to expect from this notebook: Loading in NETCDF files Introduction to WOFS for water classification Built in plotting utilities of xarrays Band arithmetic using xarrays Analysis of lake chad; pre and post rainy season Algorithmic Process <br> The algorithmic process is fairly simple. It is a chain of operations on our composite imagery. The goal here is to use water classifiers on our composite imagery to create comparabe water-products. Then to use the difference between the water products as a change classifier. <br> load composites for pre and post rainy season(genereated in previous notebook) run WOFS water classifier on both composites. (This should xarrays where where 1 is water, 0 is not water) calculate the difference between post and pre water products to generate a water change product. count all the positive values for water gain estimate counnt all the negative values for water loss estimate <br> Loading in composites <br> In our previous notebook two composites were created to represent cloud and SLC-gap imagery of pre-rainy season and post rainy season Landsat7 imagery. They were saved NETCDF files to use in this tutorial. Xarrays were designed with NETCDF as it's primary storage format so loading them should be a synch. Start with the import: <br> End of explanation pre_rain = xr.open_dataset('../demo/pre_rain.nc') Explanation: Load Pre Rainy Season composite End of explanation pre_rain Explanation: Lets print its contents as a high level check that data is loaded. End of explanation post_rain = xr.open_dataset('../demo/post_rain.nc') Explanation: <br> The pre_rain xarray should represents an area that looks somewhat like this: Note: figure above is cached result Load Post Rainy Season Composite End of explanation post_rain Explanation: Lets print this one as well . End of explanation from utils.data_cube_utilities.dc_water_classifier import wofs_classify import numpy as np clean_mask = np.ones((pre_rain.sizes['latitude'],pre_rain.sizes['longitude'])).astype(np.bool) pre_water = wofs_classify(pre_rain, clean_mask = clean_mask, mosaic = True) print(pre_water) post_water = wofs_classify(post_rain, clean_mask = clean_mask, mosaic = True) Explanation: The post xarray represents an area that looks somewhat like this: Note: figure above is cached result Water classification The goal of water classification is to classify each pixel as water or not water. The applications of water classification can range from identifying flood-plains or coastal boundaries, to observing trends like coastal erosion or the seasonal fluctuations of water. The purpose of this section is to classify bodies of water on pre and post rainy season composites so that we can start analyzing change in lake-chad's surface area. <br> <br> WOFS Water classifier WOFS( Water Observations From Space) is a water classifier developed by the Australian government following extreme flooding in 2011. It uses a regression tree machine learning model trained on several geographically and geologically varied sections of the Australian continent on over 25 years of Landsat imagery. 
While details of its implementation are outside of the scope of this tutorial, you can: access the Wofs code we're about to use on our github read the original research here Running the wofs classifier Running the wofs classifier is as simple as running a function call. It is typically good practice to create simple functions that accept an Xarray Dataset and return a processed XARRAY Dataset with new data-arrays within it. End of explanation pre_water Explanation: The structure of Wofs Data An interesting feature of Xarrays is their built-in support for plotting. Any data-arrays can plot its values using a plot function. Let's see what data-arrays come with wofs classifiers: End of explanation np.unique(pre_water.wofs) Explanation: <br> The printout shows that wofs produced a dataset with a single data-array called wofs. Lets see what sort of values are in wofs by running an np.unique command on it. <br> End of explanation pre_water.wofs.plot(cmap = "Blues") Explanation: So wofs only ever assumes one of two values. 1 for water, 0 for not water. This should produce a highly contrasted images when plotted using Xarrays built in plotting feature. Pre-Rain Water Classifcations End of explanation post_water.wofs.plot(cmap = "Blues") Explanation: Post-Rain Water Classifications End of explanation water_change = post_water - pre_water Explanation: Differencing Water products to reveal Water change The two images rendered above aren't too revealing when it comes to observing significant trends in water change. Perhaps we should take advantage of Xarrays arithmetic capabilities to detect or highlight change in our water classes. Arithmetic operations like addition and subtraction can be applied to xarray datasets that share the same shape. For example, the following differencing operation .... End of explanation np.unique(water_change.wofs) Explanation: ... applies the difference operator to all values within the wofs data-array with extreme efficiency. If we were, to check unique values again... End of explanation water_change.wofs.plot() Explanation: ... then we should encounter three values. 1, 0, -1. These values can be interpreted as values indicating change in water. The table below should serve as an incredibly clear reference: <br> \begin{array}{|c|c|} \hline post & pre & diff & interpretation \\hline 1 & 0 & 1-0 = +1 & water gain \\hline 0 & 1 & 0-1 = -1 & water loss \\hline 1 & 1 & 1-1= 0 & no-change \\hline 0 & 0 & 0-0=0 & no-change \\hline \end{array} <br> Understanding the intuition and logic behind this differencing, I think we're ready to take a look at a plot of water change over the area... End of explanation ## Create a boolean xarray water_growth = (water_change.wofs == 1) water_loss = (water_change.wofs == -1) ## Casting a 'boolean' to an 'int' makes 'True' values '1' and 'False' Values '0'. Summing should give us our totals total_growth = water_growth.astype(np.int8).sum() total_loss = water_loss.astype(np.int8).sum() Explanation: Interpreting the plot. Relying on non-visual results The plot above shows a surprisingly different story from our expectation of water growth. Large sections of lake chad seem to have dis-appeared after the rainy season. The recommended next step would be to explore change by methods of counting. End of explanation print("Growth:", int(total_growth.values)) print("Loss:", int(total_loss.values)) print("Net Change:", int(total_growth - total_loss)) Explanation: The results... 
End of explanation import datacube dc = datacube.Datacube(app = "chad_rainfall") ## Define Geographic boundaries using a (min,max) tuple. latitude = (12.75, 13.0) longitude = (14.25, 14.5) ## Specify a date range using a (min,max) tuple from datetime import datetime time = (datetime(2014,1,1), datetime(2017,1,2)) ## define the name you gave your data while it was being "ingested", as well as the platform it was captured on. product = 'gpm_imerg_gis_daily_global' platform = 'GPM' measurements = ['total_precipitation'] gpm_data = dc.load(latitude = latitude, longitude = longitude, product = product, platform = platform, measurements=measurements) Explanation: How to interpret these results Several guesses can be made here as to why water was lost after the rainy season. Since that is out scope for this lecture(and beyond the breadth of this developer's knowledge) I'll leave definitive answers to the right researchers in this field. What can be provided, however, is an addititional figure regarding trends precipitation. Bringing back more GPM Data Lets bring back the GPM data one more time and increase the time range by one year in both directions. Instead of spanning the year of 2015 to 2016, let's do 2014 to 2017. Load GPM Using the same code from our first gpm tutorial, let's load in three years of rainfall data: End of explanation times = gpm_data.time.values values = gpm_data.mean(['latitude', 'longitude']).total_precipitation.values import matplotlib.pyplot as plt plt.plot(times, values) Explanation: Display Data We'll aggregate spatial axis so that we're left with a mean value of the region for each point in time. Let's plot those points in a time series. End of explanation
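To put the raw pixel counts above into physical units, a rough back-of-the-envelope sketch (assuming the composites keep Landsat 7's nominal 30 m x 30 m pixel footprint, which is only an approximation on this latitude/longitude grid; water_change is the dataset computed above):

pixel_area_km2 = (30 * 30) / 1e6   # assumed ~0.0009 km^2 per pixel

gained_km2 = float((water_change.wofs == 1).sum()) * pixel_area_km2
lost_km2 = float((water_change.wofs == -1).sum()) * pixel_area_km2

print('Approximate surface water gained: %.1f km^2' % gained_km2)
print('Approximate surface water lost: %.1f km^2' % lost_km2)
print('Approximate net change: %.1f km^2' % (gained_km2 - lost_km2))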
2,487
Given the following text description, write Python code to implement the functionality described below step by step Description: Autoregressive Moving Average (ARMA) Step1: Sunspots Data Step2: Does our model obey the theory? Step3: This indicates a lack of fit. In-sample dynamic prediction. How good does our model do? Step4: Exercise Step5: Let's make sure this model is estimable. Step6: What does this mean? Step7: For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags. The partial autocorrelation function is a mixture of exponentials and damped sine waves after (p-q) lags. Step8: Exercise Step9: Hint Step10: P-value of the unit-root test, resoundingly rejects the null of a unit-root.
Python Code: %matplotlib inline from __future__ import print_function import numpy as np from scipy import stats import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.graphics.api import qqplot Explanation: Autoregressive Moving Average (ARMA): Sunspots data End of explanation print(sm.datasets.sunspots.NOTE) dta = sm.datasets.sunspots.load_pandas().data dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008')) del dta["YEAR"] dta.plot(figsize=(12,8)); fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2) arma_mod20 = sm.tsa.ARMA(dta, (2,0)).fit(disp=False) print(arma_mod20.params) arma_mod30 = sm.tsa.ARMA(dta, (3,0)).fit(disp=False) print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic) print(arma_mod30.params) print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic) Explanation: Sunpots Data End of explanation sm.stats.durbin_watson(arma_mod30.resid.values) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = arma_mod30.resid.plot(ax=ax); resid = arma_mod30.resid stats.normaltest(resid) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) fig = qqplot(resid, line='q', ax=ax, fit=True) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2) r,q,p = sm.tsa.acf(resid.values.squeeze(), qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) Explanation: Does our model obey the theory? End of explanation predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True) print(predict_sunspots) fig, ax = plt.subplots(figsize=(12, 8)) ax = dta.loc['1950':].plot(ax=ax) fig = arma_mod30.plot_predict('1990', '2012', dynamic=True, ax=ax, plot_insample=False) def mean_forecast_err(y, yhat): return y.sub(yhat).mean() mean_forecast_err(dta.SUNACTIVITY, predict_sunspots) Explanation: This indicates a lack of fit. In-sample dynamic prediction. How good does our model do? End of explanation from statsmodels.tsa.arima_process import arma_generate_sample, ArmaProcess np.random.seed(1234) # include zero-th lag arparams = np.array([1, .75, -.65, -.55, .9]) maparams = np.array([1, .65]) Explanation: Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order) Simulated ARMA(4,1): Model Identification is Difficult End of explanation arma_t = ArmaProcess(arparams, maparams) arma_t.isinvertible arma_t.isstationary Explanation: Let's make sure this model is estimable. End of explanation fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax.plot(arma_t.generate_sample(nsample=50)); arparams = np.array([1, .35, -.15, .55, .1]) maparams = np.array([1, .65]) arma_t = ArmaProcess(arparams, maparams) arma_t.isstationary arma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2) Explanation: What does this mean? 
End of explanation arma11 = sm.tsa.ARMA(arma_rvs, (1,1)).fit(disp=False) resid = arma11.resid r,q,p = sm.tsa.acf(resid, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) arma41 = sm.tsa.ARMA(arma_rvs, (4,1)).fit(disp=False) resid = arma41.resid r,q,p = sm.tsa.acf(resid, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) Explanation: For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags. The partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags. End of explanation macrodta = sm.datasets.macrodata.load_pandas().data macrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3')) cpi = macrodta["cpi"] Explanation: Exercise: How good of in-sample prediction can you do for another series, say, CPI End of explanation fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = cpi.plot(ax=ax); ax.legend(); Explanation: Hint: End of explanation print(sm.tsa.adfuller(cpi)[1]) Explanation: P-value of the unit-root test, resoundingly rejects the null of a unit-root. End of explanation
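As a hedged follow-up to the CPI exercise above (not part of the original notebook), statsmodels can search over candidate orders with an information-criterion grid; a sketch assuming the cpi series defined earlier, keeping in mind that differencing the series first may be preferable and that the search can take a little while:

res = sm.tsa.arma_order_select_ic(cpi, max_ar=4, max_ma=2, ic=['aic', 'bic'])
print(res.aic_min_order)  # (p, q) pair minimizing AIC
print(res.bic_min_order)  # (p, q) pair minimizing BIC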
2,488
Given the following text description, write Python code to implement the functionality described below step by step Description: Generating Weak Labels for Image Datasets (e.g. Person Riding Bike) Note Step1: Note Step2: 1. Load and Visualize Dataset First, we load the dataset and associated bounding box objects and labels for people riding bikes. Step3: We can visualize some positive examples of people riding bikes... Step4: ...and some negative examples. Step5: 2. Generate Primitives We write labeling functions (LFs) over primitives, instead of raw pixel values, becaues they are practical and interpretable. For this dataset, each image comes with object bounding boxes, extracted using off-the-shelf tools. We use the labels, positions, and sizes of each object as simple primitives and also combine them into more complex primitives, as seen in primitive_helpers.py. Note Step6: Object Relationship Based Primitives These primitives look at the relations among bikes and people in the images. They capture the relative... * position of bikes vs people (via bike_human_distance) * number of bikes vs people (via bike_human_nums) * size of bikes vs people (via bike_human_size) The code for these primitives can be found primitive_helpers.py. Step7: Assign and Name Primitives We assign the primitives and name them according to the variables we will use to refer to them in the labeling functions we develop next. For example, primitive_mtx[ Step8: 3. Write Labeling Functions (LFs) We now develop LFs that take different primitives in as inputs and apply a label based on the value of those primitives. Notice that each of these LFs are "weak"— they aren't fully precise, and they don't have complete coverage. Below, we have incldue the intuition that explains each of the LFs Step9: Assign Labeling Functions We create a list of the functions we used in L_fns and apply the labeling functions to the appropriate primitives to generate L, a labeling matrix. Note Step10: Calculate and Show Accuracy and Coverage of Labeling Functions Notice that while the labeling functions were intuitive for humans to write, they do not perform particularly well on their own. Hint Step11: 4. Generate Training Set At this point, we can take advantage of Snorkel's generative model to aggregate labels from our noisy labeling functions. Step12: Majority Vote To get a sense of how well our labeling functions perform when aggregated, we calcuate the accuracy of the training set labels if we took the majority vote label for each data point. This gives us a baseline for comparison against Snorkel's generative model. Step13: Generative Model For the Snorkel generative model, we assume that the labeling functions are conditionally independent given the true label. We train the generative model using the labels assigned by the labeling functions. For more advanced modeling of generative structure (i.e. using dependencies between primitives), refer to the Coral paradigm, as described in Varma et. al 2017. Step14: Probabilistic Label Statistics We view the distribution of weak labels produced by our generative model. Step15: We can also compare the empirical accuracies of our labeling functions to the learned accuracies of our generative model over the validation data. Step16: Note
Python Code: %load_ext autoreload %autoreload 2 import numpy as np import matplotlib.pyplot as plt %matplotlib inline import os Explanation: Generating Weak Labels for Image Datasets (e.g. Person Riding Bike) Note: This notebook assumes that Snorkel is installed. If not, see the Quick Start guide in the Snorkel README. In this tutorial, we write labeling functions over a set of unlabeled images to create a weakly-labeled dataset for person riding bike. Load and visualize dataset — Build intuition about heuristics and weak supervision for our task. Generate Primitives — Writing labeling functions over raw image pixels is quite difficult for most tasks. Instead, we first create low level primitives (e.g. bounding box sizes/positions), which we can then write LFs over. Write Labeling Functions — Express our heuristics as labeling functions over user-defined primitives. Generate Training Set — Aggregate our heuristic-based lableing functions to create a training set using the Snorkel paradigm. This process can be viewed as a way of leveraging off-the-shelf tools we already have (e.g. pretrained models, object detectors, etc.) for new tasks, which is very similar to the way we leveraged text processing tools in the other tutorials! In this approach, we show that incorporating Snorkel's generative model for labeling function aggregation shows a significant lift in accuracy over majority vote. While the tutorial we show takes advantage of very basic primitives and models, there's a lot of room to experiment here. For more, see recent work (Varma et. al 2017) that incorporates static analysis + primitive dependencies to infer structure in generative models. End of explanation %%capture import sys !{sys.executable} -m pip install scikit-image Explanation: Note: In this tutorial, we use scikit-image, which isn't a snorkel dependency. If you don't already have it installed, please run the following cell. End of explanation from data_loader import DataLoader loader = DataLoader() Explanation: 1. Load and Visualize Dataset First, we load the dataset and associated bounding box objects and labels for people riding bikes. End of explanation loader.show_examples(annotated=False, label=1) Explanation: We can visualize some positive examples of people riding bikes... End of explanation loader.show_examples(annotated=False, label=-1) Explanation: ...and some negative examples. End of explanation def has_bike(object_names): if ('cycle' in object_names) or ('bike' in object_names) or ('bicycle' in object_names): return 1 else: return 0 def has_human(object_names): if (('person' in object_names) or ('woman' in object_names) or ('man' in object_names)) \ and (('bicycle' in object_names) or 'bicycles' in object_names): return 1 else: return 0 def has_road(object_names): if ('road' in object_names) or ('street' in object_names) or ('concrete' in object_names): return 1 else: return 0 def has_cars(object_names): if ('car' in object_names) or ('cars' in object_names) or \ ('bus' in object_names) or ('buses' in object_names) or \ ('truck' in object_names) or ('trucks' in object_names): return 1 else: return 0 Explanation: 2. Generate Primitives We write labeling functions (LFs) over primitives, instead of raw pixel values, becaues they are practical and interpretable. For this dataset, each image comes with object bounding boxes, extracted using off-the-shelf tools. 
We use the labels, positions, and sizes of each object as simple primitives and also combine them into more complex primitives, as seen in primitive_helpers.py. Note: For this tutorial, we generate very simple primitives using off-the-shelf methods, but there is a lot of room for development and exploration here! Membership-based Primitives These primitives check whether certain objects appear in the images. End of explanation from primitive_helpers import bike_human_distance, bike_human_size, bike_human_nums def create_primitives(loader): m = 7 # number of primitives primitive_mtx = np.zeros((loader.train_num,m)) for i in range(loader.train_num): primitive_mtx[i,0] = has_human(loader.train_object_names[i]) primitive_mtx[i,1] = has_road(loader.train_object_names[i]) primitive_mtx[i,2] = has_cars(loader.train_object_names[i]) primitive_mtx[i,3] = has_bike(loader.train_object_names[i]) primitive_mtx[i,4] = bike_human_distance(loader.train_object_names[i], loader.train_object_x[i], loader.train_object_y[i]) area = np.multiply(loader.train_object_height[i], loader.train_object_width[i]) primitive_mtx[i,5] = bike_human_size(loader.train_object_names[i], area) primitive_mtx[i,6] = bike_human_nums(loader.train_object_names[i]) return primitive_mtx Explanation: Object Relationship Based Primitives These primitives look at the relations among bikes and people in the images. They capture the relative... * position of bikes vs people (via bike_human_distance) * number of bikes vs people (via bike_human_nums) * size of bikes vs people (via bike_human_size) The code for these primitives can be found primitive_helpers.py. End of explanation primitive_mtx = create_primitives(loader) p_keys = { 'has_human': primitive_mtx[:,0], 'has_road': primitive_mtx[:, 1], 'has_cars': primitive_mtx[:, 2], 'has_bike': primitive_mtx[:, 3], 'bike_human_distance': primitive_mtx[:, 4], 'bike_human_size': primitive_mtx[:, 5], 'bike_human_num': primitive_mtx[:, 6] } Explanation: Assign and Name Primitives We assign the primitives and name them according to the variables we will use to refer to them in the labeling functions we develop next. For example, primitive_mtx[:,0] is referred to as has_human. End of explanation def LF_street(has_human, has_road): if has_human >= 1: if has_road >= 1: return 1 else: return -1 return 0 def LF_vehicles(has_human, has_cars): if has_human >= 1: if has_cars >= 1: return 1 else: return -1 return 0 def LF_distance(has_human, has_bike, bike_human_distance): if has_human >= 1: if has_bike >= 1: if bike_human_distance <= np.sqrt(8): return 1 else: return 0 else: return -1 def LF_size(has_human, has_bike, bike_human_size): if has_human >= 1: if has_bike >= 1: if bike_human_size <= 1000: return -1 else: return 0 else: return -1 def LF_number(has_human, has_bike, bike_human_num): if has_human >= 1: if has_bike >= 1: if bike_human_num >= 2: return 1 if bike_human_num >= 1: return 0 if bike_human_num >= 0: return 1 else: return -1 Explanation: 3. Write Labeling Functions (LFs) We now develop LFs that take different primitives in as inputs and apply a label based on the value of those primitives. Notice that each of these LFs are "weak"— they aren't fully precise, and they don't have complete coverage. Below, we have incldue the intuition that explains each of the LFs: * LF_street: If the image has a human and a road, we think a person might be riding a bike. * LF_vechicles: If the image has a human and a vehicle, we think a person might be riding a bike. 
* LF_distance: If the image has a human and bike close to one another, we think that a person might be riding a bike. * LF_size: If the image has a human/bike around the same size (perhaps they're both in the foreground or background), we think a person might be riding a bike. * LF_number: If the image has the same number of bicycles and humans (i.e. primitive categorized as bike_human_num=2) or there are fewer humans than bikes (i.e. bike_human_num=0), we think a person might be riding a bike. End of explanation L_fns = [LF_street,LF_vehicles,LF_distance,LF_size,LF_number] L = np.zeros((len(L_fns),loader.train_num)).astype(int) for i in range(loader.train_num): L[0,i] = L_fns[0](p_keys['has_human'][i], p_keys['has_road'][i]) L[1,i] = L_fns[1](p_keys['has_human'][i], p_keys['has_cars'][i]) L[2,i] = L_fns[2](p_keys['has_human'][i], p_keys['has_bike'][i], p_keys['bike_human_distance'][i]) L[3,i] = L_fns[3](p_keys['has_human'][i], p_keys['has_bike'][i], p_keys['bike_human_size'][i]) L[4,i] = L_fns[4](p_keys['has_human'][i], p_keys['has_bike'][i], p_keys['bike_human_num'][i]) Explanation: Assign Labeling Functions We create a list of the functions we used in L_fns and apply the labeling functions to the appropriate primitives to generate L, a labeling matrix. Note: We usually have Snorkel manage our data using its ORM database backend, in which case we use the LabelAnnotator from the snorkel.annotations module. In this tutorial, we show how to explicitly construct labeling matrices manually, which can be useful when managing your data outside of Snorkel, as is the case with our image data! End of explanation total = float(loader.train_num) stats_table = np.zeros((len(L),2)) for i in range(len(L)): # coverage: (num labeled) / (total) stats_table[i,0] = np.sum(L[i,:] != 0)/ total # accuracy: (num correct assigned labels) / (total assigned labels) stats_table[i,1] = np.sum(L[i,:] == loader.train_ground)/float(np.sum(L[i,:] != 0)) import pandas as pd stats_table = pd.DataFrame(stats_table, index = [lf.__name__ for lf in L_fns], columns = ["Coverage", "Accuracy"]) stats_table Explanation: Calculate and Show Accuracy and Coverage of Labeling Functions Notice that while the labeling functions were intuitive for humans to write, they do not perform particularly well on their own. Hint: this is where the magic of Snorkel's generative model comes in! Note: we define coverage as the proportion of samples from which an LF does not abstain. Recall that each "uncertain" labeling function assigns 0. End of explanation from snorkel.learning import GenerativeModel from scipy import sparse import matplotlib.pyplot as plt L_train = sparse.csr_matrix(L.T) Explanation: 4. Generate Training Set At this point, we can take advantage of Snorkel's generative model to aggregate labels from our noisy labeling functions. End of explanation mv_labels = np.sign(np.sum(L.T,1)) print ('Coverage of Majority Vote on Train Set: ', np.sum(np.sign(np.sum(np.abs(L.T),1)) != 0)/float(loader.train_num)) print ('Accuracy of Majority Vote on Train Set: ', np.sum(mv_labels == loader.train_ground)/float(loader.train_num)) Explanation: Majority Vote To get a sense of how well our labeling functions perform when aggregated, we calcuate the accuracy of the training set labels if we took the majority vote label for each data point. This gives us a baseline for comparison against Snorkel's generative model. 
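One detail worth noting (an added aside): np.sign returns 0 whenever the labeling functions tie or all abstain, and those examples can never match the ±1 ground truth above. A small sketch to quantify and optionally break the ties, assuming mv_labels and loader from above:

ties = np.sum(mv_labels == 0)
print('Examples with no majority (ties/abstains):', ties)
rng = np.random.RandomState(0)
mv_broken = np.where(mv_labels == 0, rng.choice([-1, 1], size=len(mv_labels)), mv_labels)
print('Accuracy with random tie-breaking:', np.mean(mv_broken == loader.train_ground))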
End of explanation gen_model = GenerativeModel() gen_model.train(L.T, epochs=100, decay=0.95, step_size= 0.01/ L.shape[1], reg_param=1e-6) train_marginals = gen_model.marginals(L_train) Explanation: Generative Model For the Snorkel generative model, we assume that the labeling functions are conditionally independent given the true label. We train the generative model using the labels assigned by the labeling functions. For more advanced modeling of generative structure (i.e. using dependencies between primitives), refer to the Coral paradigm, as described in Varma et. al 2017. End of explanation plt.hist(train_marginals, bins=20) plt.show() Explanation: Probabilistic Label Statistics We view the distribution of weak labels produced by our generative model. End of explanation learned_table = gen_model.learned_lf_stats() empirical_acc = stats_table.values[:, 1] learned_acc = learned_table.values[:,0] compared_stats = pd.DataFrame(np.stack((empirical_acc, learned_acc)).T, index = [lf.__name__ for lf in L_fns], columns=['Empirical Acc.', 'Learned Acc.']) compared_stats Explanation: We can also compare the empirical accuracies of our labeling functions to the learned accuracies of our generative model over the validation data. End of explanation labels = 2 * (train_marginals > 0.9) - 1 print ('Coverage of Generative Model on Train Set:', np.sum(train_marginals != 0.5)/float(len(train_marginals))) print ('Accuracy of Generative Model on Train Set:', np.mean(labels == loader.train_ground)) Explanation: Note: Coverage still refers to our model's tendency abstention from assigning labels to examples in the dataset. In this case, Snorkel's generative model has full coverage as it generalizes the labeling functions— it never assigns the "uncertain" label of 0.5. End of explanation
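As an added check (not in the original notebook), the 0.9 cutoff used above is only one choice; a quick sweep over thresholds, assuming train_marginals and loader are still in scope:

for t in [0.5, 0.6, 0.7, 0.8, 0.9]:
    hard = 2 * (train_marginals > t) - 1
    print('threshold %.1f -> accuracy %.3f' % (t, np.mean(hard == loader.train_ground)))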
2,489
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualizing the stock market structure This example employs several unsupervised learning techniques to extract the stock market structure from variations in historical quotes. The quantity that we use is the daily variation in quote price Step1: Retrieve the data from the Internet Step2: Learn a graphical structure from the correlations Step3: Quote More precisely if one uses assume_centered=False, then the test set is supposed to have the same mean vector as the training set. If not so, both should be centered by the user, and assume_centered=True should be used. Jethro Step4: Find a low-dimension embedding for visualization Step5: Visualization
Python Code: print(__doc__) # Author: Gael Varoquaux [email protected] # License: BSD 3 clause import datetime import numpy as np import matplotlib.pyplot as plt try: from matplotlib.finance import quotes_historical_yahoo_ochl except ImportError: # quotes_historical_yahoo_ochl was named quotes_historical_yahoo before matplotlib 1.4 from matplotlib.finance import quotes_historical_yahoo as quotes_historical_yahoo_ochl from matplotlib.collections import LineCollection from sklearn import cluster, covariance, manifold Explanation: Visualizing the stock market structure This example employs several unsupervised learning techniques to extract the stock market structure from variations in historical quotes. The quantity that we use is the daily variation in quote price: quotes that are linked tend to cofluctuate during a day. Learning a graph structure We use sparse inverse covariance estimation to find which quotes are correlated conditionally on the others. Specifically, sparse inverse covariance gives us a graph, that is, a list of connections. For each symbol, the symbols that it is connected to are those useful to explain its fluctuations. Clustering We use clustering to group together quotes that behave similarly. Here, amongst the various clustering techniques available in scikit-learn, we use affinity propagation as it does not enforce equal-size clusters, and it can choose automatically the number of clusters from the data. Note that this gives us a different indication than the graph, as the graph reflects conditional relations between variables, while the clustering reflects marginal properties: variables clustered together can be considered as having a similar impact at the level of the full stock market. Embedding in 2D space For visualization purposes, we need to lay out the different symbols on a 2D canvas. For this we use manifold techniques to retrieve a 2D embedding. Visualization The output of the 3 models is combined in a 2D graph where nodes represent the stocks and edges the links between them: cluster labels are used to define the color of the nodes the sparse covariance model is used to display the strength of the edges the 2D embedding is used to position the nodes in the plane This example has a fair amount of visualization-related code, as visualization is crucial here to display the graph. One of the challenges is to position the labels minimizing overlap. For this we use a heuristic based on the direction of the nearest neighbor along each axis.
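Before running the sparse model on real quotes, a tiny synthetic sketch (an addition, using the same covariance.GraphLassoCV import as this example; newer scikit-learn versions rename it GraphicalLassoCV) can help build intuition: variables that are conditionally linked keep a non-zero entry in the estimated precision matrix, everything else is driven towards zero.

rng = np.random.RandomState(0)
toy = rng.randn(200, 5)
toy[:, 1] = toy[:, 0] + 0.1 * rng.randn(200)   # make two variables strongly linked
toy_model = covariance.GraphLassoCV()
toy_model.fit(toy)
print(np.round(toy_model.precision_, 2))        # off-diagonal zeros = conditionally independent pairs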
End of explanation # Choose a time period reasonably calm (not too long ago so that we get # high-tech firms, and before the 2008 crash) d1 = datetime.datetime(2003, 1, 1) d2 = datetime.datetime(2008, 1, 1) # kraft symbol has now changed from KFT to MDLZ in yahoo symbol_dict = { 'TOT': 'Total', 'XOM': 'Exxon', 'CVX': 'Chevron', 'COP': 'ConocoPhillips', 'VLO': 'Valero Energy', 'MSFT': 'Microsoft', 'IBM': 'IBM', 'TWX': 'Time Warner', 'CMCSA': 'Comcast', 'CVC': 'Cablevision', 'YHOO': 'Yahoo', 'DELL': 'Dell', 'HPQ': 'HP', 'AMZN': 'Amazon', 'TM': 'Toyota', 'CAJ': 'Canon', 'MTU': 'Mitsubishi', 'SNE': 'Sony', 'F': 'Ford', 'HMC': 'Honda', 'NAV': 'Navistar', 'NOC': 'Northrop Grumman', 'BA': 'Boeing', 'KO': 'Coca Cola', 'MMM': '3M', 'MCD': 'Mc Donalds', 'PEP': 'Pepsi', 'MDLZ': 'Kraft Foods', 'K': 'Kellogg', 'UN': 'Unilever', 'MAR': 'Marriott', 'PG': 'Procter Gamble', 'CL': 'Colgate-Palmolive', 'GE': 'General Electrics', 'WFC': 'Wells Fargo', 'JPM': 'JPMorgan Chase', 'AIG': 'AIG', 'AXP': 'American express', 'BAC': 'Bank of America', 'GS': 'Goldman Sachs', 'AAPL': 'Apple', 'SAP': 'SAP', 'CSCO': 'Cisco', 'TXN': 'Texas instruments', 'XRX': 'Xerox', 'LMT': 'Lookheed Martin', 'WMT': 'Wal-Mart', 'WBA': 'Walgreen', 'HD': 'Home Depot', 'GSK': 'GlaxoSmithKline', 'PFE': 'Pfizer', 'SNY': 'Sanofi-Aventis', 'NVS': 'Novartis', 'KMB': 'Kimberly-Clark', 'R': 'Ryder', 'GD': 'General Dynamics', 'RTN': 'Raytheon', 'CVS': 'CVS', 'CAT': 'Caterpillar', 'DD': 'DuPont de Nemours'} symbols, names = np.array(list(symbol_dict.items())).T quotes = [quotes_historical_yahoo_ochl(symbol, d1, d2, asobject=True) for symbol in symbols] print(quotes) open = np.array([q.open for q in quotes]).astype(np.float) close = np.array([q.close for q in quotes]).astype(np.float) # The daily variations of the quotes are what carry most information variation = close - open Explanation: Retrieve the data from Internet End of explanation edge_model = covariance.GraphLassoCV() # standardize the time series: using correlations rather than covariance # is more efficient for structure recovery X = variation.copy().T X /= X.std(axis=0) edge_model.fit(X) Explanation: Learn a graphical structure from the correlations End of explanation _, labels = cluster.affinity_propagation(edge_model.covariance_) n_labels = labels.max() for i in range(n_labels + 1): print('Cluster %i: %s' % ((i + 1), ', '.join(names[labels == i]))) Explanation: Quote More precisely if one uses assume_centered=False, then the test set is supposed to have the same mean vector as the training set. If not so, both should be centered by the user, and assume_centered=True should be used. Jethro: It means that here the test case should habe same mean vector as the training set. Cluster using affinity propagation End of explanation # We use a dense eigen_solver to achieve reproducibility (arpack is # initiated with random vectors that we don't control). In addition, we # use a large number of neighbors to capture the large-scale structure. 
node_position_model = manifold.LocallyLinearEmbedding( n_components=2, eigen_solver='dense', n_neighbors=6) embedding = node_position_model.fit_transform(X.T).T Explanation: Find a low-dimension embedding for visualization: find the best position of the nodes (the stocks) on a 2D plane End of explanation plt.figure(1, facecolor='w', figsize=(10, 8)) plt.clf() ax = plt.axes([0., 0., 1., 1.]) plt.axis('off') # Display a graph of the partial correlations partial_correlations = edge_model.precision_.copy() d = 1 / np.sqrt(np.diag(partial_correlations)) partial_correlations *= d partial_correlations *= d[:, np.newaxis] non_zero = (np.abs(np.triu(partial_correlations, k=1)) > 0.02) # Plot the nodes using the coordinates of our embedding plt.scatter(embedding[0], embedding[1], s=100 * d ** 2, c=labels, cmap=plt.cm.spectral) # Plot the edges start_idx, end_idx = np.where(non_zero) #a sequence of (*line0*, *line1*, *line2*), where:: # linen = (x0, y0), (x1, y1), ... (xm, ym) segments = [[embedding[:, start], embedding[:, stop]] for start, stop in zip(start_idx, end_idx)] values = np.abs(partial_correlations[non_zero]) lc = LineCollection(segments, zorder=0, cmap=plt.cm.hot_r, norm=plt.Normalize(0, .7 * values.max())) lc.set_array(values) lc.set_linewidths(15 * values) ax.add_collection(lc) # Add a label to each node. The challenge here is that we want to # position the labels to avoid overlap with other labels for index, (name, label, (x, y)) in enumerate( zip(names, labels, embedding.T)): dx = x - embedding[0] dx[index] = 1 dy = y - embedding[1] dy[index] = 1 this_dx = dx[np.argmin(np.abs(dy))] this_dy = dy[np.argmin(np.abs(dx))] if this_dx > 0: horizontalalignment = 'left' x = x + .002 else: horizontalalignment = 'right' x = x - .002 if this_dy > 0: verticalalignment = 'bottom' y = y + .002 else: verticalalignment = 'top' y = y - .002 plt.text(x, y, name, size=10, horizontalalignment=horizontalalignment, verticalalignment=verticalalignment, bbox=dict(facecolor='w', edgecolor=plt.cm.spectral(label / float(n_labels)), alpha=.6)) plt.xlim(embedding[0].min() - .15 * embedding[0].ptp(), embedding[0].max() + .10 * embedding[0].ptp(),) plt.ylim(embedding[1].min() - .03 * embedding[1].ptp(), embedding[1].max() + .03 * embedding[1].ptp()) plt.show() Explanation: Visualization End of explanation
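A non-graphical complement (an added sketch, assuming the partial_correlations and names arrays computed above): print the ten strongest conditional links instead of, or in addition to, drawing them.

iu = np.triu_indices_from(partial_correlations, k=1)
strengths = np.abs(partial_correlations[iu])
for idx in np.argsort(strengths)[::-1][:10]:
    i, j = iu[0][idx], iu[1][idx]
    print('%s -- %s: %.2f' % (names[i], names[j], partial_correlations[i, j]))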
2,490
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Conversão de tipo Step2: Operações com String (texto) Step3: Faça um programa que leia o nome, a qtde e o valor de um produto qualquer, apresente os dados do produto e o valor total. Ao final peça para o usuário informar se é cliente (bronze, prata ou ouro) e apresente um novo valor total com o desconto equivalente a sua categoria (5%, 7%, 10%) Step4: Altere o exemplo anterior para utilizar funções. Step5: Tamanho de strings. Faça um programa que leia 2 strings e informe o conteúdo delas seguido do seu comprimento. Informe também se as duas strings possuem o mesmo comprimento e são iguais ou diferentes no conteúdo. Compara duas strings String 1 Step6: Nome ao contrário em maiúsculas. Faça um programa que permita ao usuário digitar o seu nome e em seguida mostre o nome do usuário de trás para frente utilizando somente letras maiúsculas. Dica Step7: Nome na vertical em escada. Modifique o programa anterior de forma a mostrar o nome em formato de escada. F FU FUL FULA FULAN FULANO Step8: Faça um programa dentro de uma função para imprimir Step9: Faça um programa dentro de uma função para imprimir Step10: Faça um programa, com uma função que necessite de três argumentos, e que forneça a soma desses três argumentos. Step11: Faça um programa, com uma função que necessite de um argumento. A função retorna o valor de caractere ‘P’, se seu argumento for positivo, e ‘N’, se seu argumento for zero ou negativo. Step12: Faça um programa com uma função chamada somaImposto. A função possui dois parâmetros formais Step13: Faça um programa que converta da notação de 24 horas para a notação de 12 horas. Por exemplo, o programa deve converter 14 Step14: Faça um programa que use a função valor_pagamento para determinar o valor a ser pago por uma prestação de uma conta. O programa deverá solicitar ao usuário o valor da prestação e o número de dias em atraso e passar estes valores para a função valorPagamento, que calculará o valor a ser pago e devolverá este valor ao programa que a chamou. O programa deverá então exibir o valor a ser pago na tela. Após a execução o programa deverá voltar a pedir outro valor de prestação e assim continuar até que seja informado um valor igual a zero para a prestação. Neste momento o programa deverá ser encerrado, exibindo o relatório do dia, que conterá a quantidade e o valor total de prestações pagas no dia. O cálculo do valor a ser pago é feito da seguinte forma. Para pagamentos sem atraso, cobrar o valor da prestação. Quando houver atraso, cobrar 3% de multa, mais 0,1% de juros por dia de atraso. Step15: Dada uma lista LISTA1 com 50 números aleatórios (de 0 a 50), criar uma lista LISTA2 com apenas os valores pares de LISTA1. Ao final o programa deverá apresentar os elementos de LISTA1 e LISTA2 para conferência. Step16: Dada uma lista LISTA1 com 50 números aleatórios (de 0 a 50), criar uma lista LISTA2 com os valores de LISTA1 nas posições pares de LISTA2. Ao final o programa deverá apresentar os elementos de LISTA1 e os elementos de todas as posições de LISTA2 para conferência. Step17: Dada uma lista com 50 números aleatórios (de 0 a 9), apresentar ao final do programa o número que mais se repetiu e o número que menos se repetiu (caso um número não tenha aparecido, este deve ser o que menos se repetiu). Step18: Dada uma lista com 50 números aleatórios (de 0 a 20), criar um dicionário com a contagem de cada elemento da lista. 
Ao final o programa deverá apresentar os elementos da lista e o dicionário (chaves e valores) para conferência. Step19: Dada uma lista com 20 palavras, faça um programa que crie um dicionário onde a chave representa a primeira letra das palavras da lista e os valores representam todas as palavras da lista que começam com a letra da chave. Ao final do programa apresentar apenas o dicionário (chaves e valores). Step20: Dados dois conjuntos SET1 e SET2, cada qual contendo números aleatórios (de 0 a 50) verificar quais números estão presentes nos dois conjuntos e quais números não aparecem em nenhum dos dois conjuntos. Ao final do programa apresentar todos esses dados. Step21: Dados três conjuntos SET1, SET2 e SET3, cada qual contendo números aleatórios (de 0 a 50) verificar quais números estão presentes nos três conjuntos e quais números estão presentes apenas em SET1, ou seja, estes números não podem estar aparecendo em SET2 ou SET3. Ao final do programa apresentar todos esses dados. Step22: Faça um programa para manipular um cadastro de produtos de informática. Para cada produto deverá ser registrado o nome, qtde em estoque, preço unitário e tipo. O programa deverá permitir operações de inclusão, exclusão, busca por nome, busca por tipo e exibição ordenada dos dados. Step23: Faça um programa que receba do usuário um arquivo texto e mostre na tela quantas linhas esse arquivo possui. Step24: Faça um programa que receba do usuário um arquivo texto e mostre na tela quantas letras são vogais. Step25: Faça um programa que receba do usuário um arquivo texto e um caracter. Mostre na tela quantas vezes aquele caractere ocorre dentro do arquivo. Step26: Faça um programa que receba do usuário um arquivo texto e mostre na tela quantas vezes cada letra do alfabeto aparece dentro do arquivo. Step27: Faça um programa que receba do usuário um arquivo texto. Crie outro arquivo texto contendo o texto do arquivo de entrada, mas com as vogais substituídas por *. Step28: Expressões regulares em python Step29: Verificar se a entrada do usuário é um CPF em formato válido utilizando expressões regulares. Step30: Faça um programa que peça um número IP válido. O programa não deve terminar enquanto o número IP for inválido. A solução deve utilizar expressão regular. Step31: Faça um programa que peça uma data no formato (DD/MM/YYYY). O programa não deve terminar enquanto o formato da data for inválido. A solução deve utilizar expressão regular. Step32: Faça um programa que exiba todas as palavras que começam com uma determinada letra (maiúscula ou minúscula) de um texto qualquer. A solução deve utilizar expressão regular. Step33: Refaça o programa de cadastro de produtos de informática para validar os dados de entrada utilizando expressões regulares. Step34: Tratamento de Exceções com python Step35: Gráfico com Python Step36: Faça um programa que gere 50 números aleatórios de 0-50 e plote um gráfico de linha apresentando os 50 valores. Refaça o programa anterior em que o usuário informa um arquivo texto e mostra na tela quantas vezes cada letra do alfabeto aparece dentro do arquivo. Na nova versão os dados devem ser exibidos em um gráfico de barra.
Python Code: nome1 = "Maria" # criando uma variável texto idade1 = 42 nome2 = input("Digite seu nome: ") # pedindo dados ao usuário idade2 = int(input("Digite sua idade: ")) print(type(idade2)) # verificando o tipo da variável print(idade2+10) print("Olá " + nome1 + "! Sua idade é: " + str(idade1)) # imprimindo a mensagem de olá print(f"Olá {nome2} - {idade2+10}, seja bem vindo(a)!") # imprimindo mensagem formatada Explanation: <a href="https://colab.research.google.com/github/jppreti/AlphaCosmeticos/blob/master/Copy_of_LP20221.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Códigos em Python apresentados na disciplina de Laboratório de Programação da turma de 2022/1. Primeiro exemplo em python: End of explanation x = 5 # criamos uma variável com valor inteiro y = 4.3 # criamos uma variável com valor float (real) print(type(x)) # apresentando o tipo de x print(type(y)) # apresentando o tipo de y print(f"x: {x}") # exibindo o valor de x print(f"y: {y}") # exibindo o valor de y z = float(x) # conversão do valor de x para float w = int(y) # conversão do valor de y para int print(f"z: {z}") # exibindo o valor de z que é o valor de x convertido em float print(f"w: {w}") # exibindo o valor de w que é o valor de y convertido em float Explanation: Conversão de tipo: End of explanation nome = " João Paulo " print(len(nome)) #tamanho da string nome = nome.strip() #remove os espaços em branco nas extremidades print(len(nome)) print(nome[1]) #o primeiro caractere é o de posição 0 print(nome[2:4]) #de 2 a 3 (último não incluso) print(nome[-5:-1]) #começa a contagem pelo final da string print(nome[0:10:2]) ##de 0 a 10 (último não incluso) saltando duas posições a cada caractere print(nome.lower()) #transforma em minúsculo print(nome.upper()) #transforma em maiúsculo print(nome.replace("o","0")) #substitui a string print(nome.split(" ")) #returns ["João","Paulo"] print("au" in nome) #retorna True se existe na string print("au" not in nome) #retorna True se não existe na string idade = 40 texto = "Meu nome é {} e tenho {} anos." print(texto.format(nome,idade)) nota = float(input("Digite a nota do estudante: ")) if nota == 10: print("Aprovado com Mérito") elif nota >= 6: print("Aprovado") elif nota >= 5: print("Em recuperação") notapf = float(input("Digite a nota da PF: ")) media = (nota + notapf)/2 if (media >= 5): print("Aprovado na Recuperação") else: print("Reprovado na Recuperação") else: print("Reprovado") Explanation: Operações com String (texto) End of explanation nome = input("Digite o nome do produto: ") qtde = int(input("Digite a qtde do produto: ")) valor_produto = float(input("Digite o valor unitário do produto: ")) print(f"Nome: {nome}") print(f"Qtde: {qtde}") print(f"Valor Unitário: {valor_produto}") valor_total = qtde*valor_produto print(f"Valor Total: {valor_total}") categoria_cliente = input("Informe sua categoria(bronze, prata , ouro): ") categoria_cliente = categoria_cliente.strip().lower() desconto = 0.0 if categoria_cliente == "bronze": desconto = 5/100 # desconto = valor_total * (5/100) elif categoria_cliente == "prata": desconto = 7/100 elif categoria_cliente == "ouro": desconto - 10/100 else: desconto = 0.0 novo_valor_total = valor_total * (1-desconto) # valor_total - desconto print(f"Valor total com desconto para cliente {categoria_cliente}: {novo_valor_total}") Explanation: Faça um programa que leia o nome, a qtde e o valor de um produto qualquer, apresente os dados do produto e o valor total. 
Ao final peça para o usuário informar se é cliente (bronze, prata ou ouro) e apresente um novo valor total com o desconto equivalente a sua categoria (5%, 7%, 10%) End of explanation def valor_total(valor_unitario, qtde, desconto): return valor_unitario * qtde * (1-desconto) nome = input("Digite o nome do produto: ") qtde = int(input("Digite a qtde do produto: ")) valor_produto = float(input("Digite o valor unitário do produto: ")) print(f"Nome: {nome}") print(f"Qtde: {qtde}") print(f"Valor Unitário: {valor_produto}") print(f"Valor Total: {valor_total(valor_produto,qtde,0)}") categoria_cliente = input("Informe sua categoria(bronze, prata , ouro): ") categoria_cliente = categoria_cliente.strip().lower() desconto = 0.0 if categoria_cliente == "bronze": desconto = 5/100 # desconto = valor_total * (5/100) elif categoria_cliente == "prata": desconto = 7/100 elif categoria_cliente == "ouro": desconto - 10/100 else: desconto = 0.0 print(f"Valor total com desconto para cliente {categoria_cliente}: {valor_total(valor_produto,qtde,desconto)}") Explanation: Altere o exemplo anterior para utilizar funções. End of explanation string1 = input("Digite o primeiro texto: ") string2 = input("Digite o segundo texto: ") tam1 = len(string1) tam2 = len(string2) print(f"Tamanho de {string1}: {tam1}") print(f"Tamanho de {string2}: {tam2}") if (tam1 == tam2): print("São de tamanhos iguais") else: print("São de tamanhos diferentes") if (string1 == string2): print("O conteúdo dos dois textos é igual") else: print("O conteúdo dos dois textos é diferente") Explanation: Tamanho de strings. Faça um programa que leia 2 strings e informe o conteúdo delas seguido do seu comprimento. Informe também se as duas strings possuem o mesmo comprimento e são iguais ou diferentes no conteúdo. Compara duas strings String 1: Brasil Hexa 2022 String 2: Brasil! Hexa 2022! Tamanho de "Brasil Hexa 2022": 16 caracteres Tamanho de "Brasil! Hexa 2022!": 18 caracteres As duas strings são de tamanhos diferentes. As duas strings possuem conteúdo diferente. End of explanation nome = input("Nome: ").upper() print(nome[::-1]) Explanation: Nome ao contrário em maiúsculas. Faça um programa que permita ao usuário digitar o seu nome e em seguida mostre o nome do usuário de trás para frente utilizando somente letras maiúsculas. Dica: lembre−se que ao solicitar o nome, o usuário pode digitar letras maiúsculas ou minúsculas. End of explanation nome = input("Nome: ").upper() for x in range(len(nome)+1): print(nome[0:x]) Explanation: Nome na vertical em escada. Modifique o programa anterior de forma a mostrar o nome em formato de escada. F FU FUL FULA FULAN FULANO End of explanation def sequencia(numero): for i in range(1,n+1): for r in range (i): print(i, end=" ") print("") n = int(input("digite o valor de n: ")) sequencia(n) Explanation: Faça um programa dentro de uma função para imprimir: 1 2 2 3 3 3 ..... n n n n n n ... n para um n informado pelo usuário. Use uma função que receba um valor n inteiro e imprima até a n-ésima linha. End of explanation def sequencia2(numero): for i in range(1,n+1): for r in range (1,i+1): print(r, end=" ") # não quebra linha print("") # quebra linha n = int(input("digite o valor de n: ")) sequencia2(n) Explanation: Faça um programa dentro de uma função para imprimir: 1 1 2 1 2 3 ..... 1 2 3 ... n para um n informado pelo usuário. Use uma função que receba um valor n inteiro imprima até a n-ésima linha. 
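An added observation (not part of the original course material): as written, sequencia and sequencia2 accept a parameter numero but iterate over the global n, so they only work because n exists in the calling scope. A corrected sketch for the second version:

def sequencia2_corrigida(numero):
    # uses the parameter instead of relying on the global n
    for i in range(1, numero + 1):
        for r in range(1, i + 1):
            print(r, end=" ")
        print("")

sequencia2_corrigida(4)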
End of explanation def soma (valor1, valor2, valor3): return valor1+valor2+valor3 print(soma(1,2,3)) Explanation: Faça um programa, com uma função que necessite de três argumentos, e que forneça a soma desses três argumentos. End of explanation def sinal(numero): if numero > 0: return "P" else: return "N" n = int(input("Digite um número: ")) print(f"Este número é {sinal(n)}") Explanation: Faça um programa, com uma função que necessite de um argumento. A função retorna o valor de caractere ‘P’, se seu argumento for positivo, e ‘N’, se seu argumento for zero ou negativo. End of explanation def somaImposto(taxaImposto, custo): return (1 + taxaImposto/100)*custo imposto = float(input('Digite a taxa de imposto: ')) custo = float(input('Digite o custo: ')) print(f'Valor com imposto: {somaImposto(imposto,custo)}') Explanation: Faça um programa com uma função chamada somaImposto. A função possui dois parâmetros formais: taxaImposto, que é a quantia de imposto sobre vendas expressa em porcentagem e custo, que é o custo de um item antes do imposto. A função retorna o valor de custo acrescido do imposto. End of explanation def converta(hora, minuto): if 0 < hora <= 12 and 0 <= minuto < 60: print(f'{hora}:{minuto} AM') elif 12 < hora < 24 and 0 < minuto < 60: print(f'{hora - 12}:{minuto} PM') else: print('Valor inválido') while True: #condição de saída hora = 999 horario = input("Digite o horário (HH:mm): ") if horario == "999": break; hora = int(horario.split(":")[0]) minuto = int(horario.split(":")[1]) #hora = = int(input('Hora: ')) #minuto = int(input('Minuto: ')) converta(hora,minuto) print('='*12) # imprimir 12 vezes o mesmo caractere = Explanation: Faça um programa que converta da notação de 24 horas para a notação de 12 horas. Por exemplo, o programa deve converter 14:25 em 2:25 P.M. A entrada é dada em dois inteiros. Deve haver pelo menos duas funções: uma para fazer a conversão e uma para a saída. Registre a informação A.M./P.M. como um valor ‘A’ para A.M. e ‘P’ para P.M. Assim, a função para efetuar as conversões terá um parâmetro formal para registrar se é A.M. ou P.M. Inclua um loop que permita que o usuário repita esse cálculo para novos valores de entrada todas as vezes que desejar. End of explanation def valor_pagamento(valorPrestacao, diasAtraso): if diasAtraso<=0: multa = 0 else: multa = 3/100+ (0.1/100)*diasAtraso return valorPrestacao*(1+multa) texto = "" qtde = 0 totalPagamento = 0 while True: valorPrestacao = float(input("Valor da prestação: ")) if valorPrestacao == 0: break; qtde += 1 diasAtraso = int(input("Dias de atraso: ")) totalPagamento += valor_pagamento(valorPrestacao,diasAtraso) texto += f"Prestação de R$ {valorPrestacao}, com {diasAtraso} dias de atraso ficou em R$ {str(valor_pagamento(valorPrestacao,diasAtraso))}\n" print(texto) print(f"Qtde de pagamentos realizados: {qtde}") print(f"Valor total dos pagamentos: {totalPagamento}") Explanation: Faça um programa que use a função valor_pagamento para determinar o valor a ser pago por uma prestação de uma conta. O programa deverá solicitar ao usuário o valor da prestação e o número de dias em atraso e passar estes valores para a função valorPagamento, que calculará o valor a ser pago e devolverá este valor ao programa que a chamou. O programa deverá então exibir o valor a ser pago na tela. Após a execução o programa deverá voltar a pedir outro valor de prestação e assim continuar até que seja informado um valor igual a zero para a prestação. 
Neste momento o programa deverá ser encerrado, exibindo o relatório do dia, que conterá a quantidade e o valor total de prestações pagas no dia. O cálculo do valor a ser pago é feito da seguinte forma. Para pagamentos sem atraso, cobrar o valor da prestação. Quando houver atraso, cobrar 3% de multa, mais 0,1% de juros por dia de atraso. End of explanation import random lista1 = [] for x in range(50): numero_aleatorio = random.randrange(0,51) lista1.append(numero_aleatorio) # lista1+=numero_aleatorio lista2 = [] for numero in lista1: if numero%2 == 0: # se o resto da divisão por 2 for zero o número é par lista2.append(numero) print (lista1) print (lista2) Explanation: Dada uma lista LISTA1 com 50 números aleatórios (de 0 a 50), criar uma lista LISTA2 com apenas os valores pares de LISTA1. Ao final o programa deverá apresentar os elementos de LISTA1 e LISTA2 para conferência. End of explanation import random lista1 = [] for x in range(50): numero_aleatorio = random.randrange(0,51) lista1.append(numero_aleatorio) # lista1+=numero_aleatorio lista2 = [] for numero in lista1: lista2.append(numero) # insere o número na posição par lista2.append(None) # insere vazio na posição ímpar print(lista1) print(lista2) Explanation: Dada uma lista LISTA1 com 50 números aleatórios (de 0 a 50), criar uma lista LISTA2 com os valores de LISTA1 nas posições pares de LISTA2. Ao final o programa deverá apresentar os elementos de LISTA1 e os elementos de todas as posições de LISTA2 para conferência. End of explanation import logging import random logging.basicConfig(level=logging.DEBUG, format='%(levelname)s - %(message)s') numeros = [] logging.debug(f"numeros[]: {numeros}") logging.info("Criando 50 números aleatórios") for x in range(50): logging.debug(f"x: {x}") numeros.append(random.randrange(0,10)) # adicionando 50 números de 0-9 logging.debug(f"numeros[]: {numeros}") numero = [] # lista que guarda a qtde de repetições de cada número logging.info("Verificando a qtde de repetições de cada número") logging.debug(f"numero[]: {numero}") for n in range(10): # percorrendo os 10 números (0-9) logging.debug(f"n: {n}") logging.debug(f"count({n}): {numeros.count(n)}") numero.append(numeros.count(n)) # contando as repetições de cada número (0-9) logging.debug(f"numero[]: {numero}") print(f"50 números aleatórios: {numeros}") print(f"Qtde de repetições de cada número:{numero}") qtdeMaiorRepeticao = -1 logging.debug(f"qtdeMaiorRepeticao: {qtdeMaiorRepeticao}") qtdeMenorRepeticao = 1000000 logging.debug(f"qtdeMenorRepeticao: {qtdeMenorRepeticao}") logging.info("Descobrindo a menor e a maior repetição") for n in numero: logging.debug(f"n: {n}") logging.debug(f"qtdeMenorRepeticao: {qtdeMenorRepeticao}") logging.debug(f"qtdeMaiorRepeticao: {qtdeMaiorRepeticao}") if n > qtdeMaiorRepeticao: qtdeMaiorRepeticao = n if n < qtdeMenorRepeticao: qtdeMenorRepeticao = n print(f"Número que mais se repetiu: {numero.index(qtdeMaiorRepeticao)}") print(f"Número que menos se repetiu: {numero.index(qtdeMenorRepeticao)}") Explanation: Dada uma lista com 50 números aleatórios (de 0 a 9), apresentar ao final do programa o número que mais se repetiu e o número que menos se repetiu (caso um número não tenha aparecido, este deve ser o que menos se repetiu). 
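A more compact alternative for this counting exercise (an added sketch, not from the original course code) uses collections.Counter; the counter is seeded with every digit so the "least repeated" rule still covers numbers that never appear:

import random
from collections import Counter

numeros = [random.randrange(0, 10) for _ in range(50)]
contagem = Counter({n: 0 for n in range(10)})   # start every digit at zero
contagem.update(numeros)
print(numeros)
print("Mais repetido:", max(contagem, key=contagem.get))
print("Menos repetido:", min(contagem, key=contagem.get))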
End of explanation import logging import random logging.basicConfig(level=logging.DEBUG, format='%(levelname)s - %(message)s') numeros = [] logging.debug(f"numeros[]: {numeros}") logging.info("Criando 50 números aleatórios") for x in range(50): logging.debug(f"x: {x}") numeros.append(random.randrange(0,21)) # adicionando 50 números de 0-9 logging.debug(f"numeros[]: {numeros}") numero = {} # dicionário que guarda a qtde de repetições de cada número logging.info("Verificando a qtde de repetições de cada número") logging.debug(f"numero{{}}: {numero}") for n in range(21): # percorrendo os 10 números (0-9) logging.debug(f"n: {n}") logging.debug(f"count({n}): {numeros.count(n)}") numero[n] = numeros.count(n) # contando as repetições de cada número (0-9) logging.debug(f"numero{{}}: {numero}") print(f"50 números aleatórios: {numeros}") print(f"Qtde de repetições de cada número:{numero}") Explanation: Dada uma lista com 50 números aleatórios (de 0 a 20), criar um dicionário com a contagem de cada elemento da lista. Ao final o programa deverá apresentar os elementos da lista e o dicionário (chaves e valores) para conferência. End of explanation import logging logging.basicConfig(level=logging.DEBUG, format='%(levelname)s - %(message)s') logging.info("Carregando 20 palavras") palavras = ["armário","cadeira","computador","caneta","lápis", "borracha","monitor","aleatório","número","final", "programa","papel","hardware","conjunto","presente", "mouse","teclado","relógio","tênis","camiseta"] logging.debug(f"palavras[]: {palavras}") logging.debug(f"len(palavras): {len(palavras)}") logging.info("Indexando palavras") dicionario = {} logging.debug(f"dicionario: {dicionario}") for palavra in palavras: letra = palavra[0] logging.debug(f"palavra: {palavra}") logging.debug(f"letra: {letra}") if letra in dicionario.keys(): dicionario[letra].append(palavra) # se já existir a letra no dicionário, basta fazer o append para adicionar a nova palavra else: # se não existir a letra no dicionário, é preciso criar uma lista e guardar a palavra nela dicionario[letra] = [palavra] logging.debug(f"dicionario{{}}: {dicionario}") Explanation: Dada uma lista com 20 palavras, faça um programa que crie um dicionário onde a chave representa a primeira letra das palavras da lista e os valores representam todas as palavras da lista que começam com a letra da chave. Ao final do programa apresentar apenas o dicionário (chaves e valores). End of explanation import logging import random logging.basicConfig(level=logging.DEBUG, format='%(levelname)s - %(message)s') set1 = set() set2 = set() todos_numeros = set(range(51)) logging.info("Gerando os conjuntos set1 e set2 com 20 números aleatórios (0-50)") for numero in range(20): set1.add(random.randrange(0,51)) # adicionando número de 0-50 set2.add(random.randrange(0,51)) # adicionando número de 0-50 logging.debug(f"set1{{}}: {set1}") logging.debug(f"set2{{}}: {set2}") logging.debug(f"todos_numeros{{}}: {todos_numeros}") logging.info(f"Números presentes nos dois conjuntos: {set1 & set2}") logging.info(f"Números não presentes nos dois conjuntos: {todos_numeros - (set1 | set2)}") Explanation: Dados dois conjuntos SET1 e SET2, cada qual contendo números aleatórios (de 0 a 50) verificar quais números estão presentes nos dois conjuntos e quais números não aparecem em nenhum dos dois conjuntos. Ao final do programa apresentar todos esses dados. 
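A compact variant of the same set bookkeeping (an added sketch): set comprehensions build the random sets directly, and the symmetric difference gives the numbers present in exactly one of them as a bonus.

import random

set1 = {random.randrange(0, 51) for _ in range(20)}
set2 = {random.randrange(0, 51) for _ in range(20)}
todos = set(range(51))
print("Presentes nos dois conjuntos:", set1 & set2)
print("Ausentes dos dois conjuntos:", todos - (set1 | set2))
print("Presentes em exatamente um deles:", set1 ^ set2)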
End of explanation import logging import random logging.basicConfig(level=logging.DEBUG, format='%(levelname)s - %(message)s') set1 = set() set2 = set() set3 = set() logging.info("Gerando os conjuntos set1, set2 e set3 com 20 números aleatórios (0-50)") for numero in range(20): set1.add(random.randrange(0,51)) # adicionando número de 0-50 set2.add(random.randrange(0,51)) set3.add(random.randrange(0,51)) logging.debug(f"set1{{}}: {set1}") logging.debug(f"set2{{}}: {set2}") logging.debug(f"set3{{}}: {set3}") logging.info(f"Números presentes nos 3 conjuntos: {set1&set2&set3}") logging.info(f"Números presentes apenas em set1: {set1 - (set2|set3)}") Explanation: Dados três conjuntos SET1, SET2 e SET3, cada qual contendo números aleatórios (de 0 a 50) verificar quais números estão presentes nos três conjuntos e quais números estão presentes apenas em SET1, ou seja, estes números não podem estar aparecendo em SET2 ou SET3. Ao final do programa apresentar todos esses dados. End of explanation import logging logging.basicConfig(level=logging.DEBUG, format='%(levelname)s - %(message)s') def menu(): print("="*50) print(" SISTEMA DE CONTROLE DE PRODUTOS DE INFORMÁTICA") print("="*50) print(" Opções: ") print(" [1] - Cadastrar") print(" [2] - Procurar por nome") print(" [3] - Procurar por tipo") print(" [4] - Excluir") print(" [5] - Exibir produtos") print(" [0] - Sair") print(f" Qtde de produtos: {len(produtos)}") print("="*50) def show(produto): print(f"{produto.get('nome')}\t{produto.get('qtde')}\t{produto.get('preco')}\t{produto.get('tipo')}") def showAll(produtos): print("NOME\tQTDE\tPRECO\tTIPO") produtos_ordenados = sorted(produtos, key=lambda produto:produto["nome"]) for produto in produtos_ordenados: show(produto) def cadastrar(): logging.info("Cadastrando produto de informática") nome = input("Nome do produto: ") qtde = int(input("Qtde: ")) preco = float(input("Preço: ")) tipo = input("Tipo: ") produto = { "nome":nome, "qtde":qtde, "preco":preco, "tipo":tipo } produtos.append(produto) def procurar_nome(): logging.info("Pesquisando por nome do produto") nome = input("Nome do produto a ser localizado: ").lower() encontrado = False for produto in produtos: if nome in produto.get("nome").lower(): encontrado = True show(produto) if encontrado==False: print("Produto não localizado") def procurar_tipo(): logging.info("Pesquisando por tipo do produto") tipo = input("Tipo do produto a ser localizado: ").lower() encontrado = False for produto in produtos: if tipo in produto.get("tipo").lower(): encontando=True show(produto) if encontrado==False: print("Tipo não localizado") def excluir(): logging.info("Excluindo produto") nome = input("Nome do produto a ser excluído: ").lower() posicao = 0 encontrado = -1 for produto in produtos: if nome == produto.get("nome").lower(): encontrado = posicao show(produto) posicao+=1 if encontrado==-1: print("Produto não localizado") else: produtos.pop(encontrado) def carregar(): logging.info("Carregando dados do arquivo produtos.dat") with open('produtos.dat','r') as arquivo: for linha in arquivo: lista_produto = linha.split(";") nome = lista_produto[0].strip() qtde = int(lista_produto[1].strip()) preco = float(lista_produto[2].strip()) tipo = lista_produto[3].strip() produto = { "nome":nome, "qtde":qtde, "preco":preco, "tipo":tipo } produtos.append(produto) def salvar(): with open('produtos.dat','w') as arquivo: for p in produtos: arquivo.write(f"{p.get('nome')};{p.get('qtde')};{p.get('preco')};{p.get('tipo')}\n") opcao = -1 produtos = [] carregar() while (opcao!=0): 
menu() opcao = int(input("Digite a opção desejada: ")) logging.debug(f"opcao: {opcao}") if opcao == 1: cadastrar() elif opcao == 2: procurar_nome() elif opcao == 3: procurar_tipo() elif opcao == 4: excluir() elif opcao == 5: showAll(produtos) else: if (opcao!=0): logging.warning("Opção Inválida") else: salvar() Explanation: Faça um programa para manipular um cadastro de produtos de informática. Para cada produto deverá ser registrado o nome, qtde em estoque, preço unitário e tipo. O programa deverá permitir operações de inclusão, exclusão, busca por nome, busca por tipo e exibição ordenada dos dados. End of explanation nome_arquivo = input("Arquivo: ") with open(nome_arquivo,"r") as arquivo: qtde_linhas = 0 for linha in arquivo: qtde_linhas+=1 print(f"Qtde de linhas de {nome_arquivo}: {qtde_linhas}") Explanation: Faça um programa que receba do usuário um arquivo texto e mostre na tela quantas linhas esse arquivo possui. End of explanation nome_arquivo = input("Arquivo: ") vogais = ["a","e","i","o","u","á","é","í","ó","ú"] with open(nome_arquivo,"r") as arquivo: qtde_vogais = 0 qtde_letras = 0 for linha in arquivo: for letra in linha: qtde_letras+=1 if letra in vogais: qtde_vogais+=1 print(f"Qtde de letras em {nome_arquivo}: {qtde_letras}") print(f"Qtde de vogais em {nome_arquivo}: {qtde_vogais}") Explanation: Faça um programa que receba do usuário um arquivo texto e mostre na tela quantas letras são vogais. End of explanation arquivo_nome = input("Arquivo: ") caractere = input("Caractere: ") qtde = 0 with open(arquivo_nome, 'r') as arquivo: for linha in arquivo: qtde = qtde + linha.count(caractere) print(qtde) arquivo_nome = input("Arquivo: ") caractere = input("Caractere: ") qtde = 0 with open(arquivo_nome, 'r') as arquivo: for linha in arquivo: for letra in linha: if (letra == caractere): qtde = qtde + 1 print(qtde) Explanation: Faça um programa que receba do usuário um arquivo texto e um caracter. Mostre na tela quantas vezes aquele caractere ocorre dentro do arquivo. End of explanation import string arquivo_nome = input("Arquivo: ") alfabeto = list(string.ascii_lowercase) qtdes = [] for contador in range(len(alfabeto)): qtdes.append(0) with open(arquivo_nome, 'r') as arquivo: for linha in arquivo: for letra in alfabeto: qtdes[alfabeto.index(letra)]+=linha.count(letra) print(alfabeto) print(qtdes) Explanation: Faça um programa que receba do usuário um arquivo texto e mostre na tela quantas vezes cada letra do alfabeto aparece dentro do arquivo. End of explanation def substituir_vogais (texto): resultado = texto.replace("a","*") resultado = resultado.replace("e","*") resultado = resultado.replace("i","*") resultado = resultado.replace("o","*") return resultado.replace("u","*") nome_arquivo = input("Arquivo: ") texto = "" with open(nome_arquivo,"r") as arquivo: for linha in arquivo: texto += substituir_vogais(linha) with open("resultado.txt","w") as arquivo2: arquivo2.write(texto) print(f"O conteúdo modificado do arquivo {nome_arquivo} encontra-se em resultado.txt") Explanation: Faça um programa que receba do usuário um arquivo texto. Crie outro arquivo texto contendo o texto do arquivo de entrada, mas com as vogais substituídas por *. 
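One limitation of the replace-based solution above is that it only touches lowercase unaccented vowels. As a hedged variant, not part of the original exercise code, str.translate can map lowercase, uppercase and accented vowels to * in a single pass, reusing nome_arquivo from the cell above.

# Sketch: map lower-case, upper-case and accented vowels to * in one pass.
vowels = "aeiouáéíóúAEIOUÁÉÍÓÚ"
table = str.maketrans({v: "*" for v in vowels})

with open(nome_arquivo, "r") as source, open("resultado.txt", "w") as target:
    for line in source:
        target.write(line.translate(table))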
End of explanation import re nome_arquivo = input("Arquivo: ") texto = "" with open(nome_arquivo,"r") as arquivo: for linha in arquivo: texto += linha texto = re.sub("[aeiou]","*",texto) with open("resultado.txt","w") as arquivo2: arquivo2.write(texto) print(f"O conteúdo modificado do arquivo {nome_arquivo} encontra-se em resultado.txt") Explanation: Expressões regulares em python: https://www.w3schools.com/python/python_regex.asp Reescreva o programa anterior para utilizar expressões regulares. End of explanation import re cpf = input("CPF: ") expCPF = "\d{3}[.]\d{3}[.]\d{3}[-]\d{2}" if re.match(expCPF, cpf): print("Formato Válido") else: print("Formato Inválido") Explanation: Verificar se a entrada do usuário é um CPF em formato válido utilizando expressões regulares. End of explanation import re expIP = "[0-255]{1,3}[.][0-255]{1,3}[.][0-255]{1,3}[.][0-255]{1,3}" ip = input("IP: ") while not re.match(expIP,ip): print("IP Inválido") ip = input("IP: ") else: print("IP Válido") Explanation: Faça um programa que peça um número IP válido. O programa não deve terminar enquanto o número IP for inválido. A solução deve utilizar expressão regular. End of explanation import re expData = "(0[1-9]|[12][0-9]|3[01])/(0[1-9]|1[012])/(19|20)\d{2}" data = input("Data: ") while not re.match(expData,data): print("Formato da Data Inválido") data = input("Data: ") else: print("Formato da Data Válido") Explanation: Faça um programa que peça uma data no formato (DD/MM/YYYY). O programa não deve terminar enquanto o formato da data for inválido. A solução deve utilizar expressão regular. End of explanation import re texto = "Abacaxi Limão Pera abacate laranja" letra = input("Letra: ") x = re.findall(r"\b["+letra.lower() + letra.upper() + "]\w+", texto) print(x) Explanation: Faça um programa que exiba todas as palavras que começam com uma determinada letra (maiúscula ou minúscula) de um texto qualquer. A solução deve utilizar expressão regular. End of explanation import logging import re logging.basicConfig(level=logging.DEBUG, format='%(levelname)s - %(message)s') expNome = "(\w|\s){3,40}" expQtde = "\d+" expPreco = "(\d+|\d+[.]\d{1,2})" expTipo = "(\w|\s){3,30}" def menu(): print("="*50) print(" SISTEMA DE CONTROLE DE PRODUTOS DE INFORMÁTICA") print("="*50) print(" Opções: ") print(" [1] - Cadastrar") print(" [2] - Procurar por nome") print(" [3] - Procurar por tipo") print(" [4] - Excluir") print(" [5] - Exibir produtos") print(" [0] - Sair") print(f" Qtde de produtos: {len(produtos)}") print("="*50) def show(produto): print(f"{produto.get('nome')}\t{produto.get('qtde')}\t{produto.get('preco')}\t{produto.get('tipo')}") def showAll(produtos): print("NOME\tQTDE\tPRECO\tTIPO") produtos_ordenados = sorted(produtos, key=lambda produto:produto["nome"]) for produto in produtos_ordenados: show(produto) def cadastrar(): logging.info("Cadastrando produto de informática") nome = input("Nome do produto: ") while not re.match(expNome,nome): print("Nome de produto inválido. Mínimo de 3 e máximo de 40 caracteres.") nome = input("Nome do produto: ") qtde = input("Qtde: ") while not re.match(expQtde,qtde): print("Qtde inválida. Valores devem ser >=0.") qtde = input("Qtde: ") qtde = int(qtde) preco = input("Preço: ") while not re.match(expPreco,preco): print("Preço inválido. Valores devem ser >=0") preco = input("Preço: ") preco = float(preco) tipo = input("Tipo: ") while not re.match(expTipo,tipo): print("Tipo inválido. 
Mínimo de 3 e máximo de 30 caracteres.") tipo = input("Tipo: ") produto = { "nome":nome, "qtde":qtde, "preco":preco, "tipo":tipo } produtos.append(produto) def procurar_nome(): logging.info("Pesquisando por nome do produto") nome = input("Nome do produto a ser localizado: ").lower() encontrado = False for produto in produtos: if nome in produto.get("nome").lower(): encontrado = True show(produto) if encontrado==False: print("Produto não localizado") def procurar_tipo(): logging.info("Pesquisando por tipo do produto") tipo = input("Tipo do produto a ser localizado: ").lower() encontrado = False for produto in produtos: if tipo in produto.get("tipo").lower(): encontando=True show(produto) if encontrado==False: print("Tipo não localizado") def excluir(): logging.info("Excluindo produto") nome = input("Nome do produto a ser excluído: ").lower() posicao = 0 encontrado = -1 for produto in produtos: if nome == produto.get("nome").lower(): encontrado = posicao show(produto) posicao+=1 if encontrado==-1: print("Produto não localizado") else: produtos.pop(encontrado) def carregar(): logging.info("Carregando dados do arquivo produtos.dat") try: with open('produtos.dat','r') as arquivo: for linha in arquivo: lista_produto = linha.split(";") nome = lista_produto[0].strip() qtde = int(lista_produto[1].strip()) preco = float(lista_produto[2].strip()) tipo = lista_produto[3].strip() produto = { "nome":nome, "qtde":qtde, "preco":preco, "tipo":tipo } produtos.append(produto) except FileNotFoundError: logging.error("Arquivo não encontrado!") logging.info("Criando arquivo produtos.dat") try: arquivo = open('produtos.dat','w') arquivo.close() logging.info("Arquivo produtos.dat criado com sucesso") except: logging.info("Erro ao criar o arquivo. Verifique com o administrador as permissões do sistema.") except: logging.error("Ocorreu um problema não previsto ao tentar abrir o arquivo produtos.dat!") else: logging.info("Arquivo produtos.dat carregado com sucesso!") finally: logging.debug("Finalizada a função carregar()") def salvar(): with open('produtos.dat','w') as arquivo: for p in produtos: arquivo.write(f"{p.get('nome')};{p.get('qtde')};{p.get('preco')};{p.get('tipo')}\n") opcao = -1 produtos = [] carregar() while (opcao!=0): menu() try: opcao = int(input("Digite a opção desejada: ")) except ValueError: logging.error("Opção Inválida. Digite um número de 0 a 5.") except: logging.error("Ocorreu um erro não previsto na escolha da opção. Tente novamente!") logging.debug(f"opcao: {opcao}") if opcao == 1: cadastrar() elif opcao == 2: procurar_nome() elif opcao == 3: procurar_tipo() elif opcao == 4: excluir() elif opcao == 5: showAll(produtos) else: if (opcao==0): salvar() Explanation: Refaça o programa de cadastro de produtos de informática para validar os dados de entrada utilizando expressões regulares. End of explanation try: resultado1 = 10 * (1/0) except ZeroDivisionError: print("Ocorreu um erro de divisão por zero, reveja os valores.") try: resultado2 = 4 + spam*3 except NameError: print("Ops... Alguma variável não foi declarada") try: resultado3 = '2' + 2 except TypeError: print("Existem tipos incompatíveis na expressão.") Explanation: Tratamento de Exceções com python: End of explanation import matplotlib as mpl import matplotlib.pyplot as plt fig, ax = plt.subplots() # Cria uma figura e os eixos x e y. ax.set_title('Clima') ax.set_xlabel('dia') ax.set_ylabel('temperatura') #ax.set_xticks(range(1,8,1)) ax.plot([1, 2, 3, 4, 5, 6, 7], [22, 21, 28, 25, 29, 30, 33], label='temperatura'); # Insere os dados. 
ax.plot([1, 2, 3, 4, 5, 6, 7], [60, 58, 40, 45, 40, 33, 25], label='umidade'); # Insere os dados. ax.scatter([1, 2, 3, 4, 5, 6, 7], [22, 21, 28, 25, 29, 30, 33], [250, 50, 150, 150, 200, 250, 120], label='uv', color='red') ax.legend(); Explanation: Gráfico com Python End of explanation import string arquivo_nome = input("Arquivo: ") alfabeto = list(string.ascii_lowercase) qtdes = [] for contador in range(len(alfabeto)): qtdes.append(0) with open(arquivo_nome, 'r') as arquivo: for linha in arquivo: for letra in alfabeto: qtdes[alfabeto.index(letra)]+=linha.count(letra) print(alfabeto) print(qtdes) Explanation: Faça um programa que gere 50 números aleatórios de 0-50 e plote um gráfico de linha apresentando os 50 valores. Refaça o programa anterior em que o usuário informa um arquivo texto e mostra na tela quantas vezes cada letra do alfabeto aparece dentro do arquivo. Na nova versão os dados devem ser exibidos em um gráfico de barra. End of explanation
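The cell above only prints the letter counts and never draws the requested charts. A minimal sketch of both plots, reusing the alfabeto and qtdes lists from that cell together with matplotlib (already used earlier in this notebook), could look like the following; treat it as an illustration rather than the original author's solution.

# Sketch: line chart of 50 random numbers (0-50) and bar chart of the letter counts above.
import random
import matplotlib.pyplot as plt

numeros = [random.randrange(0, 51) for _ in range(50)]
fig, ax = plt.subplots()
ax.plot(range(1, 51), numeros)           # line chart of the 50 random values
ax.set_xlabel("position")
ax.set_ylabel("value")

fig2, ax2 = plt.subplots()
ax2.bar(alfabeto, qtdes)                 # bar chart of the per-letter counts (alfabeto, qtdes)
ax2.set_xlabel("letter")
ax2.set_ylabel("occurrences")
plt.show()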
2,491
Given the following text description, write Python code to implement the functionality described below step by step Description: <!-- 17/11 Introducción a la de programación orientada a objetos. Uso de objetos dados. --> Programación Orientada a Objetos (POO) ¿Qué es un objeto? Comencemos por definir ¿qué es un objeto?. Según la RAE, un <a href="http Step1: Ahora, si siempre voy a tener que definir esas características de la mesa para poder usarla, lo más cómodo es definir el método __init__ que sirve para inicializar el objeto Step2: Como vemos, el método __init__ (aunque en realidad pasará lo mismo con casi todos los métodos de la clase), recibe como primer parámetro uno que se llama self. En realidad el nombre no tiene por qué ser ese, pero se suele usar por convención. <br> La traducción de self es uno mismo, y con eso quieren decir que en el primer parámetro que Python siempre será el mismo objeto (la instancia) del cual están ejecutando el método. Si bien self aparece entre los parámetros formales, no se ve entre los parámetros actuales, y eso es porque lo inserta el interprete automáticamente. No tiene que hacerlo uno mismo. <br> Así como este objeto esta compuesto por tres objetos estándar de Python (un int y dos str), también podría estar compuesto por objetos creados por nosotros Step3: Y como dijimos antes, una objeto no sólo agrupa sus características, sino también los métodos que nos permiten trabajar con él, como por ejemplo, podría ser calcular su superficie de apoyo Step4: En este caso, no sólo es importante ver cómo se hace para invocar un método de un objeto (que es poniendo el nombre del objeto, un punto y el nombre del método seguido por todos sus parámetros entre paréntesis) sino también cómo se puede conjugar el uso de los objetos. <br> En la función obtener_superficie_de_apoyo de la clase Mesa podemos ver que la única responsabilidad que tiene ese objeto es redirigir la consulta que se le hizo al objeto tabla. Es decir, podía preguntárselo a cualquiera de sus patas o a la tabla, pero sabía a quién tenía que preguntarselo. Y no importa si es una tabla redonda o rectangular, las dos clases saben cómo responder la pregunta de calcular_superficie. Python def obtener_superficie_de_apoyo(self)
Python Code: class Mesa(object): cantidad_de_patas = None color = None material = None mi_mesa = Mesa() mi_mesa.cantidad_de_patas = 4 mi_mesa.color = 'Marrón' mi_mesa.material = 'Madera' print 'Tendo una mesa de {0.cantidad_de_patas} patas de color {0.color} y esta hecha de {0.material}'.format(mi_mesa) Explanation: <!-- 17/11 Introducción a la de programación orientada a objetos. Uso de objetos dados. --> Programación Orientada a Objetos (POO) ¿Qué es un objeto? Comencemos por definir ¿qué es un objeto?. Según la RAE, un <a href="http://dle.rae.es/srv/fetch?id=QmweHtN">objeto</a> es una cosa. Y si vamos a la definición de <a href="http://dle.rae.es/srv/fetch?id=B3yTydM">cosa</a> de la RAE, veremos que dice Lo que tiene entidad, ya sea corporal o espiritual, natural o artificial, concreta, abstracta o virtual. <br> O sea, a todo lo que nos rodea que tiene entidad, se lo puede considerar un objeto. Y cada uno de esos objetos tienen distintas características, como pueden ser el color, tamaño, peso, etc. <br> Y a su vez, la forma que tendremos para interactuar con esos objetos, o lo que nos permite hacer cada uno de ellos, será distinto. <br> POO La programacion orientada a objetos es un <a href="https://es.wikipedia.org/wiki/Paradigma_de_programaci%C3%B3n">paradigma de programación</a> que se basa en el concepto de objetos para representar la realidad. Es imporante destacar que es una forma de representar la realidad para poder trabajar con esas abstracciones y hacer un algoritmo que tenga un objetivo en particular.<br> La POO junta en una misma estructura las variables que sirven para describir las carácterísticas (variables) de aquello que se esta modelando, junto con aquellas que determinan el estado en que se encuentra (también variables) y las funciones que le dan un comportamiento a dicha estructura. <br> Por ejemplo, si queremos modelar un curso de una materia, podemos crear distintos objetos, como pueden ser los alumnos, los profesores y el curso que los contiene a todos ellos. Los alumnos tendrán ciertas variables que los distingan entre sí, como pueden ser el padrón, nombre y apellido. Y otras que definan el estado en que se encuentra; como las notas de parciales, trabajos prácticos y coloquios, que determinan si el alumno: Recurso, Esta en condiciones de rendir coloquio o Aprobó. Y las funciones que definen su comportamiento pueden ser rendir exámen o entregar trabajo práctico<br> A todas esas variables que componen el objeto se las llaman atributos y las funciones que determinan su comportamiento se las llama métodos. <br> Clases y objetos Así como en la programación estructurada tenemos el concepto de tipo de dato y valores, en objetos tenemos los conceptos de clases, que es algo abstracto que define las características y comportamientos de un objeto (como eran los tipos de datos), y objetos, que son una instancia de esa clase. <br> Por ejemplo, todos sabemos a qué nos referimos cuando hablamos de una <a href="http://dle.rae.es/srv/fetch?id=P1le2lc">mesa</a>, y si vamos a la definición de la RAE encontraremos: Mueble compuesto de un tablero horizontal liso y sostenido a la altura conveniente, generalmente por una o varias patas, para diferentes usos, como escribir, comer, etc. Eso, vendría a ser una clase, es sólo la idea abstracta. <br> Pero después, la mesa que puede tener cada uno en su casa es distinta, y esas serían las distintas instancias de la clase Mesa. A su vez, cada mesa es un objeto distinto, por más que sean todas de la misma clase. 
Y cada uno de esos objetos, puede estar compuesto por otros objetos, como pueden ser una tabla y una o varias patas. POO en python En realidad, en Python todo es un objeto. Los strings, por ejemplo, son objetos de la clase str. Y tienen los métodos upper, capitalize, center, expandtabs, etc. <br> Para crear un objeto de una en particular lo que tenemos que hacer es invocar a la clase poniendo su nombre seguido de paréntesis. <br> Por ejemplo: Python mi_string = str() mi_lista = list() Y para invocar uno de sus métodos sólo es necesario usar una variable la clase en cuestión, poner un punto, y el nombre de un método seguido por paréntesis: Python en_mayusculas = mi_string.upper() Creando nuestras propias clases Pero más allá de las clases estándares que nos provee Python, también podemos crear nuestras propias clases. Y para eso usamos la palabra reservada class. <br> Python class Mesa(object): pass Ahora, esa mesa puede tener distintas características, como pueden ser la cantidad de patas, el color o el material del que están hechas: Python class Mesa(object): cantidad_de_patas = None color = None material = None Entonces, cuando quiera usar esa idea abstracta voy a tener que definir esas características: End of explanation class Mesa(object): cantidad_de_patas = None color = None material = None def __init__(self, patas, color, material): self.cantidad_de_patas = patas self.color = color self.material = material mi_mesa = Mesa(4, 'Marrón', 'Madera') print 'Tendo una mesa de {0.cantidad_de_patas} patas de color {0.color} y esta hecha de {0.material}'.format(mi_mesa) Explanation: Ahora, si siempre voy a tener que definir esas características de la mesa para poder usarla, lo más cómodo es definir el método __init__ que sirve para inicializar el objeto: End of explanation class TablaRectangular(object): base = None altura = None def __init__(self, base, altura): self.base = base self.altura = altura class TablaRedonda(object): radio = None def __init__(self, radio): self.radio = radio class Pata(object): altura = None def __init__(self, altura): self.altura = altura class Mesa(object): tabla = None patas = None def __init__(self, tabla, patas): self.tabla = tabla self.patas = patas tabla = TablaRectangular(100, 150) pata_1 = Pata(90) pata_2 = Pata(90) pata_3 = Pata(90) pata_4 = Pata(90) mi_mesa = Mesa(tabla, [pata_1, pata_2, pata_3, pata_4]) Explanation: Como vemos, el método __init__ (aunque en realidad pasará lo mismo con casi todos los métodos de la clase), recibe como primer parámetro uno que se llama self. En realidad el nombre no tiene por qué ser ese, pero se suele usar por convención. <br> La traducción de self es uno mismo, y con eso quieren decir que en el primer parámetro que Python siempre será el mismo objeto (la instancia) del cual están ejecutando el método. Si bien self aparece entre los parámetros formales, no se ve entre los parámetros actuales, y eso es porque lo inserta el interprete automáticamente. No tiene que hacerlo uno mismo. 
<br> Así como este objeto esta compuesto por tres objetos estándar de Python (un int y dos str), también podría estar compuesto por objetos creados por nosotros: End of explanation import math class TablaRectangular(object): base = None altura = None def __init__(self, base, altura): self.base = base self.altura = altura def calcular_superficie(self): return self.base * self.altura class TablaRedonda(object): radio = None def __init__(self, radio): self.radio = radio def calcular_superficie(self): return math.pi * self.radio**2 class Pata(object): altura = None def __init__(self, altura): self.altura = altura class Mesa(object): tabla = None patas = None def __init__(self, tabla, patas): self.tabla = tabla self.patas = patas def obtener_superficie_de_apoyo(self): return self.tabla.calcular_superficie() tabla = TablaRectangular(100, 150) pata_1 = Pata(90) pata_2 = Pata(90) pata_3 = Pata(90) pata_4 = Pata(90) mi_mesa = Mesa(tabla, [pata_1, pata_2, pata_3, pata_4]) sup = mi_mesa.obtener_superficie_de_apoyo() print 'La superficie de la mesa es {} cm2'.format(sup) Explanation: Y como dijimos antes, una objeto no sólo agrupa sus características, sino también los métodos que nos permiten trabajar con él, como por ejemplo, podría ser calcular su superficie de apoyo: End of explanation class Alumno(object): def __init__(self, padron, nombre, apellido): self.padron = padron self.nombre = nombre self.apellido = apellido self.parciales = [] self.tps = [] self.coloquios = [] def rendir_parcial(self, nota): self.parciales.append(nota) def entregar_trabajo_practico(self, nota): self.tps.append(nota) def rendir_coloquio(self, nota): self.coloquios.append(nota) def aprobo_algun_parcial(self): aprobo_alguno = False for nota in self.parciales: if nota >= 4: aprobo_alguno = True return aprobo_alguno def aprobo_todos_los_tp(self): aprobo_todos = True for nota in self.tps: if nota < 4: aprobo_todos = False return aprobo_todos def puede_rendir_coloquio(self): return self.aprobo_algun_parcial() and self.aprobo_todos_los_tp() alum = Alumno(12345, 'Juan', 'Perez') alum.rendir_parcial(2) alum.entregar_trabajo_practico(7) alum.entregar_trabajo_practico(9) if alum.puede_rendir_coloquio(): print 'El alumno puede rendir coloquio' else: print 'El alumno no puede rendor coloquio' print '¿Y si después rinde el parcial y se saca un 7?' alum.rendir_parcial(7) if alum.puede_rendir_coloquio(): print 'El alumno puede rendir coloquio' else: print 'El alumno no puede rendor coloquio' Explanation: En este caso, no sólo es importante ver cómo se hace para invocar un método de un objeto (que es poniendo el nombre del objeto, un punto y el nombre del método seguido por todos sus parámetros entre paréntesis) sino también cómo se puede conjugar el uso de los objetos. <br> En la función obtener_superficie_de_apoyo de la clase Mesa podemos ver que la única responsabilidad que tiene ese objeto es redirigir la consulta que se le hizo al objeto tabla. Es decir, podía preguntárselo a cualquiera de sus patas o a la tabla, pero sabía a quién tenía que preguntarselo. Y no importa si es una tabla redonda o rectangular, las dos clases saben cómo responder la pregunta de calcular_superficie. 
Python def obtener_superficie_de_apoyo(self): return self.tabla.calcular_superficie() Otro ejemplo Volviendo un poco al ejemplo planteado antes de querer modelar una materia, podríamos implementar los alumnos de la siguiente manera: ```Python class Alumno(object): def __init__(self, padron, nombre, apellido): self.padron = padron self.nombre = nombre self.apellido = apellido self.parciales = [] self.tps = [] self.coloquios = [] def rendir_parcial(self, nota): self.parciales.append(nota) def entregar_trabajo_practico(self, nota): self.tps.append(nota) def rendir_coloquio(self, nota): self.coloquios.append(nota) def aprobo_algun_parcial(self): aprobo_alguno = False for nota in self.parciales: if nota &gt;= 4: aprobo_alguno = True return aprobo_alguno def aprobo_todos_los_tp(self): aprobo_todos = True for nota in self.parciales: if nota &lt; 4: aprobo_todos = False return aprobo_todos def puede_rendir_coloquio(self): return self.aprobo_algun_parcial() and self.aprobo_todos_los_tp() ``` Después, para usa estas variables sólo es necesario definir una variable de la clase Alumno pasandole los parametros necesarios para poder inicializarlo: ```Python alum = Alumno(12345, 'Juan', 'Perez') alum.rendir_parcial(2) alum.entregar_trabajo_practico(7) alum.rendir_parcial(7) alum.entregar_trabajo_practico(9) if alum.puede_rendir_coloquio(): print 'El alumno puede rendir coloquio' else: print 'El alumno no puede rendor coloquio' ``` End of explanation
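The text points out that Mesa does not care whether its board is rectangular or round, yet every example above instantiates TablaRectangular. A short sketch reusing the classes defined in this notebook shows the same call working with TablaRedonda; the radius value is arbitrary.

# Sketch: the same Mesa code works unchanged with a round board (polymorphism).
tabla_redonda = TablaRedonda(40)              # arbitrary radius in cm
patas = [Pata(90) for _ in range(4)]
mesa_redonda = Mesa(tabla_redonda, patas)

sup = mesa_redonda.obtener_superficie_de_apoyo()   # delegates to TablaRedonda.calcular_superficie
print('Round table surface: {:.2f} cm2'.format(sup))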
2,492
Given the following text description, write Python code to implement the functionality described below step by step Description: Problem 1 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. Step1: Problem 2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be Step2: Problem 3 The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143 ? Step3: Problem 4 A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two 3-digit numbers. Step4: Problem 5 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20? Step5: Problem 6 The sum of the squares of the first ten natural numbers is, 12 + 22 + ... + 102 = 385 The square of the sum of the first ten natural numbers is, (1 + 2 + ... + 10)2 = 552 = 3025 Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640. Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum. Step6: Problem 7 By listing the first six prime numbers Step7: Problem 8 The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832. 73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450 Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product? Step8: Problem 9 A Pythagorean triplet is a set of three natural numbers, a < b < c, for which, a2 + b2 = c2 For example, 32 + 42 = 9 + 16 = 25 = 52. There exists exactly one Pythagorean triplet for which a + b + c = 1000. Find the product abc. Step9: Problem 10 The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17. Find the sum of all the primes below two million.
Python Code: #Set up the sum of multiples and reset to zero multiples_sum = 0 #Create the function that divides all the numbers from 1-1000 by 3 or 5 and add them for i in range(1, 1000): if (i % 3 == 0 or i % 5 == 0): multiples_sum = multiples_sum + i #Print results print (multiples_sum) Explanation: Problem 1 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. End of explanation #Create empty dictionary to populate wth the sequence of fibonacci terms sum_even_valued = {} #Define fibonnaci function that will find the terms of the fibonacci series def fibonacci(n): sum_even_valued[n] = sum_even_valued.get(n, 0) or (n <= 1 and 1 or fiba(n-1) + fiba(n-2)) return sum_even_valued[n] #Reset n and x to find the fibonaccy terms and add them if they are smaller than 4000000 n = 0 x = 0 #Add the fibonaccy terms that are lower than 4000000 while fibonacci(x) <= 4000000: if not fibonacci(x) % 2: n = n + fibonacci(x) x=x+1 #Print results print(n) Explanation: Problem 2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. End of explanation #Set the max number of he range of numbers that contain the prie numbers num=600851475143 #Define a break for the series of numbers to be found starting by two #These are the divisors i=2 #Define the function that checks if the module of the number is zero #Check if the number is for k in range(0,num): if i >= num: break elif num % i == 0: # Check if the number is evenly divisible by i num = num / i else: i= i + 1 #Print the result print ("biggest prime number is: "+str(num)) Explanation: Problem 3 The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143 ? End of explanation #Start the number n = 0 #check all the multiplications made of numbers of three digits and stop when read in both directions is equal for a in range(999, 100, -1): for b in range(a, 100, -1): x = a * b if x > n: s = str(a * b) if s == s[::-1]: n = a * b #Print results print(n) Explanation: Problem 4 A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two 3-digit numbers. End of explanation #Define the functions that finds the samallest number def smallest(n): for i in range(n, factorial(n) + 1, n): if multiple(i, n): return i return -1 #Define the number that is a multiple of the range of numbers def multiple(x, n): for i in range(1, n): if x % i != 0: return False return True #Define the factoral function def factorial(n): if n > 1: return n * factorial(n - 1) elif n >= 0: return 1 else: return -1 print (smallest(10)) print (smallest(20)) Explanation: Problem 5 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20? 
End of explanation #Define the range of numbers we are going to check r = range(1, 101) a = sum(r) #Multipy the sum of the number and subtract the suqae of the sum of the numbers print (a * a - sum(i*i for i in r)) Explanation: Problem 6 The sum of the squares of the first ten natural numbers is, 12 + 22 + ... + 102 = 385 The square of the sum of the first ten natural numbers is, (1 + 2 + ... + 10)2 = 552 = 3025 Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640. Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum. End of explanation #Define the function that finds the primer numbers def primes(): D = {} q = 2 while 1: if q not in D: yield q D[q*q] = [q] else: for p in D[q]: D.setdefault(p+q,[]).append(p) del D[q] q += 1 #Define the function that counts the position of the number def nth_prime(n): for i, prime in enumerate(primes()): if i == n - 1: return prime print(nth_prime(10001)) Explanation: Problem 7 By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.What is the 10 001st prime number? End of explanation st = '7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450' size = 13 max(reduce(mul,map(int,st[i:i + size])) for i in range(len(st) - size)) Explanation: Problem 8 The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832. 73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450 Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. 
What is the value of this product? End of explanation #check all the numbers from 1 to 1000 that fulfill the a+b+c = 1000 condition #if the lowest number fulfills the condition check the product of the squares for a in range(1, 1000): for b in range(a, 1000): c = 1000 - a - b if c > 0: if c*c == a*a + b*b: print (a*b*c) break Explanation: Problem 9 A Pythagorean triplet is a set of three natural numbers, a < b < c, for which, a2 + b2 = c2 For example, 32 + 42 = 9 + 16 = 25 = 52. There exists exactly one Pythagorean triplet for which a + b + c = 1000. Find the product abc. End of explanation #Define the number that is prime def prime(num): if num > 2 and num % 2 == 0: return False else: for i in range(3, int(math.sqrt(num)) + 1, 2): if num % i == 0: return False return True #Define a function that is adding up to the limit def adding_up(limit): sum = 0 for i in range(2, limit): if prime(i): sum += i return sum print(adding_up(2000000)) Explanation: Problem 10 The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17. Find the sum of all the primes below two million. End of explanation
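Two cautions about the Problem 10 cell above: prime() calls math.sqrt, so it appears to rely on an import math that is not shown in that cell, and trial division is slow for a two-million limit. As a hedged alternative, a sieve of Eratosthenes computes the same sum much faster.

# Alternative sketch: sum the primes below two million with a sieve of Eratosthenes.
def sum_primes_below(limit):
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]                 # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # mark every multiple of i from i*i upwards as composite
            is_prime[i*i::i] = [False] * len(is_prime[i*i::i])
    return sum(i for i, prime in enumerate(is_prime) if prime)

print(sum_primes_below(2000000))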
2,493
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute source power using DICS beamfomer Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ filter from single-trial activity to estimate source power for two frequencies of interest. References .. [1] Gross et al. Dynamic imaging of coherent sources Step1: Read raw data
Python Code: # Author: Roman Goj <[email protected]> # Denis Engemann <[email protected]> # # License: BSD (3-clause) import mne from mne.datasets import sample from mne.time_frequency import csd_epochs from mne.beamformer import dics_source_power print(__doc__) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' subjects_dir = data_path + '/subjects' Explanation: Compute source power using DICS beamfomer Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ filter from single-trial activity to estimate source power for two frequencies of interest. References .. [1] Gross et al. Dynamic imaging of coherent sources: Studying neural interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699 End of explanation raw = mne.io.read_raw_fif(raw_fname) raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel # Set picks picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False, stim=False, exclude='bads') # Read epochs event_id, tmin, tmax = 1, -0.2, 0.5 events = mne.read_events(event_fname) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=(None, 0), preload=True, reject=dict(grad=4000e-13, mag=4e-12)) evoked = epochs.average() # Read forward operator forward = mne.read_forward_solution(fname_fwd) # Computing the data and noise cross-spectral density matrices # The time-frequency window was chosen on the basis of spectrograms from # example time_frequency/plot_time_frequency.py # As fsum is False csd_epochs returns a list of CrossSpectralDensity # instances than can then be passed to dics_source_power data_csds = csd_epochs(epochs, mode='multitaper', tmin=0.04, tmax=0.15, fmin=15, fmax=30, fsum=False) noise_csds = csd_epochs(epochs, mode='multitaper', tmin=-0.11, tmax=-0.001, fmin=15, fmax=30, fsum=False) # Compute DICS spatial filter and estimate source power stc = dics_source_power(epochs.info, forward, noise_csds, data_csds) for i, csd in enumerate(data_csds): message = 'DICS source power at %0.1f Hz' % csd.freqs[0] brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir, time_label=message, figure=i) brain.set_data_time_index(i) brain.show_view('lateral') # Uncomment line below to save images # brain.save_image('DICS_source_power_freq_%d.png' % csd.freqs[0]) Explanation: Read raw data End of explanation
2,494
Given the following text description, write Python code to implement the functionality described below step by step Description: Create a training set, a test set and a set to predict for Step1: Inspect the features, I know these features (at leasr spectral indices) are correlated but also have high variance, I could pick my favorite features and use those instead
Python Code: features=list(hst3d.columns) features.remove('name') Explanation: Create a training set, a test set and a set to predict for End of explanation import seaborn as sns #plt.xscale('log') sns.pairplot(spex[features], hue=None) good_features=['H_2O-1/J-Cont', 'CH_4/H-Cont', 'H_2O-2/J-Cont'] from sklearn.decomposition import PCA pca = PCA(n_components=2, svd_solver='full') pca.fit(spex[good_features].values) spex_pcaed=pca.transform(spex[good_features].values) proj_sample=pca.transform(hst3d[good_features].values) colors=an.color_from_spts(spex.spt.values, cmap='viridis') plt.scatter(proj_sample[:,0],proj_sample[:,1], alpha=0.6,color='k') plt.scatter(spex_pcaed[:,0], spex_pcaed[:, 1], color=colors) plt.xlabel('axis-1', fontsize=18) plt.ylabel('axis-2', fontsize=18) plt.xlim([-1.5, 1.5]) plt.ylim([-.3, 1.5]) sns.distplot(spex.spt) Explanation: Inspect the features, I know these features (at leasr spectral indices) are correlated but also have high variance, I could pick my favorite features and use those instead End of explanation
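The notebook's stated goal includes a training set and a test set, but the cells above only project and plot. A minimal sketch with scikit-learn's train_test_split, using the spex table, good_features and the spt column that already appear above, might look like this; the 80/20 split and the random seed are assumptions, not values from the original notebook.

# Sketch: split the labelled SpeX sample into training and test sets.
from sklearn.model_selection import train_test_split

X = spex[good_features].values   # same feature matrix used for the PCA above
y = spex.spt.values              # spectral types, used here as labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)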
2,495
Given the following text description, write Python code to implement the functionality described below step by step Description: This code is taken from the TensorFlow tutorial expert example. The purpose here is to create an MNIST classifier. I tried to remove all the magic numbers in the example. Step1: Below, we build a simple, fully connected neural network and see that it accurately classifies about 91% of the handwritten digits. Step2: Now, we train a deep neural network. This ANN has the following layers
Python Code: import tensorflow.examples.tutorials.mnist.input_data as id mnist = id.read_data_sets('MNIST_data', one_hot=True) import tensorflow as tf sess = tf.InteractiveSession() Explanation: This code is taken from the TensorFlow tutorial expert example. The purpose here is to create an MNIST classifier. I tried to remove all the magic numbers in the example. End of explanation x = tf.placeholder("float", shape=[None, 784]) y_ = tf.placeholder("float", shape=[None, 10]) #inputs and outputs W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) # weights and biases are defined, all initialized to zero sess.run(tf.initialize_all_variables()) y = tf.nn.softmax(tf.matmul(x, W) + b) # activation function (softmax as opposed to logistic) cross_entropy = -tf.reduce_sum(y_*tf.log(y)) # this isn't the cross-entropy function I saw in Nielsen's overview, but seems to be standard. # perhaps try this with nielsen's as well train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) # automatically computes derivatives # learning rate is 0.01 epochs = 1000 for i in range(epochs): batch = mnist.train.next_batch(50) # training with batches of 50 from the training data train_step.run(feed_dict={x: batch[0], y_:batch[1]}) # placeholders must always be fed correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) # argmax returns the index of the largest value, i.e. the predicted number accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})) # this is the accuracy on the test data after training # eval on a tensor is the same as passing the tensor to sess.run Explanation: Below, we build a simple, fully connected neural network and see that it accurately classifies about 91% of the handwritten digits. End of explanation # i.e. 
no negatives image_height = 28 image_width = 28 conv1_filters = 50 conv2_filters = 70 filter_size = 5 output_length = 10 batch_size = 50 fully_connected_length = int((image_height + filter_size - 1) * (image_width + filter_size - 1)) def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) # all bias variables initialized to 0.1 def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) # this is a convolution layer def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') W_conv1 = weight_variable([filter_size, filter_size, 1, conv1_filters]) b_conv1 = bias_variable([conv1_filters]) x_image = tf.reshape(x, [-1,image_height,image_width,1]) h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) h_pool1 = max_pool_2x2(h_conv1) W_fc1 = weight_variable([int(image_height/4 * image_width/4 * conv2_filters), fully_connected_length]) b_fc1 = bias_variable([fully_connected_length]) W_conv2 = weight_variable([filter_size, filter_size, conv1_filters, conv2_filters]) b_conv2 = bias_variable([conv2_filters]) h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_2x2(h_conv2) h_pool2_flat = tf.reshape(h_pool2, [-1, int(image_height/4*image_width/4*conv2_filters)]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) keep_prob = tf.placeholder("float") h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) W_fc2 = weight_variable([fully_connected_length, output_length]) b_fc2 = bias_variable([output_length]) y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2) cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv)) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) sess.run(tf.initialize_all_variables()) for i in range(1001): batch = mnist.train.next_batch(batch_size) if i%100 == 0: train_accuracy = accuracy.eval(feed_dict={ x:batch[0], y_: batch[1], keep_prob: 1.0}) print("step %d, training accuracy %g"%(i, train_accuracy)) train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) print("test accuracy %g"%accuracy.eval(feed_dict={ x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})) Explanation: Now, we train a deep neural network. This ANN has the following layers: 1. Convolution layer (with pooling) 2. Convolution layer (with pooling) 3. Dense layer (with dropout) After 1000 iterations (i.e. batches), we achieve 96.5% accuracy. End of explanation
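A small clarification of the reshape used above: each 2x2 max-pool halves both spatial dimensions, so two pooling layers take the 28x28 input down to 7x7, which is where the image_height/4 * image_width/4 * conv2_filters expression comes from. A quick check with the variables defined above:

# Why image_height/4 and image_width/4: two 2x2 max-pools halve each dimension twice.
pooled_height = image_height // 2 // 2   # 28 -> 14 -> 7
pooled_width = image_width // 2 // 2     # 28 -> 14 -> 7
flat_length = pooled_height * pooled_width * conv2_filters
print(pooled_height, pooled_width, flat_length)   # 7, 7, 7 * 7 * conv2_filters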
2,496
Given the following text description, write Python code to implement the functionality described below step by step Description: Weather and Motor Vehicle Collisions Step1: Download weather data Step2: Cleaning the weather dataset Convert weather DateUTC to local time Step3: Merge weather and NYPD MVC datasets Step4: Make some nice data analysis Step5: Now lets try to find out if there are any condition that causes more incidents than others. We do this by plotting out heatmaps to get an idea of the distributions in the NYC area Step6: Finding the ratio between conditions that resulted in an incident. Borough level Step7: Let's try to look at zip codes in Brooklyn only
Python Code: import pandas as pd import numpy as np import datetime from datetime import date from dateutil.rrule import rrule, DAILY from __future__ import division import geoplotlib as glp from geoplotlib.utils import BoundingBox, DataAccessObject pd.set_option('display.max_columns', None) %matplotlib inline Explanation: Weather and Motor Vehicle Collisions End of explanation start_date = date(2012, 7, 1) end_date = date(2016, 2, 29) # data = pd.DataFrame() frames = [] url_template = 'https://www.wunderground.com/history/airport/KJFK/%s/%s/%s/DailyHistory.html?req_city=New+York&req_state=NY&req_statename=New+York&reqdb.zip=10001&reqdb.magic=4&reqdb.wmo=99999&format=1.csv' month = "" for dt in rrule(DAILY, dtstart=start_date, until=end_date): if (month != dt.strftime("%m")): month = dt.strftime("%m") print 'Downloading to memory: ' + dt.strftime("%Y-%m") frames.append(pd.read_csv(url_template % (dt.strftime("%Y"),dt.strftime("%m"), dt.strftime("%d")))) print "Saving data to csv..." data = pd.concat(frames) data.to_csv('weather_data_nyc_kjfk.csv', sep=',') Explanation: Download weather data End of explanation from datetime import datetime from dateutil import tz weather = pd.read_csv('datasets/weather_data_nyc_kjfk_clean.csv') def UTCtoActual(utcDate): from_zone = tz.gettz('UTC') to_zone = tz.gettz('America/New_York') utc = datetime.strptime(utcDate.DateUTC, '%Y-%m-%d %H:%M:%S')\ .replace(tzinfo=from_zone)\ .astimezone(to_zone) s = pd.Series([utc.year, utc.month, utc.day, utc.hour]) s.columns = ['Year', 'Month', 'Day', 'Hour'] return s #weather['DateActual'] = weather.DateUTC.map() weather[['Year', 'Month', 'Day', 'Hour']] = weather.apply(UTCtoActual, axis=1) weather.to_csv('datasets/weather_data_nyc_kjfk_clean2.csv') Explanation: Cleaning the weather dataset Convert weather DateUTC to local time End of explanation incidents = pd.read_csv('datasets/NYPD_Motor_Vehicle_Collisions.csv') weather = pd.read_csv('datasets/weather_data_nyc_kjfk_clean2.csv') weather.head(1) weather[(weather.Year == 2015) & (weather.Month == 11) & (weather.Day == 27)] features0 = ['Conditions', 'TemperatureC'] features = ['Conditions', \ 'Precipitationmm', \ 'TemperatureC', 'VisibilityKm'] def lookup_weather2(year, month, day, hour): w = weather[(weather.Year == year) & (weather.Month == month) & (weather.Day == day) & (weather.Hour == hour)] return w def lookup_weather(date, time): month = int(date.split('/')[0]) day = int(date.split('/')[1]) year = int(date.split('/')[2]) hour = int(time.split(':')[0]) d = lookup_weather2(year, month, day, hour).head(1) if (d.empty): dt_back = datetime.datetime(year, month, day, hour) - datetime.timedelta(hours=1) dt_forward = datetime.datetime(year, month, day, hour) + datetime.timedelta(hours=1) d_back = lookup_weather2(dt_back.year, dt_back.month, dt_back.day, dt_back.hour) if (not d_back.empty): return d_back d_forward = lookup_weather2(dt_forward.year, dt_forward.month, dt_forward.day, dt_forward.hour) if (not d_forward.empty): return d_forward return d def merge_weather(incident): date = incident.DATE time = incident.TIME #print "0" w = lookup_weather(date, time) #[unnamed, condition, dateUTC, Dew, Events, Gust, Humidity,Precipitationmm,Sea_Level_PressurehPa, TemperatureC] = w.values[0] #print "1" try: #print "2" #print w con = "-" temp = "-" rainmm = "-" viskm = "-" #print "2.5" if (not pd.isnull(w['Conditions'].iloc[0])): con = w['Conditions'].iloc[0] if (not pd.isnull(w['TemperatureC'].iloc[0])): temp = w['TemperatureC'].iloc[0] if (not 
pd.isnull(w['Precipitationmm'].iloc[0])): rainmm = w['Precipitationmm'].iloc[0] if (not pd.isnull(w['VisibilityKm'].iloc[0])): viskm = w['VisibilityKm'].iloc[0] #print 'con %s, temp %s, rainmm %s, viskm %s' % (con, temp, rainmm, viskm) #print "2.75" s = pd.Series([con, rainmm, temp, viskm]) #print "3" #print str(len(w.values[0])) #s = pd.Series(w.values[0]) #s = pd.Series([w['Conditions'].iloc[0], w['Dew PointC'].iloc[0], w['Gust SpeedKm/h'].iloc[0]]) #s.columns = features return s except: #print "4" print date + "x" + time s = pd.Series([None,None,None,None]) #s = pd.Series(["1","2","3","4","5","6","7","8","9"]) #s = pd.Series([]) #s.columns = features return s #lookup_weather2(2016, 2, 14, 7) #lookup_weather('03/14/2016', '3:27').values[0] #[unnamed, condition, dateUTC, Dew, Events, Gust, Humidity,Precipitationmm,Sea_Level_PressurehPa, TemperatureC] = lookup_weather('01/27/2016', '3:27').values[0] print "Applying weather data to incidents..." incidents[features] = incidents[incidents.DATE.str.split('/').str.get(2) != '2016'].apply(merge_weather, axis=1) print "Saving weather in-riched incident data..." incidents.to_csv('datasets/NYPD_Motor_Vehicle_Collisions_weather4.csv', sep=',') incidents[incidents.DATE.str.split('/').str.get(2) == '2016'] Explanation: Merge weather and NYPD MVC datasets End of explanation # Read dataset incidents = pd.read_csv('datasets/NYPD_Motor_Vehicle_Collisions_weather4.csv') # Filter 2016 incidents incidents = incidents[(incidents.DATE.str.split('/').str.get(2) != '2016') & (pd.notnull(incidents.Conditions)) & (incidents.Conditions != "Mist")] incidents # Distribution of incidents by weather conditions ys = [] xs = [] for c in incidents.Conditions.unique(): mask = (incidents.Conditions == c) filtered_incidents = incidents[mask] ys.append(len(filtered_incidents.index)) xs.append(c) df = pd.DataFrame(pd.Series(ys, index=xs, name="Incidents by weather conditions").sort_values()) df.plot(kind='barh', figsize=(8,8)) df Explanation: Make some nice data analysis End of explanation def plot_zip_weather(condition, data): ys = [] xs = [] for z in data['ZIP CODE'].unique(): mask = (data['ZIP CODE'] == z) filtered_incidents = data[mask] ys.append(len(filtered_incidents.index)) xs.append(z) df = pd.DataFrame(pd.Series(ys, index=xs, name="%s incidents by zip code" % condition).sort_values()) df.plot(kind='barh', figsize=(8,32)) def draw_kde(data): bbox = BoundingBox(north=data.LATITUDE.max()-0.055,\ west=data.LONGITUDE.min()+0.055,\ south=data.LATITUDE.min()-0.055,\ east=data.LONGITUDE.max()+0.055) coords = {'lat': data.LATITUDE.values.tolist(), 'lon': data.LONGITUDE.values.tolist()} glp.kde(coords, bw=5, cut_below=1e-4) glp.set_bbox(bbox) glp.inline() def plot_stuff(conditions, data): print "%s conditions" % conditions plot_zip_weather(conditions, data) draw_kde(data) snowy = incidents[incidents['Conditions'].str.contains('Snow')] rainy = incidents[incidents['Conditions'].str.contains('Rain')] clear = incidents[incidents['Conditions'].str.contains('Clear')] cloudy = incidents[(incidents['Conditions'].str.contains('Cloud')) | (incidents['Conditions'].str.contains('Overcast'))] haze = incidents[incidents['Conditions'].str.contains('Haze')] plot_stuff("Snowy", snowy) plot_stuff("Rainy", rainy) plot_stuff("Clear", clear) plot_stuff("Cloudy", cloudy) plot_stuff("Hazy", haze) Explanation: Now lets try to find out if there are any condition that causes more incidents than others. 
We do this by plotting out heatmaps to get an idea of the distributions in the NYC area End of explanation from collections import Counter ConditionIncidentCounter = Counter(incidents.Conditions.values) p_incident = {} for k,v in ConditionIncidentCounter.most_common(): p_incident[k] = v/len(incidents) p_incident # What is the probability of an incident based on the weather condition? # Normalize incidents based on the conditions. from collections import Counter ConditionIncidentCounter = Counter(incidents.Conditions.values) p_incident = {} for k,v in ConditionIncidentCounter.most_common(): p_incident[k] = v/len(incidents) p_incident # Do the same again but for individual areas of NYC p_incident_district = {} l = len(incidents) for district in incidents[pd.notnull(incidents.BOROUGH)].BOROUGH.unique(): filtered = incidents[incidents.BOROUGH == district] counter = Counter(filtered.Conditions.values) p_incident_district[district] = {} for k,v in counter.most_common(): p_incident_district[district][k] = v / len(list(counter.elements())); p_incident_district # Are there any areas in NYC that experience incidents based # on a condition unusually higher or lower compared to other areas? # Calculate the ratio of incidents based on the condition. def calcRatioForDistrict(districtCounter, overAllCounter, district): ys = [] xs = [] for con in incidents.Conditions.unique(): ys.append(districtCounter[con] / overAllCounter[con]) xs.append(con) return pd.Series(ys, index=xs) series = {} for b in incidents[pd.notnull(incidents.BOROUGH)].BOROUGH.unique(): series[b] = calcRatioForDistrict(p_incident_district[b], p_incident, b) df = pd.DataFrame(series) df.plot(kind="bar", subplots=True, figsize=(14,14),layout=(7,2), legend=False,sharey=True) Explanation: Finding the ratio between conditions that resulted in an incident. Borough level End of explanation # What is the probability of an incident based on the weather condition? # Normalize incidents based on the conditions. from collections import Counter borough = incidents[incidents.BOROUGH == 'MANHATTAN'] ConditionIncidentCounter = Counter(borough.Conditions.values) p_incident = {} for k,v in ConditionIncidentCounter.most_common(): p_incident[k] = v/len(borough) p_incident # Do the same again but for individual areas of NYC p_incident_borough_zip = {} l = len(borough) for z in borough[pd.notnull(incidents['ZIP CODE'])]['ZIP CODE'].unique(): filtered = borough[incidents['ZIP CODE'] == z] counter = Counter(filtered.Conditions.values) # z = str(z).split(".")[0] p_incident_borough_zip[z] = {} for k,v in counter.most_common(): p_incident_borough_zip[z][k] = v / len(list(counter.elements())); p_incident_borough_zip # Are there any areas in NYC that experience incidents based # on a condition unusually higher or lower compared to other areas? # Calculate the ratio of incidents based on the condition. 
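Note that the heading above mentions Brooklyn while the cell filters incidents.BOROUGH == 'MANHATTAN'. A hedged sketch of the same per-condition normalization restricted to Brooklyn only needs the filter changed; everything else mirrors the cell above.

# Sketch: same per-condition incident probabilities, restricted to Brooklyn.
borough = incidents[incidents.BOROUGH == 'BROOKLYN']
ConditionIncidentCounter = Counter(borough.Conditions.values)
p_incident = {k: v / len(borough) for k, v in ConditionIncidentCounter.most_common()}
p_incident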
def calcRatioForDistrict(districtCounter, overAllCounter, district): ys = [] xs = [] for con in incidents.Conditions.unique(): if (con in districtCounter): ys.append(districtCounter[con] / overAllCounter[con]) else: ys.append(0) xs.append(con) return pd.Series(ys, index=xs) series = {} for z in borough[pd.notnull(incidents['ZIP CODE'])]['ZIP CODE'].unique(): series[z] = calcRatioForDistrict(p_incident_borough_zip[z], p_incident, b) df = pd.DataFrame(series) df.plot(kind="bar", subplots=True, figsize=(14,100), layout=(50,2), legend=False, sharey=False) worst_day = incidents.DATE.value_counts().index[0] worst_day_count = incidents.DATE.value_counts()[0] incidents[incidents.DATE == worst_day] Explanation: Let's try to look at zip codes in Brooklyn only End of explanation
2,497
Given the following text description, write Python code to implement the functionality described below step by step Description: Instructions Work on a copy of this notebook Step1: Now, let's create and test a pipeline Step2: Let's first create a simple CUDA kernel within Bifrost. We will generate 1000 integers, feed them into Bifrost as a CUDA array, perform a kernel operation x * 3, then copy them back. Step6: Now, let's generate a full pipeline
Python Code: # @title Install C++ deps %%shell sudo apt-get -qq install exuberant-ctags libopenblas-dev software-properties-common build-essential # @title Install python deps %%shell pip install -q contextlib2 pint simplejson ctypesgen==1.0.2 # @title Build and Install Bifrost %%shell cd "${HOME}" if [ -d "${HOME}/bifrost_repo" ]; then echo "Already cloned." else git clone https://github.com/ledatelescope/bifrost bifrost_repo fi cd "${HOME}/bifrost_repo" git pull --all ./configure # Build and install: make -j all sudo make install export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH} Explanation: Instructions Work on a copy of this notebook: File > Save a copy in Drive (you will need a Google account). Alternatively, you can download the notebook using File > Download .ipynb, then upload it to Colab. Turn on the GPU with Edit->Notebook settings->Hardware accelerator->GPU Execute the following cell (click on it and press Ctrl+Enter) to install dependencies. Continue to the next section. Notes: * If your Colab Runtime gets reset (e.g., due to inactivity), repeat steps 3, 4. * After installation, if you want to activate/deactivate the GPU, you will need to reset the Runtime: Runtime > Factory reset runtime and repeat steps 2-4. End of explanation import os # Environment path doesn't propagate, so add it manually: if "/usr/local/lib" not in os.environ['LD_LIBRARY_PATH']: os.environ['LD_LIBRARY_PATH'] += ":/usr/local/lib" import bifrost as bf import numpy as np Explanation: Now, let's create and test a pipeline: End of explanation x = np.random.randint(256, size=1000) x_orig = x x = bf.asarray(x, 'cuda') y = bf.empty_like(x) x.flags['WRITEABLE'] = False x.bf.immutable = True for _ in range(3): bf.map("y = x * 3", {'x': x, 'y': y}) x = x.copy('system') y = y.copy('system') if isinstance(x_orig, bf.ndarray): x_orig = x np.testing.assert_equal(y, x_orig * 3) Explanation: Let's first create a simple CUDA kernel within Bifrost. We will generate 1000 integers, feed them into Bifrost as a CUDA array, perform a kernel operation x * 3, then copy them back. End of explanation from bifrost.block import Pipeline, NumpyBlock, NumpySourceBlock def generate_different_arrays(): Yield four different groups of two arrays dtypes = ['float32', 'float64', 'complex64', 'int8'] shapes = [(4,), (4, 5), (4, 5, 6), (2,) * 8] for array_index in range(4): yield np.ones( shape=shapes[array_index], dtype=dtypes[array_index]) yield 2 * np.ones( shape=shapes[array_index], dtype=dtypes[array_index]) def switch_types(array): Return two copies of the array, one with a different type return np.copy(array), np.copy(array).astype(np.complex128) occurences = 0 def compare_arrays(array1, array2): Make sure that all arrays coming in are equal global occurences occurences += 1 np.testing.assert_almost_equal(array1, array2) blocks = [ (NumpySourceBlock(generate_different_arrays), {'out_1': 0}), (NumpyBlock(switch_types, outputs=2), {'in_1': 0, 'out_1': 1, 'out_2': 2}), (NumpyBlock(np.fft.fft), {'in_1': 2, 'out_1': 3}), (NumpyBlock(np.fft.ifft), {'in_1': 3, 'out_1': 4}), (NumpyBlock(compare_arrays, inputs=2, outputs=0), {'in_1': 1, 'in_2': 4})] Pipeline(blocks).main() # The function `compare_arrays` should be hit 8 times: assert occurences == 8 Explanation: Now, let's generate a full pipeline: End of explanation
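For readers who want to poke at the compiled-kernel path a little more before moving on, here is a minimal variation on the earlier x * 3 example. It is a sketch under the assumption that the same Bifrost build and a CUDA-capable runtime are available, and that bf.map accepts a simple arithmetic expression like the one used above; nothing here is part of the pipeline test itself.

import numpy as np
import bifrost as bf

# Host data, moved to the GPU the same way as in the x * 3 example.
a_host = np.random.rand(1000).astype(np.float32)
a = bf.asarray(a_host, 'cuda')
b = bf.empty_like(a)

# Elementwise kernel: b = a * a + 1, compiled and run on the device.
bf.map("b = a * a + 1", {'a': a, 'b': b})

# Copy the result back to system memory and compare against NumPy.
b_host = b.copy('system')
np.testing.assert_allclose(np.array(b_host), a_host * a_host + 1, rtol=1e-5)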
2,498
Given the following text description, write Python code to implement the functionality described below step by step
Description: K-Nearest Neighbors (KNN) by Chiyuan Zhang and S&ouml;ren Sonnenburg
This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.org/wiki/Cover_tree">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href="http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM">Multiclass Support Vector Machines</a> is shown.
Step1: Let us plot the first five examples of the train data (first row) and test data (second row).
Step2: Then we import shogun components and convert the data to shogun objects
Step3: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
Step4: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time
Step5: We have the prediction for each of the 13 k's now and can quickly compute the accuracies
Step6: So k=3 seems to have been the optimal choice. Accelerating KNN Obviously applying KNN is very costly
Step7: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN
Step8: Evaluate KNN with and without Cover Tree. This takes a few seconds
Step9: Generate plots with the data collected in the evaluation
Step10: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results. Comparison to Multiclass Support Vector Machines In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to cases with a small number of classes. So for reference let us compare how a standard multiclass SVM performs w.r.t. KNN on the USPS data set from above. Let us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).
Step11: Let's apply the SVM to the same test data set to compare results
Step12: Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
Python Code: import numpy as np import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from scipy.io import loadmat, savemat from numpy import random from os import path mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat')) Xall = mat['data'] Yall = np.array(mat['label'].squeeze(), dtype=np.double) # map from 1..10 to 0..9, since shogun # requires multiclass labels to be # 0, 1, ..., K-1 Yall = Yall - 1 random.seed(0) subset = random.permutation(len(Yall)) Xtrain = Xall[:, subset[:5000]] Ytrain = Yall[subset[:5000]] Xtest = Xall[:, subset[5000:6000]] Ytest = Yall[subset[5000:6000]] Nsplit = 2 all_ks = range(1, 21) print Xall.shape print Xtrain.shape print Xtest.shape Explanation: K-Nearest Neighbors (KNN) by Chiyuan Zhang and S&ouml;ren Sonnenburg This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.org/wiki/Cover_tree">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href="http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM">Multiclass Support Vector Machines</a> is shown. The basics The training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias. In SHOGUN, you can use CKNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard CEuclideanDistance, but in general, any subclass of CDistance could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K. First we load and init data split: End of explanation %matplotlib inline import pylab as P def plot_example(dat, lab): for i in xrange(5): ax=P.subplot(1,5,i+1) P.title(int(lab[i])) ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest') ax.set_xticks([]) ax.set_yticks([]) _=P.figure(figsize=(17,6)) P.gray() plot_example(Xtrain, Ytrain) _=P.figure(figsize=(17,6)) P.gray() plot_example(Xtest, Ytest) Explanation: Let us plot the first five examples of the train data (first row) and test data (second row). 
End of explanation
from shogun import MulticlassLabels, RealFeatures
from shogun import KNN, EuclideanDistance

labels = MulticlassLabels(Ytrain)
feats = RealFeatures(Xtrain)
k=3
dist = EuclideanDistance()
knn = KNN(k, dist, labels)

labels_test = MulticlassLabels(Ytest)
feats_test = RealFeatures(Xtest)
knn.train(feats)
pred = knn.apply_multiclass(feats_test)
print "Predictions", pred[:5]
print "Ground Truth", Ytest[:5]

from shogun import MulticlassAccuracy
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(pred, labels_test)
print "Accuracy = %2.2f%%" % (100*accuracy)
Explanation: Then we import shogun components and convert the data to shogun objects:
End of explanation
idx=np.where(pred != Ytest)[0]

Xbad=Xtest[:,idx]
Ybad=Ytest[idx]

_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
Explanation: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
End of explanation
knn.set_k(13)
multiple_k=knn.classify_for_multiple_k()
print multiple_k.shape
Explanation: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time: When we have to determine the $K\geq k$ nearest neighbors we will know the nearest neighbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step:
End of explanation
for k in xrange(13):
    print "Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest))
Explanation: We have the prediction for each of the 13 k's now and can quickly compute the accuracies:
End of explanation
from shogun import Time, KNN_COVER_TREE, KNN_BRUTE

start = Time.get_curtime()
knn.set_k(3)
knn.set_knn_solver_type(KNN_BRUTE)
pred = knn.apply_multiclass(feats_test)
print "Standard KNN took %2.1fs" % (Time.get_curtime() - start)

start = Time.get_curtime()
knn.set_k(3)
knn.set_knn_solver_type(KNN_COVER_TREE)
pred = knn.apply_multiclass(feats_test)
print "Covertree KNN took %2.1fs" % (Time.get_curtime() - start)
Explanation: So k=3 seems to have been the optimal choice. Accelerating KNN Obviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor searching process in KNN. Just call set_knn_solver_type(KNN_COVER_TREE) on the KNN machine to enable this feature (and KNN_BRUTE to switch back to the brute-force search), as the code above does. We also show the prediction time comparison with and without Cover Tree in this tutorial. So let's just have a comparison utilizing the data above:
End of explanation
def evaluate(labels, feats, use_cover_tree=False):
    from shogun import MulticlassAccuracy, CrossValidationSplitting
    import time
    split = CrossValidationSplitting(labels, Nsplit)
    split.build_subsets()
    accuracy = np.zeros((Nsplit, len(all_ks)))
    acc_train = np.zeros(accuracy.shape)
    time_test = np.zeros(accuracy.shape)
    for i in range(Nsplit):
        idx_train = split.generate_subset_inverse(i)
        idx_test = split.generate_subset_indices(i)
        for j, k in enumerate(all_ks):
            #print "Round %d for k=%d..." % (i, k)
            feats.add_subset(idx_train)
            labels.add_subset(idx_train)

            dist = EuclideanDistance(feats, feats)
            knn = KNN(k, dist, labels)
            knn.set_store_model_features(True)
            if use_cover_tree:
                knn.set_knn_solver_type(KNN_COVER_TREE)
            else:
                knn.set_knn_solver_type(KNN_BRUTE)
            knn.train()

            evaluator = MulticlassAccuracy()
            pred = knn.apply_multiclass()
            acc_train[i, j] = evaluator.evaluate(pred, labels)

            feats.remove_subset()
            labels.remove_subset()
            feats.add_subset(idx_test)
            labels.add_subset(idx_test)

            t_start = time.clock()
            pred = knn.apply_multiclass(feats)
            time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels()

            accuracy[i, j] = evaluator.evaluate(pred, labels)

            feats.remove_subset()
            labels.remove_subset()
    return {'eout': accuracy, 'ein': acc_train, 'time': time_test}
Explanation: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN:
End of explanation
labels = MulticlassLabels(Ytest)
feats = RealFeatures(Xtest)
print("Evaluating KNN...")
wo_ct = evaluate(labels, feats, use_cover_tree=False)
wi_ct = evaluate(labels, feats, use_cover_tree=True)
print("Done!")
Explanation: Evaluate KNN with and without Cover Tree. This takes a few seconds:
End of explanation
import matplotlib

fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')
P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')
P.legend(["Test Accuracy", "Training Accuracy"])
P.xlabel('K')
P.ylabel('Accuracy')
P.title('KNN Accuracy')
P.tight_layout()

fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')
P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')
P.xlabel("K")
P.ylabel("time")
P.title('KNN time')
P.legend(["Plain KNN", "CoverTree KNN"], loc='center right')
P.tight_layout()
Explanation: Generate plots with the data collected in the evaluation:
End of explanation
from shogun import GaussianKernel, GMNPSVM

width=80
C=1

gk=GaussianKernel()
gk.set_width(width)

svm=GMNPSVM(C, gk, labels)
_=svm.train(feats)
Explanation: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results. Comparison to Multiclass Support Vector Machines In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to cases with a small number of classes. So for reference let us compare how a standard multiclass SVM performs w.r.t. KNN on the USPS data set from above.
Let us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance). End of explanation out=svm.apply(feats_test) evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(out, labels_test) print "Accuracy = %2.2f%%" % (100*accuracy) Explanation: Let's apply the SVM to the same test data set to compare results: End of explanation Xrem=Xall[:,subset[6000:]] Yrem=Yall[subset[6000:]] feats_rem=RealFeatures(Xrem) labels_rem=MulticlassLabels(Yrem) out=svm.apply(feats_rem) evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(out, labels_rem) print "Accuracy = %2.2f%%" % (100*accuracy) idx=np.where(out.get_labels() != Yrem)[0] Xbad=Xrem[:,idx] Ybad=Yrem[idx] _=P.figure(figsize=(17,6)) P.gray() plot_example(Xbad, Ybad) Explanation: Since the SVM performs way better on this task - let's apply it to all data we did not use in training. End of explanation
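Before moving on, a quick way to see which digits the classifiers actually mix up is a confusion matrix; the sketch below uses plain NumPy so it does not depend on any additional Shogun functionality. It assumes pred still holds the k=3 KNN predictions for feats_test, that out holds the last SVM predictions (for feats_rem), and that get_labels() returns a NumPy array, as it does for the SVM output above; confusion_matrix is just a hypothetical helper defined here.

import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=10):
    # Rows are true digits, columns are predicted digits.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true.astype(int), y_pred.astype(int)):
        cm[t, p] += 1
    return cm

cm_knn = confusion_matrix(Ytest, pred.get_labels())
cm_svm = confusion_matrix(Yrem, out.get_labels())
print(cm_knn)
print(cm_svm)
# Large off-diagonal entries mark digit pairs that are frequently confused.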
2,499
Given the following text description, write Python code to implement the functionality described below step by step
Description: Here, we construct a simple neural network transform with the ability to add layers and change the optimizer while training. Note that this code is largely identical to the Keras example, and simply wrapped inside the ContinuousTransform class.
Step1: The class allows us to replace all layers after a given index in the model. In this example, we replace the last layer (a single softmax activation) with a series of 3 layers, followed by a final softmax activation.
Step2: We can also change the optimizer being used. Here we adjust the learning rate and momentum, and replace the previous optimizer.
Step3: Lastly, we can directly set any parameters we've exposed in the class. In this case, we have the number of epochs and batch size, along with a verbose output parameter.
Python Code: from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD

class SimpleNN(ContinuousTransform):
    def init_func(self,target_df,X_train_df,y_train_df,X_test_df,y_test_df):
        model=Sequential()
        model.add(Dense(64, input_dim=784, init='uniform'))
        model.add(Activation('tanh'))
        model.add(Dropout(0.5))
        model.add(Dense(64,init='uniform'))
        model.add(Activation('softmax'))

        self.opt = SGD(lr=0.1,decay=1e-6,momentum=0.9,nesterov=True)
        model.compile(loss='categorical_crossentropy', optimizer=self.opt)

        self.model = model
        self.target = target_df
        self.nb_epoch = 2
        self.batch_size = 16
        self.verbose = 0

        target_df.set_matrix(0)

    def continuous_func(self,target_df,X_train_df,y_train_df,X_test_df,y_test_df):
        self.model.fit(X_train_df.get_matrix(), y_train_df.get_matrix(), nb_epoch=self.nb_epoch, batch_size=self.batch_size, verbose=self.verbose)
        score = self.model.evaluate(X_test_df.get_matrix(), y_test_df.get_matrix(), batch_size=self.batch_size, verbose=self.verbose)
        target_df.set_matrix(np.array([[score]]))

    def add_layers(self,layers,idx):
        self.target.stop()
        while len(self.model.layers) > idx:
            self.model.layers.pop()
        for l in layers:
            self.model.add(l)
        self.model.compile(loss='categorical_crossentropy', optimizer=self.opt)
        self.target.go()

    def set_optimizer(self,opt):
        self.target.stop()
        self.opt = opt
        self.model.compile(loss='categorical_crossentropy', optimizer=self.opt)
        self.target.go()

df["output/","score/"] = SimpleNN(df["data/train/","input/raw/"], df["data/train/","input/label/"], df["data/test/","input/raw/"], df["data/test/","input/label/"])
Explanation: Here, we construct a simple neural network transform with the ability to add layers and change the optimizer while training. Note that this code is largely identical to the Keras example, and simply wrapped inside the ContinuousTransform class.
End of explanation
new_layers = [
    Activation('tanh'),
    Dropout(0.5),
    Dense(10,init='uniform'),
    Activation('softmax')
]
df["output/","score/"].T.add_layers(new_layers,4)
Explanation: The class allows us to replace all layers after a given index in the model. In this example, we replace the last layer (a single softmax activation) with a series of 3 layers, followed by a final softmax activation.
End of explanation
new_opt = SGD(lr=0.01,decay=1e-6,momentum=0.8,nesterov=True)
df["output/","score/"].T.set_optimizer(new_opt)
Explanation: We can also change the optimizer being used. Here we adjust the learning rate and momentum, and replace the previous optimizer.
End of explanation
df["output/","score/"].T.nb_epoch = 4
df["output/","score/"].T.batch_size = 32
df["output/","score/"].T.verbose = 1
Explanation: Lastly, we can directly set any parameters we've exposed in the class. In this case, we have the number of epochs and batch size, along with a verbose output parameter.
End of explanation
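As a closing aside, the wrapper above leans on a plain Keras pattern: pop the layers past a given index, append new ones, and recompile. The sketch below shows that pattern on its own, using the same old-style Keras API as the code above; it is illustrative only and leaves out the stop()/go() bookkeeping that ContinuousTransform needs around any structural change.

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD

# Build the same initial network as SimpleNN.init_func.
model = Sequential()
model.add(Dense(64, input_dim=784, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('softmax'))
opt = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=opt)

# Drop everything after layer index 4, append a new stack, then recompile.
# This mirrors what SimpleNN.add_layers does between stop() and go().
while len(model.layers) > 4:
    model.layers.pop()
for layer in [Activation('tanh'), Dropout(0.5), Dense(10, init='uniform'), Activation('softmax')]:
    model.add(layer)
model.compile(loss='categorical_crossentropy', optimizer=opt)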